Colour as a backup for scent in the presence of olfactory noise: testing the efficacy backup hypothesis using bumblebees (Bombus terrestris)
The majority of floral displays simultaneously broadcast signals from multiple sensory modalities, but these multimodal displays come at both a metabolic cost and an increased conspicuousness to floral antagonists. Why then do plants invest in these costly multimodal displays? The efficacy backup hypothesis suggests that individual signal components act as a backup for others in the presence of environmental variability. Here, we test the efficacy backup hypothesis by investigating the ability of bumblebees to differentiate between sets of artificial flowers in the presence of either chemical interference or high wind speeds, both of which have the potential to impede the transmission of olfactory signals. We found that both chemical interference and high wind speeds negatively affected forager learning times, but these effects were mitigated in the presence of a visual signal component. Our results suggest that visual signals can act as a backup for olfactory signals in the presence of chemical interference and high wind speeds, and support the efficacy backup hypothesis as an explanation for the evolution of multimodal floral displays.
Background
Floral displays often consist of multiple signal components which transmit information simultaneously across multiple sensory modalities. These displays are complex in that they broadcast visual, olfactory, tactile, gustatory, electrostatic and even acoustic information [1][2][3]. We argue here that bumblebees have the capacity to use visual information as a backup for olfactory information in the presence of environmental noise. Previous tests of the efficacy backup hypothesis have shown bumblebees to use scents to back up colour signals in low-light conditions [17], setting a precedent for sensory backup across modalities. This, coupled with both the susceptibility of olfactory signal transfer to noise [24] and the fact that scent rather than colour has been shown to be the preferred discriminating factor in honeybees [39], suggests that this sensory backup could take place. In this study, we test the converse of the scenario examined by Kaczorowski et al. [17]: we suggest that under well-lit conditions, visual signals can play an efficacy backup role for olfactory signals when the latter are obscured by noise. We used essential oils from other plant species to simulate chemical interference (hereafter the chemical interference test), and an electric fan to simulate high wind speeds (hereafter the wind-simulation test). We recorded learning times, the number of successful drinks and the number of correct choices a forager bee made after landing on an artificial flower. We hypothesize that both chemical interference and interference from high wind speeds will detrimentally affect bee foraging efficiency, and that these effects will be mitigated by the introduction of an additional visual signal component acting as a backup.
Flight arena and bumblebee colony conditions
All three experiment types were carried out in wooden-framed 72 × 104 × 30 cm flight arenas topped with UV-transparent Perspex, with the floor covered in Advance Green gaffer tape (Stage Electric, UK). Flight arenas were connected to the plastic nesting box of flower-naive Bombus terrestris dalmatinus colonies (Biobest, Sustainable Crop Management, Belgium) via a transparent gated tube which could be manually manipulated to regulate which bees, and how many, could enter or leave the flight arena. Forty-six Sylvania Activa 172 Professional 36 W fluorescent tubes (Germany) on a 12 L : 12 D regime were used to simulate natural illumination. Bees were fed 30% sucrose solution daily after experiments had taken place, and pollen was added directly to the colony three days a week. A total of ten colonies were used over the three experiment types (summarized in the electronic supplementary material, tables S1 and S2). In the analysis, we assumed that there was sufficient behavioural variation between individuals to justify ignoring colony effects, as discussed in Thomson & Chittka [40]. Foraging individuals were marked on the thorax with non-toxic paint so that they could be identified during the experiments.
Artificial flowers
Ten white Perspex discs (80 mm diameter, 3 mm width) were used as foraging stimuli (artificial flowers) during each experiment. Each disc had 43 holes (2 mm diameter) in a hexagonal pattern (figure 1), with a transparent plastic cover placed on top of each disc. Each cover had 2 mm holes corresponding to those on the disc and an upturned 0.5 ml Eppendorf container lid glued to the centre of the disc surface to be used to contain a sucrose reward or water. The plastic covers were used in order for the discs to be cleaned thoroughly after experiments without compromising the Eppendorf feeding wells. At the beginning of each experiment, self-adhesive film was placed on the underside of each disc so the holes could contain small amounts of liquid. At the end of each day, this film was removed and the discs were soaked overnight in a detergent solution to remove volatiles and glue.
Within treatments which incorporated scented flowers, five of the ten artificial flowers had a lavender oil solution (1 : 10 mix of lavender essential oil : mineral oil) added by pipette in a hexagonal arrangement (2.5 µl of oil added to 6 of the 43 holes, figure 1); the remaining five artificial flowers had a peppermint oil solution (1 : 10 mix of peppermint essential oil : mineral oil) added by pipette in the same arrangement and amount. Lavender and peppermint oils were supplied by Amphora Aromatics, Bristol, UK. Within treatments which incorporated visual cues, five of the ten artificial flowers had a yellow-coloured disc (Hue: 58; Sat: 80; Lum: 100) and the other five a blue-coloured disc (Hue: 219; Sat: 72; Lum: 100) placed underneath the transparent plastic film on top of each artificial flower. Yellow and blue were used as bees are known to perceive and differentiate between these 'dissimilar' colours [41,42]. Each of these coloured discs was covered on both sides in self-adhesive film for the easy removal of floral volatiles at the end of each experiment. Scented and visual sets were matched so that only two different artificial flower groups were presented to each forager; for example, one forager saw only blue lavender-scented flowers and yellow peppermint-scented flowers, while another saw only yellow lavender-scented flowers and blue peppermint-scented flowers.
Flight arena preparation
In each experiment, the flight arena was cleared of bees and the gated tube connected to the nest was blocked. Ten artificial flowers were placed in the flight arena, five from each scent group (lavender- or peppermint-scented flowers). A 30% sucrose solution (20 µl) reward was placed in the central well of each disc in one group, and 20 µl of distilled water was added to the central well of each disc in the other group as an unrewarding stimulus. Each disc was placed on top of an upturned plastic container (6 cm height, 150 ml, Sterilin UK), and the discs were distributed randomly throughout the flight arena.
Experiment 1: chemical interference experiment
Foragers were randomly allocated to one of four groups (18 bees per group, summarized in the electronic supplementary material, table S1): (A) no chemical interference and unimodal scented flowers (control treatment), used to gain a baseline level of foraging efficiency when using olfactory cues in an interference-free environment; (B) chemical interference and unimodal scented flowers, used to see the effects of the introduced interference; (C) chemical interference and bimodal scented and visual flowers, used to see whether effects caused by interference (as demonstrated by group B) could be mitigated by the addition of a visual cue; and (D) chemical interference and unimodal visual flowers, used to demonstrate whether foraging efficiency is affected when purely visual artificial flowers are presented alongside chemical interference. The three groups with chemical interference (groups B, C and D) had four sets of two upturned Eppendorf lids distributed throughout the flight arena, each set containing 200 µl of one of four essential oils; this interference was used to reduce the reliability of recognition cues. The scents used were essential oils from geranium Pelargonium graveolens, bog myrtle Myrica gale, juniper berry Juniperus communis and Roman camomile Anthemis nobilis (from Amphora Aromatics, Bristol, UK), which were applied using a pipette. Group A had no additional scents in the flight arena. This wide range of essential oils, rather than comparatively simpler single odorants, was used as both floral recognition cues and chemical interference to better simulate the complexity of odours which pollinators encounter in the wild.
Individual marked bees which were naive to both visual and olfactory stimuli, but had experienced drinking from Eppendorf lid wells, were then allowed entry into the flight arena. The sequence of lands on rewarding or non-rewarding artificial flowers was recorded as well as whether the forager drank after landing or abandoned the flower before drinking. Visits to the same flower without visiting another flower in between were not recorded. Flowers which had been visited had their sucrose or water refilled and after each foraging bout the artificial flowers were removed and wiped with ethanol to remove visual cues and foraging pheromones [43]. After this, the artificial flowers were placed back in the flight arena in a different arrangement to avoid foragers learning the location of rewarding artificial flowers.
Behavioural metrics and learning criterion
Foragers were assumed to have satisfactorily learned to discriminate between the two flower groups when eight of the last ten drinks were from rewarding flowers, not counting visits which did not lead to drinks. Only visits which led to drinks were included in this count as it was clearer in these instances that bees made correct choices (positive drinks) or incorrect choices (unrewarding drinks); this was less clear in visits which did not lead to drinks. However, as fine-tuned discrimination of flower odours is known to occur post-landing, and as these visits have implications in terms of benefits to the plant [44,45], we investigated post-landing decisions in our 'number of correct choices made after landing' comparisons. Forty flower visits were recorded before wiping the artificial flowers with ethanol and focusing on another forager. If the bee had not reached the learning criterion within 40 flower visits, a learning time of 40 was used; this occurred with four bees out of the 72 tested.
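To make the criterion concrete, the following is a minimal Python sketch of how it could be computed from a per-bee visit log. The function names, the tuple representation of visits and the handling of the 40-visit cap are our illustrative assumptions, not code from the study.

```python
def reached_criterion(drink_outcomes, window=10, threshold=8):
    """Check the learning criterion: at least `threshold` of the last
    `window` drinks came from rewarding flowers. `drink_outcomes` holds
    one boolean per drink (not per landing), True if rewarding."""
    if len(drink_outcomes) < window:
        return False
    return sum(drink_outcomes[-window:]) >= threshold


def learning_time(visit_log, cap=40):
    """Return the number of flower visits taken to reach the criterion.

    `visit_log` is a list of (drank, rewarding) tuples, one per visit.
    Visits without a drink count toward the visit total but not toward
    the 8-of-10 window. Bees that never reach the criterion within
    `cap` visits are assigned a learning time of `cap`, as in
    experiment 1.
    """
    drinks = []
    for n_visits, (drank, rewarding) in enumerate(visit_log, start=1):
        if drank:
            drinks.append(rewarding)
            if reached_criterion(drinks):
                return n_visits
        if n_visits >= cap:
            break
    return cap
```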
Experiment 2: wind simulation experiment
A new disc arrangement (figure 2), which differs from the arrangement in experiment 1, was chosen to allow air to pass over all artificial flowers. This new arrangement had the potential to affect the foraging behaviour of the bees; therefore, results from experiments 1 and 2 were not directly compared. One flower group received a yellow-coloured disc and the other a blue-coloured disc (the same artificial flowers as in the previous experiment).
At the beginning of the experiment, the fan was turned to its highest setting, with wind speed measured at mean ± s.d. = 1.07 ± 0.86 m s⁻¹ (using a Kestrel 4500 pocket weather tracker), which passes the threshold at which odour source finding is compromised in tsetse flies Glossina pallidipes [35]. Individual marked forager bumblebees were then allowed entry into the flight arena, following the same procedure as the first experiment and an identical learning criterion. Forty flower visits were recorded before wiping the artificial flowers with ethanol and focusing on another forager. If a bee had not reached the learning criterion by the fortieth landing, the experiment continued until it met the criterion (this occurred with six bees out of the 72 tested). Trials which incorporated chemical interference were not performed on the same day as those without chemical interference in order to prevent odour effects on later experiments. For the same reason, all flight arena slats were opened and the flight arena was cleaned after trials which incorporated chemical interference.
Scent preference tests
Scent preference tests were also undertaken to investigate whether naive bees had an innate preference for peppermint or lavender. In these preference tests, naive forager bees were presented with 10 artificial flowers (the colourless artificial flowers shown in figure 1), five with diluted peppermint oil and five with diluted lavender oil (1 : 10 mix of essential oil : mineral oil), and the first 20 flower visits were recorded. Flowers which had been visited by foragers had their sucrose or water refilled during experiments. Between foraging bouts, artificial flowers were removed and wiped with ethanol.
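The paper does not specify how these preference data were analysed; a simple two-sided binomial test against the 50 : 50 split expected under no innate bias is one plausible approach. The sketch below, including its example data, is a hypothetical illustration rather than the study's own analysis.

```python
from scipy.stats import binomtest

def innate_preference_test(first_visits, scent="lavender"):
    """Two-sided binomial test of whether one bee's first flower visits
    deviate from the 50:50 split expected with no innate preference.

    `first_visits` is the list of scent labels for the bee's first
    20 recorded visits."""
    k = sum(v == scent for v in first_visits)
    result = binomtest(k, n=len(first_visits), p=0.5)
    return k, result.pvalue

# Example: 13 of 20 visits to lavender is consistent with no preference
visits = ["lavender"] * 13 + ["peppermint"] * 7
print(innate_preference_test(visits))  # (13, p ≈ 0.26)
```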
Analysis
The number of flower visits taken to reach the learning criterion, the number of correct choices after landing and the number of drinks from rewarding flowers were compared between the treatments and the control using Kruskal-Wallis tests, as the data did not meet the requirements for parametric testing; the exception was the total number of drinks from rewarding flowers, for which an analysis of variance was used. Post hoc comparisons were conducted using Dunn's tests with Holm-Bonferroni corrections to avoid family-wise errors, and a pairwise t-test was used for the previously mentioned exception. In order to see whether the rewarding scent or colour used had an effect on learning time, we compared the number of flower visits taken to reach the learning criterion between these groups.
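As a hedged illustration of this pipeline, the sketch below runs a Kruskal-Wallis omnibus test followed by Dunn's post hoc comparisons with Holm correction, using the scikit-posthocs package. The synthetic data, column names and group labels are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal, f_oneway
import scikit_posthocs as sp

# Placeholder data: 18 bees per treatment group (A-D), one learning
# time per bee; real values would come from the experiment.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(list("ABCD"), 18),
    "learning_time": rng.integers(5, 41, size=72),
})

samples = [g["learning_time"].to_numpy() for _, g in df.groupby("group")]

# Omnibus non-parametric comparison across the four groups
h_stat, p_kw = kruskal(*samples)

# Dunn's pairwise post hoc tests with Holm-Bonferroni correction
pairwise = sp.posthoc_dunn(df, val_col="learning_time",
                           group_col="group", p_adjust="holm")

# For the one metric treated parametrically (total rewarding drinks),
# an analysis of variance would replace the Kruskal-Wallis step:
f_stat, p_anova = f_oneway(*samples)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
print(pairwise.round(4))
```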
Total number of drinks from rewarding flowers
The total number of drinks from rewarding flowers was different between the treatments and the control (analysis of variance). Post hoc comparisons revealed that foragers presented with chemical interference and unimodal scented flowers drank from fewer rewarding flowers compared with those presented with chemical interference and bimodal scented and coloured flowers (p < 0.0001). Foragers presented with chemical interference and unimodal coloured flowers also drank from fewer rewarding flowers compared with those presented with chemical interference and bimodal scented and coloured flowers (p = 0.014).
Correct choices made after landing
The number of correct choices made after landing was different between the treatments and control (Kruskal-Wallis test: χ²₃ = 27.06, p < 0.001, figure 3c). Post hoc comparisons revealed that foragers presented with both chemical interference and unimodal scented flowers and those presented with chemical interference and unimodal coloured flowers made fewer correct choices after landing compared with foragers presented with no chemical interference and unimodal scented flowers (p = 0.001 and p = 0.032, respectively) and those with chemical interference and bimodal scented and coloured flowers (p < 0.0001 and p = 0.015, respectively).
Total number of drinks from rewarding flowers
The total number of drinks from rewarding flowers was different between the treatments and control (χ²₃ = 30.6, p < 0.001, figure 4b).
Post hoc comparisons demonstrated that forager bees exposed to wind simulation and unimodal scented flowers made fewer rewarding drinks compared with foragers presented with no wind simulation and unimodal scented flowers (p = 0.001), those presented with wind simulation and bimodal scented and coloured flowers (p < 0.001) and those with wind simulation and unimodal coloured flowers (p < 0.001).
Correct choices made after landing
The number of correct choices made after landing was different between the treatments and control (χ²₃ = 26.14, p < 0.001, figure 4c). Post hoc comparisons revealed that foragers presented with wind and unimodal scented flowers made fewer correct choices after landing compared with foragers presented with no wind simulation and unimodal scented flowers (p = 0.002), those presented with wind and unimodal coloured flowers (p = 0.003) and those with wind and bimodal scented and coloured flowers (p < 0.0001).
Discussion
We examined bumblebee foraging behaviour in the presence of one of two types of environmental noise which affect the transfer of volatiles: chemical interference and high wind speeds. Our results clearly show that both chemical interference and high wind speeds have detrimental effects on the foraging behaviour of B. terrestris on unimodal scented flowers, but that the inclusion of an additional visual signal component in the floral display negated these effects. This suggests that in scenarios where olfactory communication is compromised, visual signal components can act as a backup, supporting the efficacy backup hypothesis as an explanation for the evolution and maintenance of multimodal floral signals. Both chemical interference and high wind speeds caused foragers to take more flower visits to reach the learning criterion with unimodal scented flowers compared with foragers presented with unimodal scented flowers without either interference type (figures 3a and 4b). These results correspond with the claim by Wilson et al. [24] that olfactory communication between plants and pollinators is vulnerable to chemical interference and windy conditions. The total number of rewarding drinks as well as the number of correct choices made after landing was also lower in foragers presented with either of the two interference types and unimodal scented flowers compared with their counterparts which were not presented with interference (figures 3 and 4). These results complement a previous study [23] in which background odours affected the ability of the hawkmoth Manduca sexta to correctly navigate an odour plume to its source. The detrimental effects on learning time, the total number of rewarding drinks and the number of correct choices made after landing were mitigated in foragers presented with bimodal floral displays and either interference type. The increase in post-landing accuracy also complements a previous study [17] in which the converse possibility was explored and similar accuracy benefits were found when olfactory signals were used as a backup for visual signals in low-light conditions.
These detrimental effects on foraging efficiency are likely to have negative consequences for both plant and pollinator. Reductions in the total number of rewarding drinks would limit the net energy gained by individual bees during foraging, and subsequently by the colony, negatively affecting survival, growth and reproductive output [46]. A decrease in correct choices made after landing may also be detrimental to plant fitness via increased clogging of stigmas by foreign pollen or pollen loss to heterospecific flowers [47,48]. Our findings suggest that in windy habitats with crowded, scented vegetation (e.g. common in Mediterranean habitats), coloured flowers with species-specific fragrances would enhance pollinator constancy and foraging efficiency [49,50]. This implies that visual stimuli can act as a backup for olfactory stimuli and that multimodal displays are more reliable stimuli in the presence of olfactory interference, giving further support to the efficacy backup hypothesis. These results mirror a previous study [17] in which olfactory stimuli were found to act as a backup for visual signals in flowers at different levels of illumination. In the light of that study [17], our own findings, and the observation of honeybees using scent as the primary discriminating factor over colour [39], it is possible that the visual and olfactory modalities back each other up, with bees using whichever stimuli are most conspicuous at the time. This also complements a previous study [51] in which bumblebees used spatial arrangements of either visual or olfactory stimuli to reduce nectar discovery times.
Adverse effects on forager learning times also suggest that both chemical and wind interference can affect associative learning in B. terrestris. This effect on learning may reduce the level of flower constancy a forager reaches, lessening benefits to the plant, and potentially increase the costs associated with switching plant species for the pollinator [52,53]. It also suggests that flower constancy resulting from multimodal learning is more beneficial to flowers than unimodal learning in the presence of environmental variability. These findings are particularly relevant to nocturnal or crepuscular flowers and pollinators, such as hawkmoths, which rely heavily on olfactory cues to identify and discriminate between flowers, putting them at particular risk from noise which compromises olfactory communication [54][55][56][57]. Although nocturnal, these foragers can still incorporate visual display components into their search behaviour, as seen in the hawkmoth Deilephila elpenor, which can use colour vision to discriminate between coloured stimuli in light conditions equivalent to dim starlight [58].
Curiously, chemical interference also impeded foraging on purely visual artificial flowers (figure 3). This could be an indication of multisensory integration, whereby information from one sensory modality influences processing in another modality [59], which has been discussed in relation to bee vision and olfaction [60]. It also implies that chemical interference may be detrimental to foraging efficiency even when visual cues are available. Alternatively, this impeded foraging may be caused by the essential oils overstimulating odour receptor cells and disorienting the bees, as occurs with other volatiles such as naphthalene [61]. If this hindered foraging efficiency is caused by essential-oil-induced disorientation, our findings suggest that multimodal stimuli may mitigate the effects of insect repellents.
Considering these results, chemical interference has the potential to affect plant-animal interactions in multiple ways. Flowering plants and pollinators inhabiting environments with high plant species richness, such as tropical forests [62] or Mediterranean climatic regions [63], would be particularly susceptible to disruption by chemical interference. Chemical interference could also affect pollinator behaviour in environments with lower plant species richness if the perceptual systems of a pollinator register a similarity between odours [64]. Depending on the perceptual similarity of available odours, their concentration, and the variation of odours experienced during learning, this phenomenon (referred to as olfactory generalization) could occur in areas of lower plant species richness [65].
Pollinators may mitigate these effects of olfactory noise through perceptual filtering, whereby only particular odours present in complex odour blends are detected by the antennae and only a select few of these detected volatiles elicit behavioural responses [66][67][68][69]. This perceptual filtering could assist in the location of particular flowers in the presence of multiple VOCs. On the other hand, olfactory noise could be beneficial to plants that are at risk of attracting herbivores through their volatiles [8], or to food-deceptive Batesian mimic flowers [70], which could potentially receive more visits from pollinators if the olfactory signals of rewarding flowers have compromised efficacy during learning or foraging.
These effects of chemical interference also relate to atmospheric pollution, which affects all terrestrial ecosystems and potentially affects VOC transfer at a global level [71]. McFrederick et al. [64] propose that the distance at which pollinators can detect highly reactive volatiles has changed from kilometres during pre-industrial times to less than 200 m in modern times due to the destruction of volatiles via chemical reactions with atmospheric pollutants. It would be of interest to know which VOCs are particularly reactive to atmospheric pollutants and consequently which plants, pollinators and environments are put at particular risk by these pollutants. Wilson et al. [22] suggest that characterizing entire 'odoromes', the collective scent profile of habitats, would be a useful tool in understanding the baseline levels of olfactory noise that insects encounter as well as increasing our understanding of how airborne pollutants from anthropogenic sources affect volatile signalling. With these data, we would also gain insights into which habitats have collective scent profiles that facilitate or hinder the transmission of volatile signals used by pollinators, herbivores or insect parasitoids and predators. It would also be valuable to explore other interactions which would be affected by chemical interference and turbulence caused by increased wind speeds in contexts other than pollination, as VOCs play important roles in other multitrophic interactions including the repelling of herbivores, attracting predators and parasitoids and signalling to other plants [72,73].
It is worth noting that in some instances, the inclusion of additional blends of floral volatiles may not contribute towards noise or decreases in the efficacy of a learned VOC blend. In cases where odour blends have no chemical overlap or have shared compounds that are not perceived by the forager, it is unlikely that there would be any detrimental effects to signal transmission. It has also been speculated that in some instances, the presence of additional and contrasting VOC blends may enhance a response to a learned VOC blend by highlighting the perceptual contrast [74]. However, the detrimental effects demonstrated in the treatments with chemical interference imply that this is not the case with the VOCs used within our study.
In terms of interference through high wind speeds, we saw detrimental effects on learning speed, nectar collection rate and post-landing accuracy, mirroring the negative effects found in previous studies [35,37]. This disruption to foraging behaviour is likely to be caused by turbulent air movements stretching, compressing and tearing apart odour filaments, alongside the creation of odour-free gaps, making it difficult to locate the source of the odour [20,75]. It is also unknown whether the flight arenas used in the experiments are large enough to allow the formation and subsequent degradation of odour plumes of ecologically relevant sizes. Therefore, field experiments conducted in the natural wind and airborne chemical environments of study organisms would be beneficial. Alpine meadows [76] and Mediterranean climatic regions [77] would make appropriate study locations as habitats at risk of this wind-related noise. Wind speeds are also projected to increase in certain environments due to climate change [78,79]. Depending on the environment, these increases in wind speed may decrease the distance at which plants can elicit a behavioural response from their pollinators, through odour plume disruption [80].
Communication between plants and their visitors is of great importance to all terrestrial ecosystems, and by understanding the factors which influence the evolution and effectiveness of this information transfer we gain insight into how this relationship may be affected in a changing world. Our results suggest that both chemical interference and high wind speeds have negative effects on the foraging behaviour of the bumblebee B. terrestris on unimodal scented flowers, but that the inclusion of an additional visual signal component in the floral display negated these effects. This suggests that visual signal components can act as a backup in environments where olfactory communication is compromised, benefitting both plant and pollinator, and supports the efficacy backup hypothesis as an explanation for the evolution and maintenance of multimodal floral signals. It is our hope that this study provides a proof of concept for these effects on the transmission of olfactory signals and inspires future research into the areas where these interference types are most (or least) likely to occur, and at what spatial scales.
Ethics. This study did not need ethical approval as it involved invertebrates, and conforms to UK legislation. Data accessibility. The datasets supporting this article have been uploaded as part of the supplementary material. Data are available from the Dryad Digital Repository, http://dx.doi.org/10.5061/dryad.3g591 [81].
Authors' contributions. D.A.L., H.M.W. and S.A.R. conceived the ideas and designed the methodology; D.A.L. collected the data; D.A.L. analysed the data; D.A.L. and S.A.R. led the writing of the manuscript. All authors contributed critically to the drafts and gave the final approval for publication.
Competing interests. We have no competing interests. Funding. H.M.W. was supported by an ERC Starting Grant (#260920) and a BBSRC responsive mode grant (BB/M002780/1).
"year": 2017,
"sha1": "f942516419b378c5f02caf1ed23337f4ce62d847",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.170996",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23618b140ec361db823acd365cc4b57c06ac69da",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Understanding Fashion Buying Motivation for SME
This paper offers a concise view of the way in which the motivation of textile product consumers influences the activity of micro-enterprises and small and medium companies. Changes in behavior can bring benefits but also disadvantages to new entrepreneurs who are taking the first steps in building their own brand in the fashion industry. For these entrepreneurs, motivation is a key factor in pushing consumers to buy clothing items, along with questions such as "when do they decide to buy", "where do they prefer to buy" and "how often do they buy". We live in an age characterized by spectacular speed, in which, through a single click, consumers can find a large amount of information that helps them choose their preferred clothing items. It can easily be observed that consumers are always connected to current developments in various fields, and that they wish to gain more independence and become more aware of their environment and of the way in which their actions affect society. For this reason, SMEs must create appropriate marketing strategies that can be adapted to new demands in the fashion industry.
Introduction
Consumer motivation is of critical importance to the activity of micro-enterprises and small and medium companies in the field of fashion. When consumers' motivations change, small companies must adapt to these changes without missing a beat. How can this be achieved? By adopting flexible strategies that transform chaos into opportunity, identifying innovative materials, making sound investments, building successful partnerships, and attracting and retaining qualified and reliable personnel. To achieve this, micro-enterprises and small and medium enterprises must be capable of identifying external threats early. In Europe, small and medium enterprises are referred to as the backbone of the economy because they provide a significant source of employment, economic growth and constant innovation.
Motivations that influence the purchase of textile products
The study of consumer behavior helps us understand not only past consumption trends but also future ones. Motivation is the key element that can influence this behavior, viewed through the lens of psychological, economic or social factors (Kotler, 2014). Motives constitute the basis of buying behavior and result from the merging of biological, physical and social factors (Cătoiu and Teodorescu, 2004). Experts in the field state that motivation is driven by the individual's needs and by the state of tension that these needs cause (Solomon, 2012). Buying behavior can also be influenced by extrinsic motivation arising from the use of the Internet. Many entrepreneurs prefer to take their business online because this makes it much easier for them to keep up with new technologies and to maintain their connection with collaborators and consumers. Intrinsic motivation, on the other hand, is what maintains the interest in discovering and getting to know a brand, or the interest in certain products, because the time spent identifying them is perceived as a pleasure (Caniëls, Lenaerts, and Gelderman, 2015). Both consumer behavior and the fashion industry are characterized by change (Lewis and Hawksley, 1990). There is a growing tendency for consumers to purchase more varied and much cheaper clothing articles, a growing number of stores fitting this profile in urban zones, and even growth of this segment in hypermarkets (Bruce and Hines, 2006).
In the case of textile product consumers we can identify the following types of motivations: the quality of the materials, the affinity for certain brands, moral values, design, innovation, the color palette, uniqueness, exclusivism, brand history or image, the offered services, distribution, entertainment, and cultural motivations such as the influence of the group to which one belongs, education, the desire for knowledge or lifestyle. These aspects were highlighted by two studies (a qualitative one, based on 32 in-depth interviews, and a quantitative one with over 400 respondents) that researched the influence of motivation on the buying behavior for luxury clothing articles. The identified motives can be classified into hedonistic motivations and utilitarian motivations. Hedonistic motivations have to do with the aspects that bring a state of general happiness, satisfaction and fulfillment to the user. Utilitarian motivations refer to the operational part of consumer satisfaction, such as innovation, distribution, quality and high price (Diaconu, 2016; Diaconu and Cerceloiu, 2017).
The qualitative study focused on determining the particularities of clothing item users and on identifying less obvious motivations for luxury product consumption. Luxury fashion reflects the environment in which the consumer lives, as well as their personality and social status (Arnold, 2002; Evans, 2003). Luxury clothing products are high-quality, high-priced goods with their own identity that preserve the brand's tradition, offering extravagance, elegance, power and self-esteem to their wearers (Silverstein, 2003, pp. 10-11). Among the identified purchase or non-purchase motives for luxury clothing, some were temporal in nature: childhood, weddings, New Year parties or similar events that prompted the use of exclusivist goods. Some people stated that they discovered luxury products during their childhood, or in adulthood when they had to attend social events and follow a certain dress code. Users stated that when wearing certain exclusivist clothing items they felt pride, a sense of prestige, respect and superiority. The content analysis that followed the 32 in-depth interviews revealed a series of motivations that can influence the purchase of luxury clothing items:
• emotional motives based on feelings and passions: the motivation to be respected and recognized by others ("I would wear outfits that command respect when entering a room"); uniqueness and rarity ("First of all, I would like to wear clothes that nobody else would wear."); the motivation to be a model of inspiration; the desire to be on trend; the motivation for pleasure and fun ("A luxury piece of clothing should offer me the necessary amount of fun when I am trying to stand out from the crowd"); memories and memorable experiences; culture and family education; extravagance and innovation; the addiction to uniqueness and innovation; and fear of social pressure;
• rational motives based on experience, education and culture: the motivation related to the utility of the product; the motivation for comfort; the desire for learning and improvement;
• motives with a clearly set goal: the motivation to wear renowned brands ("I want to wear certain luxury clothes because they belong to world-renowned brands such as Burberry, Moschino, Valentino."); the motivation to wear superior-quality materials ("I usually wear clothes that are made out of natural fibers and I would like to invest more in such products"); durability; timelessness; and the experience of shopping in luxury stores.
The quantitative research studied the motives identified during the qualitative research and their influence on buying behavior for luxury clothing. A total of 16 attributes were analyzed in the study of purchase behavior: price, the craftsmanship involved in the product's creation, the quality of the materials, the product's design, color, the texture of the materials, country of origin, durability, innovation, timelessness, uniqueness, the brand's degree of fame, the experience of visiting the store, post-purchase services, the brand's image and personality, and the product as a symbol of social and financial status. The attributes were correlated with the social class, age, gender and wage categories of respondents. The attributes that received the highest scores for influencing the purchase of luxury clothing items were: design, with a mean score of 8.9; the quality of the materials, with 8.7; and the texture of the materials, with 8.6. The product's design was appreciated more by men (8.9) than by women (8.6), by people aged 31 to 40 (9.4), by people with master's degrees (8.9) and by people with medium incomes. The quality of the materials was appreciated almost equally by men (8.9) and women (8.8). The same attribute was equally appreciated by people with bachelor's degrees and those with master's degrees (8.8); the same can be said of people with average incomes and those with above-average incomes, both categories appreciating the quality of the materials equally (8.7). Within the hierarchy of attributes considered when purchasing luxury clothing items, the texture of the materials received a higher score from men (8.7) than from women (8.6). People aged 31 to 40 appreciated the texture of the materials more than other age groups (9.2). The texture of the materials was also highly appreciated by respondents with master's and doctoral degrees (8.7) and by those with above-average incomes (8.7). The attribute least considered by respondents, receiving the lowest score (6.1), was country of origin. This suggests that people appreciate a product's quality regardless of the place in which the product was made.
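As an illustration of how such attribute ratings can be summarized, the short pandas sketch below computes the overall attribute ranking and a breakdown by gender. The toy rows, column names and labels are hypothetical placeholders, not the study's dataset.

```python
import pandas as pd

# Hypothetical long-format survey data: each respondent rates each
# attribute on a 1-10 scale; demographics are recorded per respondent.
ratings = pd.DataFrame([
    {"respondent": 1, "gender": "M", "age_band": "31-40",
     "attribute": "design", "score": 9},
    {"respondent": 1, "gender": "M", "age_band": "31-40",
     "attribute": "material quality", "score": 8},
    {"respondent": 2, "gender": "F", "age_band": "21-30",
     "attribute": "design", "score": 8},
    # ... one row per respondent x attribute in the full dataset
])

# Overall ranking of attributes by mean score
overall = (ratings.groupby("attribute")["score"]
           .mean().sort_values(ascending=False))

# Mean score per attribute broken down by a demographic variable,
# mirroring the paper's design-by-gender and quality-by-income tables
by_gender = ratings.pivot_table(index="attribute", columns="gender",
                                values="score", aggfunc="mean")

print(overall, by_gender, sep="\n\n")
```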
An individual who has an active lifestyle and is oriented towards practicing sports will always choose clothing products that facilitate movement and flexibility, but that are also appealing from an aesthetic point of view. The same thought mechanism is used when choosing a car or a home. The choices that are made are closely related to lifestyle, but also to the individual's personality; that is why, as incomes increase, consumers will choose to buy better cars, clothes and higher-quality cosmetics. On the other hand, people who are oriented toward the world of art will purchase pieces of art that build their cultural heritage (Konečnik, Ruzzier and Hisrich, 2015, p. 103).
A statistical survey conducted online with a sample of 10,000 participants from 10 countries identified 10 consumer dimensions: connected, social, do-it-yourself, independent, experimental, inventive, disconnected, implicated, aware and minimalistic (Accenture, 2013). These consumer dimensions are strongly influenced by variables that are endogenous in nature (the desire for better living, to improve physical and mental capabilities) but also exogenous (a change of status, the improvement of living conditions).
Particularities of the textile industry in Romania
The textile industry in Romania comprises an important segment of exports, second only to the automotive industry. Companies in the textile industry generally operate in an assembly or lohn (outward processing) system, lacking an innovative character; competitiveness is given by the reduced cost of labor, the degree of specialization and the quality of execution, while standards are imposed by the big brands. Despite all of these aspects, the commercial balance is in deficit and continued to fall during the 2012-2017 period, from -20% down to -30% (INSSE, 2018). Development of the Romanian textile industry to a global level is further hindered by frequent political and fiscal changes, poor levels of education and a shrinking qualified labor force due to emigration to Western Europe. Currently, there are over 9,700 companies in Romania that produce clothing and textile goods, making the country the second largest provider of employment in this industry at the European level. The total revenue generated by the textile industry in Romania in 2017 was 3 billion euro (Eurostat, 2017; Piata, 2018). Romanians spend on average approximately 100 euro on clothing per year, placing last in a study regarding clothing and textile consumption (Euratex, 2017). There are Romanian brands that have reached the global market, such as Murmur, I.D. Sarrieri and Irina Schrotter, and that represent real competition for renowned brands.
Fashion trends are heavily influenced by changes in the economic, political, social and technological environments, meaning that manufacturers must adapt to the demands of sophisticated clients. At an international level, companies are investing in applications and hardware that allow the creation of smart materials that can measure speed or certain health indicators, fibers that emit light, materials that change their color depending on body temperature, and machines that use software for processing and placing patterns. All of these innovations offer the consumer not only aesthetic attributes but also a certain degree of comfort and health. For example, the Japanese brand Descente has created a collection of ski clothes that are not at all bulky and contain a heating technology that adapts to the wearer's body temperature. Chinese scientists have developed materials that blend two types of polymeric fabrics (one conductive and the other nanogenerating) and transform mechanical energy into electrical energy, managing to provide a means of charging a smartphone. The Waldorf Project has developed the Futuro collection, the materials of which glow in the dark when in the proximity of certain sensors (Trendhunter, 2016).
Globalization is yet another factor that influences the textile industry, and the distribution of wares is strongly correlated with the degree of specialization of the factories, the costs of production, salary levels and the distance to the chain of distribution. When it comes to the luxury textile sector, a large part of Romanian consumers prefer to shop abroad due to a preconception regarding the price of the goods. Furthermore, the lack of products in stock burdens the relationship with potential consumers, who are forced to wait between 2 and 4 weeks to receive the product that they have ordered. The online market was still underdeveloped in Romania for the entirety of 2017; there is only a small number of manufacturers and designers that sell their products through e-commerce.
Despite these facts, as wages increase, the desire for a better quality of life and a better education intensifies, along with the desire to purchase higher-quality products that satisfy consumers' higher-order needs (Euromonitor, 2017). In order to perform, Romanian companies must diagnose the current state of the business environment; find opportunities for growth; invest in new technologies; identify and attract new suppliers and clients; use their available resources more efficiently; embrace change and see it through; implement project management; and create their own organizational culture (Tudor, 2018).
Particularities of the textile industry entrepreneur
Fashion entrepreneurs differ from entrepreneurs in other economic sectors. The fashion industry is very dynamic and capable of creating innovative, high-quality goods with a unique character: highly personalized goods that stand the test of time. Furthermore, the textile industry is one of the most profitable sectors on a global scale.
Building on the previously conducted research regarding the influence of motivation on the purchase of luxury textile products, a new study was conducted in order to better understand the issue from the perspective of SMEs. An observational analysis was carried out to study the purchase-motivation issues specific to small and medium companies. The analysis was conducted on a group of 20 Romanian designers, men and women aged between 21 and 45, who are starting out in the fashion industry. The goal of this qualitative study was to investigate the behavior of the textile industry entrepreneur and to find the main variables they rely on to motivate consumer buying behavior. The observational analysis was divided into three sections: particularities of the designer-entrepreneur, the perception of the competitive environment, and aspects regarding the influence of purchase motivation on potential buyers. Topics covered included entering the textile market, attracting the necessary resources (both material and human), creating a unique and innovative product that differs from the competition, clearly defining a company mission, setting medium- and long-term objectives, knowing and setting commercial conditions, strategies for promoting the brand and for attracting and retaining clients, post-sale services, and the continual re-evaluation of the company's activity and processes.
Small, freshly started companies are bound to encounter numerous problems due to factors such as the owner's or manager's lack of professional experience, not knowing the competition or the consumers they are addressing, and a faulty relationship with personnel or clients. Very few designers in the fashion industry have entrepreneurial and economic knowledge; most make business decisions with no economic justification, relying solely on creativity, and have no knowledge of how to promote themselves. Whatever degree of success is achieved in such cases can largely be attributed to favourable external circumstances beyond the entrepreneur's control: "I am an artist and I don't have to know these commercial details."; "It doesn't matter that I don't know how to sell correctly, clients will seek me out when I least expect them."; "I only work with purchases that come from recommendations, I don't recruit new clients."; "I never stopped to analyze the amount of production I generate, I just buy cheap raw materials and then figure out what to create and produce."; "I lack visibility in the online medium, I only sell directly from my workshop or at fairs and expos." Some entrepreneurs with textile micro-companies admitted that, although they have an arts education, they refuse to take care of the administrative side of the business; they nevertheless want to know the necessary information about their company so they can delegate it to a specialist: "I am an artist and I'm not good at finance or economy, but I'd like to know what to ask a specialist when I hire him."; "I'm attending this workshop in order to find out how to better coordinate my company and my employees' activity."; "I'm attending this workshop in order to find out how to raise my sales numbers and how to promote my brand."
When asked about the pricing policies they practice within their companies, the participants admitted that they have tried to use the highest price possible, arguing that the product they are selling is a piece of art meant to convey "quality", "innovation", "uniqueness", "sensibility", "femininity", "power", "pride", "modernity", "inspiration" or "freedom" to the client. Some of them tried to place the products they created at a price level similar to their closest competitors, the companies they meet frequently at fairs in the fashion industry or the ones they compete with online (e-shops such as welovecouture, bandofcreators, moleculef, moja, endra). Another aspect revealed by the observational analysis was that designers tend to position their own brand within the market depending on the degree of innovation or on a high price: "When I attended the last fair my stand was not positioned correctly. Instead of being next to other designers that use innovative materials and that have high prices, I was positioned to the side, very close to brands with more accessible prices." On the other hand, some designers, prior to creating their own company, preferred to gain experience in pattern creation, modeling and product creation, thus having an advantage when forming their own team of tailors and technicians. Furthermore, designers who hold a technical certification (for using specialized machines and creating patterns or materials), and not just creative certifications, have developed a second line of business for their company: consulting for other young designers who lack experience in fashion production and logistics.
Thus, designers represent a distinct category of entrepreneurs, whose particularities are strongly correlated with the type of education they received. In Romania, art schools and universities do not offer their students courses in entrepreneurship, brand or product marketing, sales techniques or client retention. The purpose of these institutions is to create and train scholars or artists in different fields of art, and the development of their social and professional skills falls under the students' own responsibility. The approach of these institutions is focused on building and developing a code for interpreting and understanding art, understanding artistic currents, the methodology of fashion collections, aesthetics and design, and concepts regarding clothing merchandising. It is important that specialists in this field have at least a basic level of knowledge that helps them create their own businesses after graduating, which also implies specialized programs in which they work as interns in multiple fields, along with receiving mentoring from fields adjacent to fashion.
The study revealed that the lack of entrepreneurial experience and economic knowledge leads to the stagnation of newly founded companies, coupled with an inability to resolve customer-service problems. Furthermore, these newly founded companies lack knowledge about their competitors. The fact that entrepreneurs lack the means to scan their business environment, as they have no network, severely limits the number of potential collaborators, in addition to the fact that they do not accept constructive criticism. All of these factors often lead to misplaced or unjustifiable investments. Small and medium enterprises founded by entrepreneurs who have a minimum of experience with production and logistics processes have higher chances of making the right decisions in developing their businesses and attracting consumers to their brands and products.
Challenges faced by micro firms and small and medium enterprises due to changes in purchase motivation
Policies regarding regional or international affairs, globalization and changes in consumption behavior have led to an increase in competitiveness not only on a local scale but also globally. In order to withstand these changes, small companies center themselves on creating strategies based on knowledge, creativity and technological innovation. Changes in consumers' purchase motivation can be traced to a series of factors: insufficient financial resources, lack of access to a certain brand or product, lack of information regarding a certain brand or product, the presence of new brands or more attractive products, and the do-it-yourself trend. When a new SME is created, one must take into account the value that the goods produced or sold can have for potential consumers (Atkinson, 1964). When creating a clothing product it is important to remember that it must reflect the consumer's identity, starting from the reality that surrounds us, and that its creator must be a keen observer (Guba and Lincoln, 1994). Furthermore, value can also be given by the consumer's expectations and preferences regarding a brand or a certain product. Value is a motivational force that can also be represented by the psychological experience of being attracted by a brand or a product (Higgins, 2007), due to the story created around these elements, the offered services and the quality of the materials.
Micro-enterprises and SMEs must know their competitive environment well, but must not lose track of the most important player in the market: the client or potential consumer. To achieve this, a series of objectives must be established and constantly maintained throughout the entrepreneurial activity:
• identifying the consumer segment they address;
• establishing the type of product that best suits this ever-changing consumer;
• establishing the innovation factor that can attract the consumer;
• creating and maintaining products that are attractive in terms of aesthetics and design;
• establishing the correct price threshold at which the brand wishes to place itself relative to its competitors;
• identifying local suppliers that can respond quickly to the company's needs;
• identifying well-prepared employees who know the technological process, and motivating them;
• maintaining the profile of an expert in the field of fashion in order to increase the consumer's trust in the brand and its products.
Furthermore, in order to cope with changes in consumer motivation, SMEs must direct their actions increasingly toward the online medium. This medium offers, free of charge or at very low cost, solutions for gathering data regarding the textile market, consumers, trends and online consumer behavior. One challenge of the online medium is its apparent chaos, which can be navigated with the help of experts or influencers who have earned consumers' trust.
Consumers are online 24/7 and want quick feedback from the companies with which they interact. Unlike big companies, which require bureaucratic processes and numerous internal policies for resolving public relations, procurement or ethics issues, SMEs have the advantage of being much more directly involved in solving the problems signaled by consumers. Generally, SMEs are run by family members or by entrepreneurial organizations and associations, and the environment is less rigid, more informal and much more flexible (Spence and Essoussi, 2010).
The manager of such an organization must have experience coordinating a team and must gain expertise in the field in which the company is active, in order to understand what is being asked of employees involved in the production process as well as of those involved in administrative activities. If the manager lacks leadership skills or solid knowledge of the fashion industry and the technological processes by which clothing items are made, the company's activity will be hindered. The lack of managerial expertise and of specific knowledge can lead not only to poor business development and chaotic, unstructured activities (Gilmore, 2001) but also to a weak understanding of business relations with collaborators or investors, or even a failure to understand the entrepreneur-producer-buyer relationship (Reijonen).

Another challenge faced by entrepreneurs in micro-enterprises and small and medium enterprises is finding qualified personnel and retaining them through motivation. An employee who is properly motivated for the effort expended within the organization will maintain a high standard both on the production line and in logistics, negotiations, sales, and client counseling.
[Figure: key attributes through which micro-enterprises and SMEs attract consumers. Source: proposed by the authors]
As shown in the previous figure, the key attributes with which micro-enterprises and small and medium companies can attract consumers are strategies involving positioning toward modernity and innovation; flexibility in relations with consumers and collaborators; expertise and knowledge in the field of textiles, from which trust in the brand and its products can emerge; and services offered at the highest possible quality. Modernity can be conveyed through an approach characterized by timeless design, or through marketing campaigns that attract young and old consumers alike with the same message. Flexibility can be achieved through the degree of customization offered when creating a collection or when creating "sur mesure" pieces according to customer size and body measurements. Along with these elements, the visual component is what most attracts potential buyers in the fashion industry. Some specialists in the textile industry state that a clothing item, such as a dress, can be an efficient means of conforming or masking, but also of differentiating: when a person wears a dress with a common design or faded colors, that person can easily disappear or blend into a crowd, but by choosing an innovative design or bright colors, the person can stand out (Cumming, 2004, p. 99).
Limitations & conclusions
The textile industry in Romania will remain dynamic, but to continue its development it needs to be closely monitored, to identify possibilities for expansion, to make better use of opportunities to enter new markets, and to consolidate the relationship with consumers. The limitations of this paper are the relatively small sample on which the two qualitative studies were conducted and the impossibility of providing a more comprehensive analysis of the problems faced by small and medium enterprises in the textile industry.
The challenges faced by micro-enterprises and SMEs can be overcome if these companies leverage the resources they possess, maximizing their limited marketing budget and recovering their investment through appropriate market research, business planning, and constant evaluation of their activities.
The consumer is the central point in the development of small and medium enterprises. Whereas individuals used to purchase clothing only for certain occasions or when previously owned items became too worn out, the situation today is different: consumers are willing to spend frequently on new clothes, whether to satisfy basic needs or more sophisticated ones.
Some of the solutions entrepreneurs can use to motivate consumers are:
• The ability to anticipate fashion trends, or to create new ones;
• The creation of highly attractive products that suit consumers' style and personality;
• Adapting collections to the consumer segment being addressed;
• Maintaining a certain standard of quality;
• Communicating an emotion through which consumers feel better about themselves;
• Choosing a design and a chromatic theme that help clients create outfits with a high degree of personalization;
• Adapting clothes to their utility, and positioning them at a price consistent with the quality of the products. | 2019-06-13T13:21:00.623Z | 2019-01-22T00:00:00.000 | {
"year": 2019,
"sha1": "e61115f10d84245deb15d86b7c435c9841e9b058",
"oa_license": "CCBY",
"oa_url": "https://ibimapublishing.com/articles/JMRCS/2019/773197/773197.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0946c909b9b47ab77419810fad73fdcedc19f3b6",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
233385007 | pes2o/s2orc | v3-fos-license | Mitral early-diastolic inflow peak velocity (E)-to-left atrial strain ratio as a novel index for predicting elevated left ventricular filling pressures in patients with preserved left ventricular ejection fraction
Objectives We sought to explore the relationship between an index combining left ventricular diastolic function parameters with left atrial strain and the diastolic function of patients with preserved ejection fraction. Methods We prospectively enrolled 388 patients with left ventricular ejection fraction (LVEF) ≥ 50%, 49 of whom underwent left heart catheterization. Transthoracic echocardiography was performed within 12 h before or after the procedure. Left atrial (LA) strain was obtained by speckle tracking echocardiography. These patients served as the test group. The remaining patients (n = 339) were used to validate the diagnostic performance of the mitral early-diastolic inflow peak velocity (E)-to-left atrial reservoir strain ratio (E/LASr) in left ventricular diastolic dysfunction. Results Invasive measurements of LV end-diastolic pressure (LVEDP) demonstrated that the E/LASr ratio was increased in patients with elevated LVEDP [2.0 (1.8–2.2) vs 3.0 (2.6–4.0), p < 0.001] in the test group (n = 49). After adjusting for age, mitral A, E/e' ratio, and β-blocker use, the E/LASr ratio was an independent predictor of elevated LVEDP and showed good diagnostic performance in determining elevated LVEDP [area under the curve (AUC) 0.903, cutoff value 2.7, sensitivity 74.2%, specificity 94.4%]. In the validation group (n = 339), the E/LASr ratio also performed well in diagnosing elevated left atrial pressure (LAP) (AUC 0.904, cutoff value 3.2, sensitivity 76.5%, specificity 89.0%), and with a cutoff value of 2.7 it showed high accuracy in discriminating elevated LAP. In addition, E/LASr showed excellent diagnostic utility (AUC: 0.899 to 0.996) in the categorization of diastolic dysfunction grades. Regarding the clinical relevance of this index, the E/LASr ratio could accurately diagnose HF with preserved ejection fraction (HFpEF) (AUC: 0.781), especially in patients with "indeterminate" status (AUC: 0.829). Furthermore, an elevated E/LASr ratio was significantly associated with the risk of rehospitalization due to major adverse cardiac events (MACEs) within one year (odds ratio: 1.183, 95% confidence interval: 1.067, 1.312). Conclusions In patients with preserved EF, the E/LASr ratio is a novel index for assessing elevated left ventricular filling pressure with high accuracy.
Background
Over 90% of patients with heart failure (HF) have diastolic dysfunction regardless of left ventricular ejection fraction (LVEF), and left ventricular (LV) diastolic dysfunction is the predominant pathomechanism of HF with preserved ejection fraction (HFpEF) [1]. Impaired LV diastolic function will result in elevated LV filling pressure (LVFP), which is a major determinant of cardiac symptoms and prognosis in patients with chronic HF [2,3]. Thus, the noninvasive estimation of LVFP obtained by echocardiography is important for diagnosing HFpEF and managing chronic HF.
The 2016 ASE/EACVI guideline, which employs several parameters, is more convenient than previous versions; nonetheless, the diagnostic quandary of "indeterminate" status for patients whose data do not neatly fit the algorithm remains unsolved [4]. On the other hand, the accuracy of diagnosing diastolic dysfunction decreases in patients with pulmonary arterial hypertension, low right atrial and right ventricular filling pressures, or severe tricuspid valve lesions [4]. Therefore, an accurate parameter for detecting LV diastolic function is needed.
Several studies have shown that left atrial (LA) strain, especially LA strain during the reservoir phase (LASr), is impaired in the setting of LV diastolic dysfunction and correlates well with LVFP or pulmonary capillary wedge pressure, suggesting that LASr is clinically useful for estimating LVFP [5,6]. However, in some patients with coronary artery disease, LASr is not the best parameter for discriminating the filling pressure status [7]. Alternatively, combining LV and LA diastolic measurements could be more precise than a single parameter in predicting LVFP. In this regard, we conducted this study to explore the correlation of LVFP with the combination of LASr and LV diastolic measurements.
Population
We prospectively enrolled 394 patients treated at the First Affiliated Hospital of Soochow University from November 2018 to December 2019. Fifty-five of them, who were suspected of having coronary artery disease or HFpEF, underwent left heart catheterization, and LV end-diastolic pressure (LVEDP) was invasively measured during the procedure. Standard transthoracic echocardiography was performed within 12 h before or after the procedure, and LA strain was obtained by speckle tracking echocardiography. Six patients were excluded because of inadequate echocardiographic image quality; thus, 49 patients constituted the test group. The remaining patients (n = 339) were used to validate the results of the test group and to evaluate the diagnostic performance of E/LASr in left ventricular diastolic dysfunction.
Conventional transthoracic echocardiography
Transthoracic echocardiographic measurements of all subjects were performed at rest in the left lateral decubitus position using a GE Vivid E9 or GE Vivid E95 system (GE, Norway) with a 2.5 MHz transducer. The biplane algorithm was used to measure the maximum volume of the left atrium in the standard apical four-chamber and two-chamber views, 1-2 frames before mitral valve opening. The LA maximal volume was divided by the body surface area to obtain the LA maximal volume index (LAVI). LVEF was measured in the standard apical four-chamber and two-chamber views by the biplane Simpson's method. Pulsed-wave Doppler (PW) was used to measure the peak early-diastolic (E) and peak late-diastolic (A) transmitral velocities, the E/A ratio, and the E-wave deceleration time at the level of the mitral leaflet tips from the apical four-chamber view. In the apical four-chamber view, the sample volumes were placed at the basal portions of the septal and lateral mitral annulus, and tissue Doppler imaging (TDI) with PW was used to obtain the mitral annular velocities, including the peak early-diastolic longitudinal velocities (septal e′ and lateral e′). The mean early-diastolic myocardial velocity (e′mean) and the E/e′mean ratio were then calculated. The maximum velocity of tricuspid regurgitation (TRmax) was measured by continuous-wave Doppler (CW) under color Doppler guidance from the parasternal long-axis view of the left ventricle or the apical four-chamber view. Researchers were blinded to the patients' LVEDP and clinical characteristics.
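As a worked illustration of the derived measurements above, the following sketch computes the E/A and E/e′mean ratios from raw Doppler values; the function name and the example numbers are hypothetical and are not data from the study.

```python
def doppler_indices(E, A, e_septal, e_lateral):
    """Derived Doppler indices described above (all velocities in cm/s)."""
    e_mean = (e_septal + e_lateral) / 2.0  # mean early-diastolic annular velocity
    return {"E/A": E / A, "E/e'mean": E / e_mean}

# Example with plausible values: E = 80, A = 70, septal e' = 6, lateral e' = 8
print(doppler_indices(80, 70, 6, 8))  # {'E/A': 1.14..., "E/e'mean": 11.4...}
```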
Two-dimensional speckle tracking echocardiography
Left atrial strain was measured using the two-dimensional strain analysis package of the EchoPAC workstation (GE Healthcare). The two base points of the mitral annulus and the top (distal end) of the LA were manually selected; the region of interest was adjusted to include the entire LA wall; each view was divided into six segments, so that twelve segments per patient were analyzed. The global longitudinal LA strain was measured as the average of the 12 segments. The LA reservoir strain was measured as the average of the positive longitudinal strain peaks from all 12 LA segments of the apical four-chamber and two-chamber views [8].
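The averaging step described above can be expressed compactly. The sketch below assumes the segmental strain curves have been exported from the workstation as an array, which is an illustrative assumption rather than the study's actual workflow (EchoPAC reports LASr directly).

```python
import numpy as np

def la_reservoir_strain(segment_curves):
    """Average the positive longitudinal strain peaks over all LA segments.

    segment_curves : array-like of shape (12, n_frames) with the strain
                     curve (%) of each LA segment, six per apical view.
    """
    curves = np.asarray(segment_curves, dtype=float)
    return curves.max(axis=1).mean()  # 12-segment average of positive peaks

# The study index is then simply E divided by this value, e.g.
# E = 80 cm/s and LASr = 25 %  ->  E/LASr = 3.2
```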
Invasive LV pressure measurements
Left ventricular filling pressure was measured using a 6F pigtail catheter. The invasive procedure was performed via the radial artery by an interventional cardiologist who was blinded to the echocardiography data. Before coronary angiography, the transducers were balanced and zeroed at the level of the midaxillary line prior to the acquisition of hemodynamic data. After coronary angiography, left ventriculography was performed; the 6F pigtail catheter was then repositioned in the left ventricle to obtain a stable pressure curve, and the ECG and left ventricular pressure curves were recorded simultaneously. Left ventricular end-diastolic pressure was measured at the onset of the QRS complex on stable baseline left ventricular pressure curves. All parameters were averaged over three consecutive cardiac cycles. LVEDP > 16 mmHg was defined as elevated LVFP [1,9].
Diagnosis of left ventricular diastolic dysfunction
According to the recommendations of the 2016 ASE/EACVI guideline [4], 339 patients (the validation group) were assessed for left ventricular diastolic dysfunction (LVDD). The abnormal values of the conventional LV diastolic parameters were: (1) TDI mitral annular e′ (septal e′ < 7 cm/s or lateral e′ < 10 cm/s), (2) E/e′mean > 14, (3) LAVI > 34 mL/m², and (4) TRmax > 2.8 m/s. When more than 50% of these criteria were positive, patients were diagnosed with LVDD; LV diastolic function was considered normal when fewer than 50% were positive; and when exactly 50% were positive, diastolic function was classified as indeterminate.
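A minimal sketch of this four-criterion screen, assuming the four measurements are already available per patient (the thresholds are exactly those listed above):

```python
def diagnose_lvdd(septal_e, lateral_e, e_over_e_mean, lavi, tr_max):
    """Apply the four-criterion screen described above.

    Returns 'LVDD' (>50% of criteria positive), 'normal' (<50%),
    or 'indeterminate' (exactly 50%).
    """
    criteria = [
        septal_e < 7 or lateral_e < 10,  # abnormal mitral annular e' (cm/s)
        e_over_e_mean > 14,              # elevated E/e' mean
        lavi > 34,                       # LA maximal volume index (mL/m^2)
        tr_max > 2.8,                    # peak TR velocity (m/s)
    ]
    positive = sum(criteria)
    if positive > len(criteria) / 2:
        return "LVDD"
    if positive < len(criteria) / 2:
        return "normal"
    return "indeterminate"
```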
Definition of elevated left atrial pressure
According to the recommendations of the 2016 ASE/EACVI guideline [4], elevated left atrial pressure was defined as either a mitral E/A ratio ≥ 2, or at least two positive criteria among LAVI > 34 mL/m², E/e′mean > 14, and TRmax > 2.8 m/s when the mitral E/A ratio was ≤ 0.8 with E > 50 cm/s or was > 0.8 but < 2. Normal left atrial pressure was defined as either a mitral E/A ratio ≤ 0.8 with E ≤ 50 cm/s, or at least two negative criteria among the same three parameters in the intermediate inflow patterns above.
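These decision rules can be written as a small classifier. This is an illustrative sketch of the definitions above; when all three supporting criteria are evaluable, the "elevated" and "normal" branches are complementary (a missing criterion could yield an indeterminate 1-1 split, which is not modeled here).

```python
def classify_lap(E, A, lavi, e_over_e_mean, tr_max):
    """Classify left atrial pressure per the definitions above."""
    ea = E / A
    if ea >= 2:
        return "elevated"
    if ea <= 0.8 and E <= 50:
        return "normal"
    # Intermediate inflow pattern: count the three supporting criteria;
    # two or more positive -> elevated, two or more negative -> normal.
    positive = sum([lavi > 34, e_over_e_mean > 14, tr_max > 2.8])
    return "elevated" if positive >= 2 else "normal"
```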
Left ventricular diastolic dysfunction grade
According to the 2016 ASE/EACVI algorithm [4], the severity of left ventricular diastolic dysfunction in the validation group was graded as follows: grade I when the mitral E/A ratio was ≤ 0.8, E was ≤ 50 cm/s, and two or more of the three criteria (E/e′mean > 14, LAVI > 34 mL/m², TRmax > 2.8 m/s) were negative; grade II when the mitral E/A ratio was ≤ 0.8 with E > 50 cm/s, or was > 0.8 but < 2, and two or three of the three criteria were positive; and grade III when the mitral E/A ratio was ≥ 2 [4].
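A sketch of the grading logic as described above (reading the grade II inflow pattern as E/A ≤ 0.8 with E > 50 cm/s, per the guideline); cases that fit none of the branches are returned as indeterminate:

```python
def grade_diastolic_dysfunction(E, A, lavi, e_over_e_mean, tr_max):
    """Grade LVDD severity following the algorithm described above."""
    ea = E / A
    if ea >= 2:
        return "grade III"
    positive = sum([e_over_e_mean > 14, lavi > 34, tr_max > 2.8])
    if ea <= 0.8 and E <= 50 and positive <= 1:
        return "grade I"   # majority of the three criteria negative
    if ((ea <= 0.8 and E > 50) or 0.8 < ea < 2) and positive >= 2:
        return "grade II"
    return "indeterminate"
```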
Diagnostic algorithm of HFpEF
According to the "HFA-PEFF diagnosis algorithm" offered by the 2019 ESC consensus recommendation [10], we performed clinical diagnosis of HFpEF on 339 patients in the validation group. The first was an initial workup, which included evaluating the symptoms and signs of heart failure and improving the clinical diagnosis of the primary disease (step 1). Then, the patients were assessed with echocardiography and natriuretic peptide. Diastolic function parameters of echocardiography and natriuretic peptide levels were used as the main basis for evaluating HFpEF. Then the patients were scored according to the scoring system (shown in Fig. 3 of the "HFA-PEFF Diagnosis Algorithm" [10]) (step 2). A score ≥ 5 points implied definite HFpEF. An intermediate score (2-4 points) implied diagnostic uncertainty and further hemodynamic testing was recommended, including echocardiography or invasive hemodynamic exercise stress testing (step 3). Symptoms compatible with HF could be confirmed to originate from the heart if hemodynamic abnormalities were detected either at rest or during exercise.
Definition of MACEs
Major adverse cardiac events (MACEs) included all-cause mortality, acute myocardial infarction, HF, stroke, and coronary revascularization.
Statistical analysis
Statistical analysis was performed using SPSS version 25.0. Normally distributed continuous variables are presented as the mean ± SD and were compared with an independent t-test. Non-normally distributed variables are presented as medians with interquartile ranges (IQR = 25th-75th percentile) and were compared with the Mann-Whitney U test. Categorical data are expressed as absolute numbers or percentages and were compared with the chi-squared test. Univariate logistic regression was used to calculate odds ratios for predicting elevated LVEDP. All variables with p ≤ 0.100 (LASr, E/LASr, peak A, E/e′mean, and β-blocker use) as well as age were included in the multiple logistic regression analysis to explore their relevance to LVEDP. P < 0.05 (two-tailed) was considered statistically significant. In our four models, LASr and E/LASr were analyzed separately because of their multicollinearity, with the other covariates kept the same. The C-statistic was calculated for each model to allow comparison between them.
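Although the analysis was run in SPSS, the four-model comparison could be reproduced along the following lines; the column names are illustrative, and the C-statistic is taken as the in-sample AUC of each fitted model.

```python
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def fit_lvedp_model(df, predictor):
    """Fit one logistic model of elevated LVEDP (LVEDP > 16 mmHg).

    `predictor` is either 'LASr' or 'E_LASr' (fitted separately because
    of multicollinearity); the shared covariates follow the description
    above. `df` holds one row per patient with hypothetical column names.
    """
    covariates = [predictor, "age", "peak_A", "E_over_e_mean", "beta_blocker"]
    X = sm.add_constant(df[covariates])
    model = sm.Logit(df["elevated_lvedp"], X).fit(disp=0)
    # C-statistic = area under the ROC curve of the fitted probabilities.
    c_stat = roc_auc_score(df["elevated_lvedp"], model.predict(X))
    return model, c_stat
```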
In the test group, the area under the receiver operating characteristic curve (AUC) was used to compare the performance of multiple variables in determining elevated LVEDP. In the validation group, receiver operating characteristic curve analysis was used to evaluate the accuracy of the E/LASr ratio for diagnosing left ventricular diastolic dysfunction, grading the severity of LVDD, and diagnosing HFpEF. Univariate logistic regression was used to analyze the correlation between the different variables and rehospitalization due to MACEs within one year.
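For the ROC analyses, cutoffs such as 2.7 and 3.2 are commonly chosen by maximizing Youden's J, although the paper does not state its criterion; the sketch below makes that assumption explicit.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_with_cutoff(y_true, index_values):
    """ROC analysis for a continuous index such as E/LASr.

    Returns the AUC plus the cutoff maximizing Youden's J
    (sensitivity + specificity - 1), with its sensitivity and
    specificity at that threshold.
    """
    fpr, tpr, thresholds = roc_curve(y_true, index_values)
    j = tpr - fpr                    # Youden's J at each threshold
    best = int(np.argmax(j))
    return auc(fpr, tpr), thresholds[best], tpr[best], 1 - fpr[best]
```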
Characteristics of the study population
The study finally included 388 patients: 49 in the test group and 339 in the validation group. The test group that underwent left heart catheterization was divided into a normal LVEDP group (n = 18) and an elevated LVEDP group (n = 31) according to whether LVEDP was greater than 16 mmHg. There were no significant differences between the two groups in sex, age, medical history, coronary angiography results, or conventional echocardiographic indicators such as LVEF, LAEF, and left ventricular diastolic function indicators. Compared with the patients in the normal LVEDP group, those in the elevated LVEDP group showed significantly lower LASr (32.9 ± 1.5 vs 23.2 ± 1.2, p < 0.001), and E/LASr was significantly increased [2.0 (1.8-2.2) vs 3.0 (2.6-4.0), p < 0.001] (Table 1).
Logistic regression analysis and prediction model
Four different multivariate logistic regression analyses showed that, after adjusting for age, peak A, the E/e′mean ratio, and other factors, LASr and E/LASr were independent predictors of LVEDP > 16 mmHg in their respective models. The model with E/LASr had a higher C-statistic than the model with LASr (Table 2).
Accuracy of LASr and its combination index in predicting LVEDP > 16 mmHg
Both LASr and E/LASr had good diagnostic accuracy for elevated LVEDP, and the diagnostic performance of E/LASr (AUC 0.903, cutoff value 2.7, sensitivity 74.2%, specificity 94.4%) was better than that of LASr (Fig. 1 and Table 3). Among the 339 patients in the validation group, 119 were diagnosed with elevated LAP according to the 2016 ASE/EACVI guideline. In agreement with the findings in the test group, E/LASr had good accuracy in diagnosing elevated LAP (Fig. 1 and Table 3).
E/LASr ratio and left ventricular diastolic function classification
According to the 2016 ASE/EACVI guideline, patients in the validation group (n = 339) were classified as having normal diastolic function (grade 0, n = 183) or diastolic dysfunction of grade I (n = 9), grade II (n = 101), or grade III (n = 8). E/LASr differed significantly among the groups (Fig. 2). The E/LASr ratio had high sensitivity and specificity in evaluating the severity of LVDD, and its accuracy improved as diastolic dysfunction worsened (Table 4).
E/LASr ratio and HFpEF
Among the 339 patients in the validation group, 37 were clinically diagnosed with HFpEF. There was a significant difference in E/LASr between the two groups (Table 5).
ROC curve analysis suggested that E/LASr performed well in diagnosing HFpEF. In the validation group, when diastolic function was assessed according to the 2016 ASE/EACVI guideline algorithm, 38 patients were classified as having indeterminate diastolic function, 7 of whom were clinically diagnosed with HFpEF. E/LASr discriminated HFpEF with high accuracy in these patients, suggesting that E/LASr may be useful for diagnosing heart failure in the gray zone of indeterminate diastolic function (Fig. 3).
Correlation between the E/LASr ratio and rehospitalization due to MACEs within one year
Within one year, 18 patients in the validation group were rehospitalized due to MACEs. Univariate logistic regression analysis showed that patients with an elevated E/LASr ratio had an increased risk of MACEs [OR: 1.183, 95% CI: (1.067, 1.312), p = 0.001], whereas other traditional diastolic function parameters correlated poorly with MACEs (Fig. 4).
Discussion
This study aimed to evaluate the predictive value and potential clinical relevance of a new combination index, E/LASr, for elevated left ventricular filling pressure in patients with normal LVEF. We found that LASr and E/LASr were both independent predictors of elevated LVEDP. More importantly, we found that the combined E/LASr index predicts elevated LVEDP or LAP with increased accuracy. In addition, E/LASr can accurately diagnose HFpEF, particularly in patients with an "indeterminate" status. Furthermore, an elevated E/LASr ratio was significantly associated with the risk of rehospitalization due to MACEs within one year.
Recently, LA function measured as LA reservoir strain (LASr) has been shown to be significantly related to invasively measured left ventricular filling pressure [5,6,11]. This was also confirmed in our study.
Left atrial reservoir function reflects the relaxation and compliance of the left atrium and is modulated by left ventricular systolic function [12]. The left atrium is directly exposed to left ventricular pressure while the mitral valve is open in diastole. In the early stages of left ventricular diastolic dysfunction, the left atrium can still contract to compensate for the elevated left ventricular pressure. However, under long-standing high left ventricular pressure, the compliance of the left atrium gradually declines, resulting in a decrease in left atrial reserve, which ultimately leads to enlargement and failure of the left atrium [13]. In fact, in the setting of elevated left ventricular pressure, left atrial function is impaired even before the left atrium dilates [14]. Therefore, LASr can reflect elevated left ventricular filling pressure at an early stage.
This study found that combining the mitral early-diastolic peak inflow velocity (E) and the left atrial reservoir strain (LASr) into a single index further improved the ability to discriminate elevated left ventricular filling pressure (AUC = 0.903) in patients with preserved LVEF. The combination index E/LASr incorporates both the patient's current left ventricular filling state (mitral E velocity) and the accompanying change in LA function (LASr). It therefore reflects not only the influence of the pressure gradient between the LA and LV on LV filling, but also the relaxation and compliance of the LA as affected by LV diastolic function, making it a more comprehensive indicator for predicting elevated left ventricular filling pressure.
This study further verified the accuracy of the E/LASr ratio in evaluating left ventricular diastolic dysfunction in the 339 patients with LVEF ≥ 50% in the validation group. However, the cutoff value of the E/LASr ratio for predicting invasively elevated LVEDP in the test group (2.7) was lower than its cutoff value for elevated LAP in the validation group (3.2). Previous studies have shown that in the early stages of left ventricular diastolic dysfunction, only LVEDP is elevated, while LA pressure and mean pulmonary capillary wedge pressure are still normal [4,15]. However, the algorithm of the 2016 ASE/EACVI guideline is based on the prediction of mean pulmonary capillary wedge pressure, not LVEDP [4,15]. In addition, traditional diastolic function parameters such as LAVI, used in the 2016 ASE/EACVI guideline algorithm, are surrogates of chronic and severe left ventricular diastolic dysfunction, and LAVI is an insensitive biomarker in the early stages of diastolic dysfunction [16]. The diastolic dysfunction identified in the validation group according to the 2016 guideline may therefore no longer have been limited to the early stage. To a certain extent, these reasons may explain why the cutoff value of E/LASr in the test group was lower than that in the validation group. We also showed that the E/LASr ratio can grade the severity of left ventricular diastolic dysfunction with good accuracy, and that its accuracy improved as diastolic dysfunction worsened.
Regarding the clinical relevance of E/LASr, this study found that E/LASr could accurately diagnose HFpEF in the validation group, and even among patients classified as having "indeterminate diastolic function" it could accurately distinguish patients with HFpEF. This shows that the E/LASr ratio adds supplementary diagnostic value to the 2016 guideline, especially for the diagnosis of HF in the gray zone of indeterminate diastolic function. In addition, in this study, patients with an elevated E/LASr ratio had a significantly increased risk of MACEs. To a certain extent, this is consistent with the results of some recent studies. Braunauer et al. found that an elevated E/LASr ratio was significantly associated with worse functional capacity and HF hospitalization at 2 years [17]. A study of patients with atrial fibrillation found that an elevated E/LA strain ratio was associated with HF hospitalizations or worse cardiovascular events [18]. A study in hemodialysis patients found that the E/LA strain ratio is a useful parameter for predicting total mortality and cardiovascular mortality [19]. Further prospective studies are warranted to validate these findings.
Limitations
This study has several limitations. First, the sample size for invasive measurement of left ventricular filling pressure was limited, and multicenter studies with larger samples are needed to further verify our results. Second, left atrial strain measured by speckle tracking imaging is defined by the absolute strain values of the three phases of the left atrium, whereas this study measured and analyzed only the left atrial reservoir strain. Third, because patients with atrial fibrillation lack effective atrial contraction, and patients with severe mitral stenosis or mitral regurgitation have an abnormally enlarged left atrium, we did not include such patients. Finally, the clarity of echocardiographic images affects the repeatability and credibility of left atrial strain results; the acquisition of left atrial images and the analysis of strain therefore require experienced operators. Nevertheless, an increasing number of studies have confirmed that left atrial strain may be a powerful indicator for evaluating left ventricular diastolic function, which makes it possible for left atrial strain to be included in the diagnosis and classification of left ventricular diastolic dysfunction in the future.
Conclusion
The results of this study indicate that a novel combination index (E/LASr) may be a more accurate indicator for predicting elevated LVFP and assessing diastolic dysfunction in patients with preserved EF. This indicator not only adds complementary diagnostic value to the 2016 ASE/EACVI guideline but also has potential clinical relevance for adverse cardiovascular events, which is worthy of further study.
"year": 2021,
"sha1": "875281da15084bbb293204f2fff0c4a1bb4ed42b",
"oa_license": "CCBY",
"oa_url": "https://cardiovascularultrasound.biomedcentral.com/track/pdf/10.1186/s12947-021-00248-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "875281da15084bbb293204f2fff0c4a1bb4ed42b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261500494 | pes2o/s2orc | v3-fos-license | Utilization of Health Services Before and After Diagnosis in a Specialist Rural and Remote Memory Clinic
Background Limited research exists on the use of specific health services over an extended time among rural persons with dementia. The study objective was to examine health service use over a 10-year period, five years before until five years after diagnosis in the specialist Rural and Remote Memory Clinic (RRMC). Methods Clinical and administrative health data of RRMC patients were linked. Annual health service utilization of the cohort (N = 436) was analyzed for 416 patients pre-index (57.5% female, mean age 71.2 years) and 419 post-index (56.3% female, mean age 70.8 years). Approximately 40% of memory clinic diagnoses were Alzheimer’s disease (AD), 20% non-AD dementia, and 40% mild or subjective cognitive impairment or other condition. Post-index, 188 patients (44.9%) moved to permanent long-term care and were retained in the sample; 121 patients died (28.9%) and were removed yearly. Results Over the ten-year study period, a significant increase occurred in the average number of FP visits, all-type drug prescriptions, and dementia-specific drug prescriptions (all p <.001). The highest proportion of patients hospitalized was observed one year pre-index, the highest average number of specialist visits was observed one year post-index, and both demonstrated a significant decreasing trend in the five-year post-index period (p = .037). Conclusions A pattern of increasing FP visits and drug prescriptions over an extended period before and after diagnosis in a specialist rural and remote memory clinic highlights a need to support FPs in post-diagnostic management. Further research of longitudinal patterns in health service utilization is merited.
INTRODUCTION
In Canada, an estimated 597,000 adults aged 65 years and older live with diagnosed dementia. (1) The actual number may be much higher, as an estimated 75% of people with dementia globally are considered undiagnosed. (2) Dementia and other age-related health conditions are important issues in rural Canada (outside centres of 10,000+) where the population is increasingly aging. (3) Persons living with dementia require more care over time as the condition progresses, and higher levels of care overall than older adults without dementia. A recent review found dementia increased the risk of hospitalization by an estimated 42%, due partly to older age, comorbidities, and polypharmacy. (4) Dementia has also been found to increase hospital stay by 1.3-2 times among Canadians. (5) Higher rates of family physician and dementia specialist visits are associated with dementia in Canada (6) and Germany, (7) and with Alzheimer's disease (AD) in the United Kingdom. (8) Recent reviews found polypharmacy prevalence rates (five or more medications concurrently) ranging from 25-98% among persons with dementia or cognitive impairment (9) and potentially inappropriate prescribing rates of 24-60% among persons with dementia. (10) For rural older adults living with dementia, barriers to accessing appropriate health services and supports present challenges. With small populations, an aging workforce, and ongoing recruitment and retention difficulties, (3) rural communities are often unable to offer specialized health and social care (e.g., housing, end-of-life care). (11) Rural family physicians may have limited experience and knowledge about dementia diagnosis and management, and low access to geriatric specialists and few local resources, yet must provide care regardless. (12) Our previous research found that, for caregivers of patients seen in a specialist rural and remote memory clinic, a diagnosis provided important benefits that helped them move forward, including a sense of relief, validation, information on prognosis, and greater awareness of services. (13) However, research is limited on the use of specific health services over an extended duration among rural people with dementia. In this study, we linked clinical and administrative health data to assess patterns in annual health service use (physician visits, hospitalization, and drug prescriptions) five years before until five years after diagnosis in the specialist Rural and Remote Memory Clinic (RRMC).
Study Design and Setting
A retrospective observational cohort study was used to examine annual rates of health service use five years before until five years after diagnosis in the specialist Rural and Remote Memory Clinic (RRMC) at the University of Saskatchewan in Saskatoon. The clinical and sociodemographic data of patients who received a RRMC diagnosis between March 1 2004 and July 4 2016 were linked to administrative health data from March 1 1999 to June 2020 using unique identifiers based on personal health service numbers.
Written informed consent for the use of clinical data for research was given by patients and proxies. Ethics approval from the University of Saskatchewan Biomedical Ethics Research Board was received separately for the ongoing RRMC study and this retrospective cohort study. This study was made possible by a data sharing agreement between the University of Saskatchewan and the Saskatchewan Health Quality Council, Ministry of Health, and eHealth. Study reporting follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations. (14)
Study Population
The RRMC was implemented in 2004 to increase access to dementia specialists and reduce repeated travel specifically for the rural and remote population (hereafter, rural/remote), defined for the purposes of the RRMC as persons living more than 100 km from the two tertiary care centres in Saskatchewan (15,16) (where 89% of specialists practice). (17) The RRMC offers diagnosis and management of suspected dementia in rural/remote individuals who would benefit from specialist involvement and an interdisciplinary team assessment. Despite the intended focus, AD is a common reason for referral to the RRMC, which indicates a need among rural/remote health professionals for specialist support related to AD diagnosis. (18) Funded initially by the Canadian Institutes of Health Research as a demonstration project, the RRMC has been funded by the provincial government since 2009 as a clinical resource. The RRMC database, consisting of baseline and limited follow-up clinical data, is a useful research resource that has informed several publications on multiple topics (e.g., neuropsychological assessment, medication use). (15) RRMC patients were non-institutionalized adults living more than 100 km outside the two major Saskatchewan cities, where 50% of the province's 1.13 million population resides. (19) Patients were referred by primary health professionals or specialists for a one-day, in-person interdisciplinary evaluation. The RRMC focus is the diagnosis and management of suspected dementia; however, it is possible some patients had a previous dementia diagnosis. Our previous investigation of referral letters in the first five years of the RRMC mainly identified requests for confirmation of diagnosis or treatment, management suggestions, assessment at patient or family request, and consultation for difficult or complex issues such as the development or worsening of symptoms. (20) RRMC evaluation included a CT head scan, blood work, and assessments by a clinical team comprising neurology, neuropsychology, physical therapy, and nursing. Patients and caregivers participated in interdisciplinary interviews and completed questionnaires consisting of sociodemographic and clinical measures, which were entered into the RRMC database. Diagnosis and treatment recommendations were provided at the end of the clinic day. Follow-up data were not included in the study; however, follow-up was provided to all patients on an as-needed basis by the clinic neurologist via telehealth, in person in the RRMC at one year for all patients, and in person in the RRMC after one year for a segment of patients. Further details are available in Morgan et al. (15,16) Between March 2004 and July 2016, the RRMC enrolled 544 patients (Appendix A). Clinical and administrative health data were linked for 436 patients after excluding those who did not have a health service number (n = 47), had not yet been assessed in clinic (n = 9), or did not complete a clinic questionnaire (n = 52).
The index date for each patient was the date of their RRMC evaluation and diagnosis. After data linkage, study eligibility was determined based on continuous health insurance coverage with gaps of less than three days in the pre-index or post-index periods. Coverage was determined separately for the two periods because of the small sample, and post-index coverage was calculated only for patients who were alive at the end of the post-index period (i.e., study end). Patients admitted to permanent long-term care between their index date and study end were retained in the study. However, patients who died in a given post-index year were removed the following year. Therefore, we used a pre-index cohort (N = 416 every year) and a post-index cohort (N = 419 at one year, N = 407 at two years, N = 390 at three years, N = 363 at four years, N = 333 at five years), of which 413 were the same individuals in both cohorts.
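The cohort rules described above (long-term-care admissions retained, deaths removed the following year) could be implemented as in the sketch below; the column names are illustrative and do not reflect the actual linked files.

```python
import pandas as pd

def post_index_cohorts(patients, years=range(1, 6)):
    """Build the yearly post-index cohorts described above.

    Long-term-care admissions are retained; a patient who dies in
    post-index year k is removed from year k + 1 onward. `patients`
    needs 'index_date' and 'death_date' (NaT if alive).
    """
    cohorts = {}
    for k in years:
        year_start = patients["index_date"] + pd.DateOffset(years=k - 1)
        in_cohort = (patients["death_date"].isna()
                     | (patients["death_date"] >= year_start))
        cohorts[k] = patients[in_cohort]
    return cohorts
```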
Data Sources and Measures
Unique identifiers based on personal health service numbers were applied to RRMC data (8th data release, 2017) at eHealth. The date of RRMC evaluation and diagnosis was the index date for each patient. Annual health service utilization measures derived from administrative health data included physician use (FP visits, specialist visits, and FP and specialist diagnoses), hospital use (hospitalization, 30-day hospital readmission, length of stay, and discharge destination), and prescription drug dispensations (all-type, dementia-specific, and non-dementia). All specialties other than family medicine were counted as specialists. To identify the most frequent physician visit and hospital admission codes for RRMC patients, International Classification of Disease (ICD-9 and ICD-10) and Medical Services Branch codes were identified in the Medical Services Database and the Hospital Discharge Abstracts Database.
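As an illustration of how the most frequent codes could be tabulated from the claims files, assuming a flat extract with one diagnosis code per claim (column names are hypothetical):

```python
import pandas as pd

def top_diagnoses(claims, n=10):
    """Rank the most frequent diagnosis codes in a claims extract.

    `claims` holds one row per service claim with a single 'icd_code'
    column, matching the one-code-per-claim structure described above.
    """
    counts = claims["icd_code"].value_counts()
    share = counts / counts.sum() * 100  # percentage of all diagnoses
    return pd.DataFrame({"n": counts, "percent": share.round(1)}).head(n)
```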
We used the RRMC database to derive sociodemographic and clinical characteristics (Appendix C and Appendix D). Sociodemographic characteristics included age, sex, education (years), marital status, living alone (vs. not living alone), primary income source, metropolitan influenced zone (MIZ), and kilometers from the patient's home community to the RRMC. Clinical measures included the RRMC diagnosis following Canadian Consensus Conference on the Diagnosis and Treatment of Dementia guidelines, (21) the measures described in Appendix C, and self-reported health conditions (e.g., arthritis), physical activity or exercise (times/week), and alcohol consumption (drinks/week).
Statistical Analysis
Descriptive statistics were used to measure annual health service utilization. For each pre-and post-index year, we calculated the frequency and proportion of patients using a service at least once (FP visits, specialist visits, prescription drug dispensations), mean number of uses and 95% confidence intervals; frequency and proportion of patients with at least one hospitalization and 30-day readmission; and mean hospital length of stay (total, acute, and alternate) and 95% confidence intervals. Significant associations between health service use and time were measured with the Spearman correlation coefficient (p < .05). Significant differences in average health service use between the pre-index and post-index periods were identified with the t-test for means (p < .05) and Wilcoxon ranked test for proportions (p < .05).
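The trend and comparison tests described above map directly onto standard library calls; the sketch below is illustrative and assumes the annual summary values have already been computed.

```python
from scipy import stats

def trend_over_years(yearly_values):
    """Spearman trend of an annual utilization measure, e.g. the mean
    number of FP visits in each of the five pre-index years."""
    years = list(range(1, len(yearly_values) + 1))
    rho, p = stats.spearmanr(years, yearly_values)
    return rho, p

def compare_period_means(pre_values, post_values):
    """Pre- vs post-index comparison of means with an independent t-test
    (the paper used a Wilcoxon test for proportions)."""
    return stats.ttest_ind(pre_values, post_values)
```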
Frequencies and means were used to analyze sociodemographic and clinical characteristics for the pre- and post-index periods. For each post-index year, we calculated the frequency and proportion of patients who were admitted to permanent long-term care and who died. For each pre- and post-index period, we calculated the frequency of all diagnoses based on International Classification of Disease (ICD) and Medical Services Branch codes and the proportion represented by each diagnosis, and the frequency of all hospital discharge destinations and the proportion represented by each destination.
Health Service Use
Over the pre-index period, the proportion of patients with at least one FP visit annually increased (p = .037), as did the average number of FP visits annually (p = .005; Figure 1, Table 1). Over the 10-year period, the average number of FP visits increased (p < .001) and was higher overall in the post-index than the pre-index period (p < .002; Table 2). Across the pre-index period, increases occurred in the proportion of patients with at least one specialist visit each year (p = .037) and the average number of specialist visits annually (p < .001; Figure 1, Table 1). The highest average number of specialist visits was observed at one year post-index. Over the post-index period, decreases occurred in the proportion of patients with at least one specialist visit (p < .001) and the average number of specialist visits (p = .037). Considering the top 10 most frequent diagnoses in the pre-index and post-index periods separately, dementia ranked 6th in the pre-index period for FP visits (2.8% of diagnoses) but did not rank in the top 10 for specialist visits or hospitalizations (data not shown). Post-index, dementia ranked in the top 10 diagnoses for each health service, accounting for 23.2% of FP diagnoses, 29.4% of specialist diagnoses, and 4.3% of diagnoses most responsible for hospital admission.
The proportion of patients hospitalized at least once each year increased over the pre-index period (p < .001) to the highest point at one year pre-index, and decreased over the post-index period (p = .037; Figure 2, Table 1). Across the 10-year period, average length of hospital stay each year increased in terms of total days (p = .008), acute care days (p = .023), and alternate level of care days (p < .001). Overall, the average length of hospital stay was higher in the post-index than pre-index (p < .001; Table 2). Home settings constituted 92.8% of all hospital discharge destinations during pre-index and 72.8% during post-index (data not shown). Institutional settings accounted for 6.2% of all destinations in pre-index and 21.8% in post-index. During the pre-index period, institutional settings to which patients were transferred were facilities providing inpatient hospital care (e.g., other acute facilities, rehabilitation). In the post-index period, institutional settings included long-term care or residential care (e.g., nursing home, hospice/palliative care), as well as hospital facilities.
Over the pre-index and 10-year periods, increases occurred in the proportion of patients each year receiving at least one all-type drug prescription (p < .001 and p = .002, respectively) and at least one dementia-specific drug prescription (p = .041 and p = .014, respectively) (Figure 3, Tables 1 and 2). One or more dementia-specific drug prescriptions were given to 4.1% of patients at two years pre-index, increasing until one year post-index (36.3%). (In Figures 3B and 3E, prior to two years pre-index, the numbers of patients receiving dementia-specific drug dispensations were ≤ 6 each year and are not shown.) Over the post-index period, the proportion of patients receiving dementia-specific drug prescriptions decreased (p < .001). Over the pre-index, post-index, and 10-year periods, increases occurred in the average number of all-type drug prescriptions per patient each year (all p < .001). For dementia-specific drug prescriptions, the overall average number increased over the pre-index period (p = .041) and the 10-year period (p < .001). Overall, the proportion of patients receiving all-type and dementia-specific drug prescriptions, and the average number of prescriptions per patient, were higher in the post-index than the pre-index period (Table 2).
DISCUSSION
In this retrospective cohort study, we linked administrative health and clinical data of patients referred with suspected dementia to a rural/remote interdisciplinary specialist memory clinic, to examine patterns in annual health service use five years before until five years after memory clinic diagnosis. Our study adds to the literature by demonstrating that health service usage among rural/remote patients began to gradually increase as early as four years before memory clinic diagnosis, in terms of average number of family physician and specialist visits, proportion admitted to hospital, and all-type drug prescriptions (average number and proportion). Over the combined 10-year period, significant increases occurred in the average number of FP visits, all-type drug prescriptions, and dementia-specific drug prescriptions. Moreover, the highest proportion of patients hospitalized was observed one year before memory clinic diagnosis and the highest average number of specialist visits was observed one year after diagnosis, and both demonstrated a significant decreasing trend in the five-year period after diagnosis.
As early as four years leading up to memory clinic diagnosis, we observed incremental increases in the average number of FP and specialist visits and in the proportion of RRMC patients hospitalized. Findings from other studies examining health service usage during the three years before diagnosis were mixed, with UK (8) and US (22) studies of AD patients (mean age 79.9 yr in both) demonstrating a stable pattern in the number of GP or outpatient visits until an increase six months pre-diagnosis. Our results are consistent with a US study by Albrecht et al. (23) that reported increased outpatient visits over the three-year period before diagnosis. Albrecht and colleagues further reported variation by dementia subtype, with visits highest in the vascular dementia subtype and lowest in AD, compared with MCI. Our previous research reported greater depressive symptoms and caregiver psychological distress among RRMC young-onset dementia patients compared to late-onset dementia patients, (24) and greater levels of previous psychiatric illness, depressive symptoms, and sleep concerns in subjective cognitive impairment patients compared to other patients. (25) In the present study, younger age and symptoms of suspected dementia possibly intensified the frequency of physician visits, with memory clinic diagnosis delayed as patients waited for a RRMC referral. Low availability of rural/remote dementia-specific supports and community resources to assist FPs with diagnosis (e.g., multidisciplinary teams), together with FP diagnostic uncertainty, (12) may have contributed to prolonged increased health-care utilization. Similar to our findings regarding hospitalization before memory clinic diagnosis, a gradual increase in the proportion of AD patients hospitalized at least once annually was also observed for most of the duration leading up to diagnosis in previous studies with pre-diagnosis periods of three to five years. (8,26,27) Most RRMC patients previously reported first noticing symptoms two years before memory clinic diagnosis, (13) and hospitalization may have prompted a RRMC referral, where assessment wait times have historically been as long as 12 months. While it may seem surprising that a small number of RRMC patients were prescribed dementia-specific drugs as early as two years before memory clinic diagnosis, a previous study of RRMC patients seen between 2004 and 2015 found that 14% were taking a cholinesterase inhibitor before assessment. (28) Lengthy wait times for RRMC appointments may have contributed to primary care providers prescribing medication to address symptoms in the interim. Considering all-type drug prescriptions, the average number of prescriptions and the proportion of RRMC patients receiving prescriptions began to increase four years before memory clinic diagnosis, congruent with a Finnish study that showed an increasing pattern in the percentage of AD patients purchasing any prescription drugs over the five-year period leading up to diagnosis. (26) Although we did not examine the number of drugs prescribed or used concurrently, the annual average number of prescriptions after memory clinic diagnosis (63.1) suggests potentially inappropriate prescribing, which is associated with increased risk of adverse drug reactions and mortality. (29) Over the combined 10-year period of this study, we found significant increases in the average number of FP visits as well as in the proportion receiving, and average number of, all-type and dementia-specific drug prescriptions.
Other studies with a post-diagnosis duration of at least one year showed varying patterns, with a German study of rural patients showing an increase in primary care physician visits from one year before until one year after dementia diagnosis, (30) and a UK study demonstrating an average number of GP visits in the one-year period after AD diagnosis similar to that in the three-year period before. (8) The varied diagnoses and atypical presentations of RRMC patients may have contributed to increasingly frequent FP visits as their condition progressed, as new signs and symptoms among people with atypical presentations tend to appear over time and the suspected cause of dementia changes. (2) Our findings underscore the key role of FPs in providing dementia care over the course of the illness, which Canadian recommendations suggest involves prevention, timely diagnosis, and post-diagnosis management including pharmacologic and nonpharmacologic treatment, management of co-occurring conditions, care coordination, caregiver support, and other functions. (31) Rural FPs may have limited capacity and local resources to provide dementia care; however, they are often generalists with a broad scope of practice and require a range of knowledge and skills to serve their communities. (32) We observed the highest proportion of hospitalized patients one year before memory clinic diagnosis and the highest average number of specialist visits one year after diagnosis, with a significant decreasing trend in both over the five-year period after diagnosis. In the absence of studies of comparably long duration, a German study demonstrated a decreasing number of specialist visits (dementia specialists visited by rural patients) over the one-year period after diagnosis. (30) Studies with separate pre- and post-diagnosis durations of at least one year revealed varying patterns in the proportion of patients hospitalized, with a US study showing an increasing trend during the one-year period before diagnosis and a declining trend in the one-year post-diagnosis period, (33) a UK (8) study demonstrating the highest proportion six months pre-diagnosis and a stable pattern during the one-year period after diagnosis, and a Finnish study (26) demonstrating the highest proportion six months post-diagnosis and a decreasing pattern during the two-year post-diagnosis period. In this study, 8-10% of patients were admitted to permanent long-term care each year after memory clinic diagnosis (and retained in the sample), which possibly contributed to decreased demand for, or undersupply of, specialized and acute medical care. (34) The location of RRMC patients was likely also a factor in decreasing use, as travel to urban centres where most specialists practice can be difficult for people with progressive cognitive and functional decline.
Limitations and Strengths
The interdisciplinary clinical diagnosis of RRMC patients based on Canadian guidelines is a strength of this study, as is the consideration of health service usage over 10 years, which is beyond the time frame of most studies. It should be noted that approximately 40% of patients received a memory clinic diagnosis other than AD or non-AD dementia, specifically a diagnosis of mild cognitive impairment, subjective cognitive impairment, or another condition. Moreover, it is possible that some patients received a dementia diagnosis from a health professional prior to the specialist RRMC evaluation. Therefore, this study does not necessarily reflect patterns corresponding to a first dementia diagnosis. For RRMC patients assessed in 2016 (n = 13), only four years of data were available; thus, service use in the fifth post-index year may have been marginally underestimated. Patients admitted to permanent long-term care after memory clinic diagnosis were retained, which possibly resulted in an underestimation of some services (e.g., specialist and inpatient care). As part of this study, we considered the top 10 most frequent diagnoses during physician visits and hospitalization; however, an in-depth exploration of reasons for visits was outside the scope of the study. Investigation of sex and gender differences in people with dementia and other neurodegenerative disorders is also important; however, we did not conduct sex-based or diagnosis-subtype analyses due to time and resource constraints, which limits interpretation.
CONCLUSIONS
Our study showed a pattern of increasing usage of certain health services (FP and specialist visits, hospital admission, all-type drug prescriptions) as early as four years before diagnosis in a rural/remote specialist memory clinic. The average number of FP visits, all-type drug prescriptions, and dementia-specific drug prescriptions increased across the 10-year study period, while the average number of specialist visits and the proportion hospitalized decreased in the five-year span after diagnosis. Given the ongoing role of FPs for memory clinic patients, and in light of a limited rural/remote FP supply, there is a pressing need to provide support in post-diagnostic management with more training, community resources, and inclusion of other health professionals in dementia care. Future studies should further investigate reasons for health service use by memory clinic patients. Longitudinal patterns in health service usage should also be examined, for instance by extending the observation period and examining the impact of sociodemographic and clinical factors such as sex, dementia subtype, and comorbidity on patterns.
CONFLICT OF INTEREST DISCLOSURES
We have read and understood the Canadian Geriatrics Journal's policy on conflicts of interest disclosure and declare that we have none.

Includes dates of insurance coverage, and dates of birth and death.
Physician services
Physician Services Claims File (Medical Services Branch): Available from 1990 onward. Includes claims for billing by physicians paid on a fee-for-service basis, and shadow billing by practitioners and primary health sites paid on an alternate, non-fee-for-service basis. (35)

Family physician visits (Annual pre- and post-index): Includes family medicine physicians and nurse practitioners. Visits were included regardless of location (office, home, hospital in-patient, hospital out-patient, emergency room, and other locations).
Specialist visits (Annual pre- and post-index)
Includes all specialties other than family medicine. A specialist visit requires a referral from a family physician or nurse practitioner. Visits were included regardless of location.
Type of specialist visited (Annual pre-and post-index) Family physician diagnoses (Pre-and post-index summaries) International Classification of Disease (ICD-9) codes were used to identify diagnoses. A maximum of one diagnosis code per service claim is allowed.
Specialist diagnoses (Pre-and post-index summaries) Diagnoses were identified by ICD-9 codes. One diagnosis code per service claim is allowed. a Patients admitted to permanent long-term care each year remained in the study. b Patients who died each year were removed from the study the year following. | 2023-09-04T15:09:16.390Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "ddb61b36c564b883a3e1350ab0dddd7046c8e172",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bb9971dde9097e9799d60855ee1b8a43feac7728",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269526468 | pes2o/s2orc | v3-fos-license | Preparation and characterization of artemether-loaded niosomes in Leishmania major-induced cutaneous leishmaniasis
Cutaneous leishmaniasis is the most prevalent form of leishmaniasis worldwide. Although various anti-leishmanial regimens have been considered, the lack of efficacy or the occurrence of adverse reactions makes the design and development of novel topical delivery systems essential. This study aimed to prepare artemether (ART)-loaded niosomes and evaluate their anti-leishmanial effects against Leishmania major. ART-loaded niosomes were prepared through the thin-film hydration technique and characterized in terms of particle size, zeta potential, morphology, differential scanning calorimetry, drug loading, and drug release. Furthermore, the anti-leishmanial effect of the preparation was assessed in vitro and in vivo. The prepared ART-loaded niosomes were spherical with average diameters of about 100 and 300 nm and high encapsulation efficiencies of > 99%. The in vitro cytotoxicity results revealed that ART-loaded niosomes had significantly higher anti-leishmanial activity, lower general toxicity, and a higher selectivity index (SI). Half-maximal inhibitory concentration (IC50) values of ART, ART-loaded niosomes, and liposomal amphotericin B were 39.09, 15.12, and 20 µg/mL, respectively. Also, according to the in vivo study results, ART-loaded niosomes with an average size of 300 nm showed the highest anti-leishmanial effects in animal studies. ART-loaded niosomes would thus be a promising topical drug delivery system for the management of cutaneous leishmaniasis.
Leishmaniasis is an infectious disease caused by protozoan parasites from different species of Leishmania. Leishmaniasis has three main clinical forms: cutaneous leishmaniasis (CL), visceral leishmaniasis (VL), and mucocutaneous leishmaniasis, among which CL is the most common form 1. Cutaneous leishmaniasis is mainly caused by Leishmania tropica, Leishmania major, and Leishmania aethiopica 2. Leishmania major (L. major) is considered the most common cause of cutaneous leishmaniasis in the Middle East 3. Disease severity varies from a self-limited skin lesion (cutaneous leishmaniasis; CL) to lesions spreading from the initial skin lesion to the mucosa (mucosal leishmaniasis; ML), or lesions spreading through the body uncontrollably (disseminated or diffuse cutaneous leishmaniasis; DCL). In its most severe form, the disease becomes a potentially fatal systemic illness with multi-organ involvement, including the spleen, liver, and bone marrow (kala-azar or visceral leishmaniasis; VL) 4,5. According to World Health Organization (WHO) reports, 700,000 to 1 million cases of leishmaniasis are diagnosed annually. Moreover, about 200,000 new cases of CL are reported to the WHO each year; however, since a large number of infected patients do not see a physician, the true incidence is estimated at 600,000 to 1 million cases annually 6.
The main therapeutic agents for the management of all clinical forms of leishmaniasis are pentavalent antimonial drugs, including meglumine antimoniate (Glucantime®) and sodium stibogluconate (Pentostam®) 7, which are the drugs of choice for leishmaniasis management according to the WHO 8. Other commonly used and clinically available drugs for leishmaniasis management are miltefosine, amphotericin B, pentamidine, and paromomycin. The two major drawbacks of these therapeutic agents are their potential toxicities 9 and the risk of drug resistance 10,11.
Artemisinin (from the Artemisia annua plant) and its derivatives, known as anti-malarial agents, have recently been considered as potential anti-leishmanial agents. Among them, artemether (ART) is the most commonly used artemisinin derivative for leishmaniasis treatment 12,13. The anti-parasitic effect of ART is attributed to the endoperoxide bridge in its structure 14. ART is first activated by intraparasitic heme-iron; cleavage of the endoperoxide bridge produces reactive oxygen species, which in turn can induce mitochondrial dysfunction and apoptotic-like death of Leishmania parasites 14,15. It has been reported that ART is capable of inhibiting both intracellular and extracellular growth of L. major 16. ART could thus be an efficient drug for leishmaniasis management; however, due to its short half-life (about 3 h), low water solubility, and low bioavailability 17 caused by pre-systemic metabolism, it must be administered frequently, which can increase systemic adverse reactions 13. Therefore, topical administration of ART for CL management is desirable. Since ART is a highly lipophilic drug with a logP value of 3.07 18 and has limited water solubility and low skin permeation, suitable lipid-based nanocarriers with high drug loading capacities, including solid lipid nanoparticles (SLNs), nanostructured lipid carriers (NLCs), liposomes, and niosomes, would be promising for topical drug delivery 19,20. To date, ART has been loaded in NLCs 20, polyvinyl alcohol (PVA) nanoparticles 12, and nanoemulsions 21 for CL management. Among various drug delivery systems, niosomes are able to target deeper skin layers and can be used in the treatment of CL; therefore, they were selected for ART delivery in the current study. Moreover, niosomes are capable of targeting macrophages in the deeper skin layers 22. Niosomes are non-ionic surfactant vesicles with high encapsulation efficiency for both hydrophilic and lipophilic drugs. They are therefore considered suitable nanocarriers for drug delivery purposes, especially topical drug delivery 23. Numerous anti-leishmanial drugs, including amphotericin B 24, pentamidine 25, tioxolone 26, miltefosine 27, dapsone 28, and ketoconazole 29, have been incorporated in niosomes for CL management. The main advantages of niosomes over traditional liposomes as topical drug delivery systems are their higher encapsulation efficiency, higher physicochemical stability, higher solubilization capacity, enhanced skin penetration due to their more flexible structure, and higher cutaneous permeation capability. In addition, niosomes can act as a depot for extended drug release. Furthermore, they can enhance the therapeutic efficacy of the loaded drug through delivery to the site of action and also reduce drug clearance 23. These properties, along with enhanced drug deposition within the target area, sustained and controlled drug release, and reduced systemic absorption, result in reduced adverse drug reactions 30,31. Various non-ionic surfactants, including polyoxyethylene lauryl ether (Brij 35), sorbitan monostearate (Span 60), polyoxyethylene stearyl ether (Brij 72), and polyoxyethylene (80) sorbitan monooleate (Tween 80), are commonly used in the preparation of vesicular nanoparticles, especially niosomes. The results of a recent study revealed that niosomes containing Brij 35 showed an enhanced dissolution profile of tacrolimus, a lipophilic agent 32.
The results of another study indicated that the type of surfactant used in niosome preparation can significantly affect the particle size of the niosomes. In this regard, niosomes containing Tween 60 showed significantly larger particle sizes than those containing Span 60 or Brij 72. Therefore, surfactants with higher HLB values can result in larger niosome particle sizes 33.
In this study, ART-loaded niosomes were designed and characterized for topical drug delivery to the deeper skin layers, especially infected macrophages, for CL management. First, the prepared niosomes were optimized in terms of particle size and drug loading. The fabricated optimum niosomal formulations were then characterized. Moreover, cytotoxicity and general toxicity assessments were performed on Leishmania promastigotes and the J774 macrophage cell line, respectively. In addition, the particle size of the ART-loaded niosomes was tuned to achieve optimal cellular uptake with specific cell targeting potential and high selectivity index (SI) values. Finally, the in vivo effect of niosomal ART gel on Leishmania lesions was assessed using animal models. The main goal of this study was to enhance the anti-leishmanial effects of ART through its encapsulation within niosomes as suitable drug delivery systems, especially for topical delivery. To the best of our knowledge, no data have been published to date regarding the design and development of ART-loaded niosomes for L. major-induced CL management.
Quantitative analysis
As shown in Supplementary Fig. S1, the results of HPLC method validation for ART analysis revealed that this method could specifically detect and quantify ART at a retention time of 6.2 min at a λmax of 205 nm. The linear regression assessment yielded an R-squared value of 0.9996, indicating sufficient linearity between ART concentration and the area under the curve (AUC). In addition, the method validation results showed acceptable sensitivity for ART analysis in pharmaceutical matrices, with a limit of quantification (LOQ) of 5 µg/mL and a limit of detection (LOD) of 1.66 µg/mL. Furthermore, the method presented sufficient intra- and inter-day accuracy and precision, consistent with the FDA guideline 34. Both intra- and inter-day accuracy values were between 90 and 110%, and the intra- and inter-day precisions were < 10%.
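As an illustration of how such calibration figures can be derived, the short sketch below fits a calibration line and applies the common ICH relations LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope; the concentration and peak-area values are hypothetical, not the study's raw data.

import numpy as np

# Hypothetical ART calibration standards (ug/mL) and HPLC peak areas
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
auc = np.array([51.0, 103.0, 255.0, 508.0, 1012.0])

slope, intercept = np.polyfit(conc, auc, 1)          # calibration line
pred = slope * conc + intercept
ss_res = np.sum((auc - pred) ** 2)
r_squared = 1 - ss_res / np.sum((auc - auc.mean()) ** 2)

sigma = np.sqrt(ss_res / (len(conc) - 2))            # residual standard deviation
lod = 3.3 * sigma / slope                            # limit of detection
loq = 10.0 * sigma / slope                           # limit of quantification
print(f"R^2 = {r_squared:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")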
Optimization of ART-loaded niosomes
The results of the preliminary study to obtain the optimum ratio of the lipid matrix, comprising triolein, Capryol PGMC, and cholesterol, are summarized in Supplementary Table S1. Based on the results, the lipid matrix of the optimum formulation (F6) consisted of 25% Capryol PGMC, 25% triolein, and 50% cholesterol, so this formulation was considered for further assessment. The amount of ART in the optimum formulation was set to 5% of the total mass of the lipid matrix, at which drug expulsion did not occur. Based on the independent variables introduced, the Design-Expert software suggested 27 runs. The results for the response variables, particle size and %EE, are summarized in Table 1. As shown, the particle size of the prepared niosomes was between 94 and 689 nm, and their %EE was between 93.58 and 100%. According to the results, the suggested model (a two-factor interaction (2FI) model) was not significant for particle size (P-value of 0.27); however, the lack of fit was also not significant (P-value of 0.09), indicating the fitness of the model. Moreover, based on the optimization results, there was a significant correlation between the concurrent effect of Brij 72 and Span 60 and the particle size of the niosomes. Based on the optimization findings, two different particle sizes of 100 and 300 nm with the desired %EE were targeted through Design-Expert and considered as the optimum formulations for assessing the effect of niosome particle size. The independent variables suggested by Design-Expert for these targeted particle sizes (100 and 300 nm) and the obtained response variables, including particle size and drug loading, are summarized in Table 2. Based on the results, the targeted niosomes suggested by the software (with a desirability value of 1) showed acceptable predictability for the particle size. Therefore, although the suggested model for size optimization by Design-Expert was not significant, it showed sufficient repeatability for the suggested runs. In addition, the targeted particle sizes confirmed the predictability of the model. Since the selected ranges for the independent variables were narrow, the proposed model was not significant; that is, these variables did not have a significant effect on particle size over the ranges studied. Furthermore, the optimum percentage of drug loaded in the lipid matrix (5% of the total mass of the lipids) was accompanied by no drug expulsion or rise in particle size. Therefore, the suggested model was not significant for %EE either.
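For readers without Design-Expert, a 2FI model is equivalent to an ordinary least-squares regression containing all main effects and pairwise interactions. The sketch below illustrates this with synthetic run data; the column names, ranges, and response values are illustrative only and are not taken from Table 1.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
runs = pd.DataFrame({
    "sl_ratio": rng.uniform(0.3, 0.7, 27),   # surfactant/lipid ratio (assumed range)
    "brij35": rng.uniform(0.5, 1.5, 27),
    "brij72": rng.uniform(0.5, 1.5, 27),
    "span60": rng.uniform(0.2, 0.8, 27),
})
# Synthetic response with a Brij 72 x Span 60 interaction, mimicking the
# concurrent effect reported above
runs["size"] = 300 + 150 * runs["brij72"] * runs["span60"] + rng.normal(0, 40, 27)

# "**2" expands to all main effects plus two-way interactions (the 2FI model)
model = smf.ols("size ~ (sl_ratio + brij35 + brij72 + span60)**2", data=runs).fit()
print(model.summary())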
Characterization of ART-loaded niosomes
Particle size, size distribution, and zeta potential analysis
The particle size and size distribution of the two optimum ART-loaded niosome formulations were assessed by static light scattering (SLS) and dynamic light scattering (DLS) techniques. As shown in Supplementary Fig. S2, particle size analyzer (PSA) results revealed that the prepared niosomes had average particle sizes of 103 ± 2 nm with a span index of 0.9 and 314 ± 1.5 nm with a span index of 0.266. DLS results, displayed in Fig. 1A, indicated a polydispersity index (PDI) of 0.266 ± 0.033, showing the homogeneity of the prepared niosomes.
The zeta potential of the prepared ART-loaded niosomes was −26.10 ± 3.03 mV, as shown in Fig. 1B. This negative zeta potential could induce electrostatic repulsion among the prepared niosomes, which could in turn result in a higher physical stability profile during storage and prevent aggregation over time.
TEM
As shown in Fig. 1C, morphological assessment of the prepared ART-loaded niosomes through TEM revealed that the optimum niosomal formulation was homogeneous in size and spherical in shape. In addition, the obtained particle size was compatible with the PSA and DLS results.
Drug loading assessment
The results of the drug loading assessment revealed that the %EE of ART within the niosomes with average diameters of 100 and 300 nm were 100 ± 0% and 99.46 ± 0.13%, respectively. Moreover, the %LC of ART within the niosomes with average diameters of 100 and 300 nm were 3.22 ± 0.02% and 1.67 ± 0.03%, respectively.
Stability assessment
The results of the stability assessment in terms of particle size, %EE, and %LC of ART-loaded niosomes with average diameters of 100 and 300 nm are shown in Fig. 2. Based on the results, the prepared ART-loaded niosomes had acceptable stability: no significant changes were observed in drug loading (P-values of 0.463 and 0.510 for the 100 nm and 300 nm formulations, respectively) or particle size (P-values of 0.585 and 0.418 for the 100 nm and 300 nm formulations, respectively) over this period. Furthermore, no drug expulsion occurred during the stability assessment period.
Differential scanning calorimetry (DSC) analysis
As shown in Fig. 3, the DSC thermogram of the physical mixture of lipids showed a glass transition temperature (Tg) of 55.25 °C, while this endothermic peak disappeared in the niosomal formulation, indicating that the prepared niosomes were amorphous and no crystallization was observed. The use of surfactants in the niosome formulation profoundly reduces the Tg. The DSC thermogram of ART, shown in Fig. 3C, has an endothermic peak at 85 °C corresponding to the melting point of ART.
Drug release assessment
The results of cumulative ART release from the niosomes and its comparison to the free drug are shown in Fig. 4. Based on the results, about 87.50 ± 4.02% of the total drug was released from the niosomes within 24 h. Drug release from the niosomes followed a biphasic pattern: an initial burst release within the first 2 h followed by sustained release over 24 h. In contrast, the free drug showed 100% release within 2.5 h. Therefore, it seems ART encapsulation within the niosomes results in a more sustained drug release pattern. The results of drug release kinetics from niosomes with an average diameter of 100 nm are summarized in Supplementary Table S2.
According to these results, ART release from the niosomes was best fitted by the first-order model. In addition, the Korsmeyer-Peppas equation was used to distinguish the most probable drug release mechanism from the niosomes. Since the obtained "n" value of the Korsmeyer-Peppas equation (n = 0.169) is < 0.5, the main mechanism of ART release from the niosomes is Fickian diffusion.
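The Korsmeyer-Peppas exponent reported above comes from a log-log regression of the released fraction against time, Mt/M∞ = k·t^n. A minimal sketch of that fit is shown below with illustrative release fractions (conventionally only the early portion of the release profile is used); these are not the study's measured values.

import numpy as np

t = np.array([1.0, 3.0, 7.0, 24.0])             # sampling times from the release study, h
released = np.array([0.55, 0.68, 0.78, 0.875])  # illustrative cumulative fractions Mt/Minf

# log(Mt/Minf) = log(k) + n*log(t); the slope of the log-log fit is n
n, log_k = np.polyfit(np.log(t), np.log(released), 1)
print(f"n = {n:.3f}, k = {np.exp(log_k):.3f}")  # n < 0.5 indicates Fickian diffusion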
In vitro anti-leishmanial effects against the promastigotes of L. major
The anti-leishmanial effects of ART solution, drug-free niosomes, ART-loaded niosomes, and liposomal amphotericin B (L-AMB) at concentrations of 2.5 to 1000 µg/mL against the promastigotes of L. major (Supplementary Fig. S3), together with the percentage of viability, were assessed. The half-maximal inhibitory concentration (IC50) values of ART, ART-loaded niosomes with average particle sizes of 100 and 300 nm, and L-AMB against promastigotes after 24 h were 39.09, 21.48, 15.12, and 20 µg/mL, respectively, as shown in Fig. 5C and Table 3. Based on the results, the ART-loaded niosomes with an average particle size of 300 nm showed the highest specific cytotoxic potential against L. major promastigotes in comparison to L-AMB. In this regard, the percentages of cell viability after 24 h were about 25% and 10% for L-AMB and ART-loaded niosomes with an average diameter of 300 nm, respectively. Moreover, two-way ANOVA revealed a significant difference between the cytotoxic effects of free ART and ART-loaded niosomes at all assessed concentrations against L. major promastigotes (P-value < 0.0001). In addition, as shown in Fig. 5, the anti-leishmanial effects of ART-loaded niosomes were significantly higher than those of free ART and drug-free niosomes (P-value < 0.0001 for both). Since there were significant differences in cell viabilities at concentrations from 2.5 to 1000 µg/mL (P-value < 0.05 for all concentrations), the cytotoxic effect of all evaluated formulations was concentration-dependent, and the percentage of cell viability was significantly reduced as concentration increased. The IC50, half-maximal cytotoxic concentration (CC50), and selectivity index (SI) values are presented in Table 3. The IC50 is the minimum drug concentration required for half-maximal toxicity against L. major promastigotes. The CC50 is the minimum drug concentration required for half-maximal general toxicity against intact macrophage cells. SI values indicate the selective toxicity of the assessed formulation for promastigotes relative to macrophage cells. According to the obtained results, the IC50 values of ART-loaded niosomes with average diameters of 100 nm and 300 nm were significantly lower than that of ART against L. major promastigotes after 24 h (P-values of 0.0001 and < 0.0001, respectively). The IC50 value of ART-loaded niosomes with an average diameter of 300 nm was significantly lower than that of L-AMB as the positive control (P-value of 0.008), while there was no significant difference between the IC50 values of ART-loaded niosomes with an average diameter of 100 nm and L-AMB (P-value of 0.210) against L. major promastigotes after 24 h. Moreover, the IC50 value of ART-loaded niosomes with a particle size of 300 nm was significantly lower than that of ART-loaded niosomes with a particle size of 100 nm (P-value of 0.003), indicating size-dependent cellular uptake of the prepared niosomes.
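IC50 values such as those above are typically estimated by fitting a sigmoidal dose-response curve to the viability data. A minimal sketch using a four-parameter logistic model is given below; the concentrations and viabilities are illustrative, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    # Four-parameter logistic: viability falls from `top` to `bottom` around ic50
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([2.5, 10.0, 40.0, 160.0, 640.0, 1000.0])     # ug/mL, illustrative
viability = np.array([95.0, 80.0, 45.0, 20.0, 12.0, 10.0])   # % viable promastigotes

params, _ = curve_fit(four_pl, conc, viability, p0=[10.0, 100.0, 20.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.1f} ug/mL")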
Cytotoxicity assessment using J774 cell line
The general toxicity of the free drug, ART-loaded niosomes with particle sizes of 100 and 300 nm, drug-free niosomes, and L-AMB as a positive control at concentrations of 2.5-1000 µg/mL against macrophage cells (Supplementary Fig. S3) was assessed at 24 and 48 h, and the results are shown in Fig. 5A,B. According to Fig. 5A, the percentage of cell viability at a concentration of 1000 µg/mL for the niosomal formulations with particle sizes of 100 and 300 nm after 24 h was about 85%, while L-AMB at the same concentration showed a cell viability of less than 40%. Therefore, the prepared ART-loaded niosomal formulation, having the lowest general toxicity in comparison to free ART and L-AMB, can be considered the safest preparation. The CC50 and SI values for ART, ART-loaded niosomes with particle sizes of 100 and 300 nm, and L-AMB against macrophages after 24 h are presented in Table 3. The obtained CC50 value of L-AMB was significantly lower than the CC50 values of free ART and ART-loaded niosomes with particle sizes of 100 nm and 300 nm (P-values of < 0.0001 for all formulations), confirming the higher general toxicity of L-AMB. Based on the results, ART-loaded niosomes with an average diameter of 300 nm showed the lowest IC50 of 15.12 µg/mL and the highest CC50 of 4882 µg/mL, which resulted in the highest SI value of 322.88. In contrast, L-AMB had an IC50 value of about 20 µg/mL, a CC50 of 244.70 µg/mL, and an SI value of 12.23, which was significantly lower than the SI value of the ART-loaded niosomes, indicating that the new formulation has a more selective anti-leishmanial effect. Therefore, the designed ART-loaded niosomal formulation could specifically induce the highest toxicity against L. major promastigotes with the lowest general toxicity in comparison to L-AMB and free ART. Although L-AMB showed a suitable IC50 of 20 µg/mL, its lower SI value and higher general toxicity make it inferior to the ART-loaded niosomal formulation.
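The selectivity indices in Table 3 follow directly from SI = CC50/IC50, which can be verified in one line:

# SI = CC50 / IC50, using the Table 3 values
print(4882.0 / 15.12)   # ~322.9 for ART-loaded niosomes (300 nm), matching the reported 322.88
print(244.70 / 20.0)    # ~12.2 for L-AMB, matching the reported 12.23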
In vivo anti-leishmanial effect
According to the results of the in vitro cytotoxicity assay, the ART-loaded niosomes with an average particle size of 300 nm provided the lowest IC50 and the highest SI values; therefore, this niosomal formulation was selected for further assessment in the animal study. In addition, it seems that these size-tuned nanoparticles allow better cellular uptake and internalization by infected macrophages, which in turn can lead to higher clinical efficacy of the niosomal formulation.
The in vivo anti-leishmanial effects of conventional ART gel 1% w/w, ART-loaded niosomal gel 1% w/w, drug-free niosomal gel, drug-free conventional gel, and topical nanoliposomal amphotericin B gel 0.4% w/w (SinaAmpholeish®) were assessed, and the results are presented in Fig. 5D. Based on the results, drug-free niosomal gel and drug-free conventional gel were accompanied by an increase in wound size during one month of treatment (Supplementary Fig. S4), while conventional ART gel, ART-loaded niosomal gel, and liposomal amphotericin B resulted in a reduction in wound size during one month of treatment (Supplementary Fig. S5). Moreover, the ART-loaded niosomal gel induced the highest therapeutic effect, resulting in the smallest wound size after one month of topical treatment. In addition, there was a significant difference between the anti-leishmanial effect of the ART-loaded niosomal gel and those of both the conventional ART gel (P-value < 0.0001) and the topical liposomal amphotericin B formulation (P-value of 0.01). According to Fig. 5D, after one month of treatment with ART-loaded niosomal gel, the lesion size was reduced to less than 50% of baseline, indicating a more efficient anti-leishmanial effect than conventional ART gel (P-value < 0.0001). Therefore, it seems that niosomes, owing to their flexible structure and the presence of non-ionic surfactants as permeation enhancers, can result in better skin deposition and higher cellular uptake, producing an improved anti-leishmanial effect.
Discussion
The optimum formulation of ART-loaded niosomes with the desired particle size was homogeneous in size and spherical in shape. The spherical shape of the obtained niosomes was confirmed using TEM analysis, and the results were compatible with those of simvastatin-loaded niosomes 35 and human growth hormone-loaded niosomes 36. It has been reported that the addition of oils, including triolein and Capryol PGMC, can reduce the particle size of nanoparticles 37; therefore, these ingredients were used to fabricate niosomes with tuned particle sizes. In addition, surfactants can have a pivotal effect on nanoparticle size: it has been reported that niosomes consisting of Span 60 and Brij 72 (i.e., non-ionic surfactants with lower HLBs) showed significantly smaller particle sizes than those consisting of Tween 60, which has a higher HLB 33. Therefore, in the current study, a mixture of Span 60, Brij 35, and Brij 72 was used to prepare niosomes with the desired particle sizes. The results of the current study revealed a significant correlation between the concurrent effect of Brij 72 and Span 60 and the particle size of the niosomes. Furthermore, it has been reported that niosomes containing Span 60, Span 40, or Brij 72 can induce localized delivery and a drug depot within the skin layers for topical drug delivery 38, which was the main purpose of the current project.
The negative zeta potential of −26.10 ± 3.03 mV supported the possible higher physical stability of the prepared niosomes due to the induction of electrostatic repulsion and the prevention of particle aggregation during the storage period. The negative zeta potential of the prepared niosomes can be attributed to the presence of Span 60 in their formulation, which can adsorb hydroxyl ions from the aqueous medium to its surface and induce a negative zeta potential 39. Moreover, it has been reported that cholesterol can significantly affect the zeta potential and electrostatic behavior of niosomes; in this regard, an increase in cholesterol concentration was associated with a decrease in the negative zeta potential value 40. This negative zeta potential was comparable to the results of previous studies on pilocarpine hydrochloride-loaded niosomes fabricated using non-ionic surfactants such as Span 60 41. In another study, a negative zeta potential was obtained for cyclosporine-loaded niosomes for topical drug delivery purposes 42.
Due to the lipophilic characteristics of ART, with a logP of 3.07, the high entrapment efficiency values of 98.78% and 100% for niosomes with average particle sizes of 100 and 300 nm, respectively, were predictable. The results of the current study were compatible with those of a previous study by Mirzaei-Parsa et al. on ART-loaded niosomes for breast cancer management, which showed an %EE of 82% 43. The high %EE of ART within the prepared niosomes can also be attributed to the presence of Span 60, a non-ionic surfactant, in their structure. It has been reported that niosomes comprising Span 60 can induce drug partitioning within the Span 60 bilayers, especially for lipophilic drugs, which in turn can result in enhanced drug loading and increased %EE 44. Moreover, it has been reported that the relative amounts of Span 60 and cholesterol can significantly affect the drug loading and %EE of lipophilic drugs within niosomes; in this regard, a higher surfactant concentration and a lower cholesterol concentration in the niosomal formulation were associated with enhanced %EE 45. The results of our previous study on ART-loaded NLCs showed an %EE of 89.57% 20; therefore, it seems that the niosomal formulation could enhance the %EE of ART in comparison to the NLC formulation. Moreover, the results of this study indicated that the particle size could affect the %EE, which was compatible with the findings of a previous study on hydroxycamptothecin-loaded niosomes, in which an increase in particle size from 82 to 204 nm enhanced the drug loading capacity of the prepared niosomes 46.

Table 3.
The IC50, CC50, and SI values for free artemether (ART), ART-loaded niosomes with average diameters of 100 nm and 300 nm, and liposomal amphotericin B (L-AMB) as a positive control. 1 Artemether. 2 Liposomal amphotericin B. 3 Half-maximal inhibitory concentration. 4 Half-maximal cytotoxic concentration. 5 Selectivity index.
The stability study revealed no significant changes in %EE, %LC, or particle size of the niosomes during one month of storage. These results were comparable to the findings of a previous study by Temprom et al., which described melatonin-loaded niosomes composed of Span 60 and cholesterol 47.
Transmission electron microscopy results revealed that the optimum formulation of ART-loaded niosomes was spherical in shape and homogeneous in particle size, consistent with the data obtained from the PSA and DLS techniques.
According to the results, ART release from the niosomes was significantly slower than that of the free drug. About 100% of free ART passed through the Amicon filter tubes within 2.5 h, while ART-loaded niosomes exhibited a biphasic release pattern: an initial burst release followed by sustained drug release (a cumulative 87.5% of the drug was slowly released from the niosomes within 24 h). Therefore, it seems that the niosomes efficiently sustained drug release in comparison to the free drug. ART release was probably faster than under normal conditions due to the presence of 20% ethanol in the release medium; however, the ethanol was essential to maintain sink conditions. With the obtained biphasic drug release pattern, longer drug deposition in the skin layers and thereby an enhanced localized effect at the wound site, less frequent drug administration, and minimal adverse drug reactions would be expected. The results of a previous study on topical delivery of adapalene-loaded niosomes revealed an initial burst release (26%) within 1 h followed by sustained release up to 12 h, with a cumulative drug release of 73% 48. Drug release kinetics revealed that ART release from the niosomes was best fitted by the first-order equation. Moreover, the n < 0.5 in the Korsmeyer-Peppas equation revealed that Fickian diffusion is the most probable mechanism of ART release from the niosomes. These results were compatible with those of a previous study on 8-methoxypsoralen-loaded niosomes for topical administration in psoriasis management, in which drug release followed a biphasic pattern, the kinetics were best fitted by the first-order model, and, according to the obtained "n" value, the drug release mechanism was mainly Fickian diffusion 49. The same drug release kinetics and mechanism were also reported for topical ethionamide- and D-cycloserine-loaded niosomes for tuberculosis management 50.
According to the results of the MTT assay, ART-loaded niosomes with a particle size of 300 nm showed the highest specific toxicity against L. major promastigotes at a concentration of 1000 µg/mL during 24 h of treatment. The SI value for ART-loaded niosomes with a particle size of 300 nm was higher than that for 100 nm, and the SI values of both formulations were much higher than those of L-AMB and free ART, indicating that the prepared ART-loaded niosomal formulation can target L. major more specifically and efficiently. The cell toxicity of the assessed formulations was therefore concentration- and size-dependent. MTT test results for the different therapeutic regimens against L. major promastigotes after 24 h indicated a significant difference between ART-loaded niosomes of both particle sizes and free ART (P-value < 0.05). The IC50 value of the optimum formulation (ART-loaded niosomes with an average diameter of 300 nm) against promastigotes after 24 h was calculated as 15.12 µg/mL; by contrast, a previous study on glucantime encapsulated in niosomes against cutaneous leishmaniasis (CL) reported an IC50 value of 690.8 ± 37.9 μg/mL 24.
The J774 macrophage cell line was used to investigate the general toxicity of the different formulations. ART-loaded niosomes showed significantly lower general toxicity in comparison to free ART and L-AMB. Based on the results, ART-loaded niosomes with a particle size of 300 nm showed the lowest level of general toxicity, with a cell viability of more than 85% after 24 h (Fig. 5A). The SI value in the current study was 322.88 for ART-loaded niosomes with a particle size of 300 nm, while a previous study by Mostafavi et al., who prepared amphotericin B in combination with selenium-loaded niosomes, reported an SI value of 288.43 51. In addition, our previous study on ART-loaded NLCs at lower concentrations showed an SI value of 56.03 after 24 h 20, while the current study on ART-loaded niosomes resulted in a much higher SI value of 322.88. Therefore, these results support the superiority of the niosomes over the NLCs for both drug loading and selective, efficient ART delivery to L. major promastigotes.
Based on the quantitative assessment of wound size during the animal study, there was no difference in wound size between the groups during the first week of treatment (P-value > 0.05). However, from the second week of treatment, the group that received ART-loaded niosomal gel showed a significant reduction in wound size compared to the other groups. On the other hand, the mean wound sizes in the negative control groups (drug-free conventional gel and drug-free niosomal gel) increased significantly (P-value < 0.0001 for both) during the treatment course. In general, the group treated with ART-loaded niosomal gel showed the greatest wound healing capability in comparison to the free drug (P-value < 0.0001) and L-AMB as the positive control (P-value of 0.01). In the current study, liposomal amphotericin B was considered the positive control since, according to the WHO guideline, it is one of the approved drugs for leishmaniasis treatment 8. Moreover, a previous study on the effect of topical liposomal amphotericin B 0.4% w/w in a murine model of cutaneous leishmaniasis revealed a significant reduction in lesion size 8 weeks after initiation of treatment in comparison to the negative control group 52.
In conclusion, a novel topical drug delivery system consisting of ART-loaded niosomes was designed, optimized, and characterized. The optimum niosomal formulation showed the desired particle size of 300 nm with a negative zeta potential and a reasonable %EE. The in vitro MTT assay of the ART-loaded niosomes showed the superiority of the niosomal formulation with an average particle size of 300 nm in comparison to the free drug (ART) and L-AMB. The niosomal formulation showed the highest specific toxicity against L. major promastigotes with the lowest general toxicity against intact macrophage cells. Moreover, the in vitro drug release assessment revealed a sustained release pattern, which can allow less frequent drug administration. The in vivo animal study revealed that the ART-loaded niosomal gel formulation with a particle size of 300 nm was the most effective formulation for decreasing leishmanial wound sizes. Therefore, the designed ART niosomal gel, as a novel topical drug delivery system, could be promising in the treatment of cutaneous leishmaniasis caused by L. major. However, further clinical assessment is needed to demonstrate the clinical superiority of this niosomal ART formulation in cutaneous leishmaniasis management.
Methods
All experiments were performed in accordance with relevant guidelines and regulations.
Materials
Cholesterol was purchased from Merck, Germany. Brij
Statistical analysis
In this study, statistical analysis was performed using Design-Expert software (version 10.0.7, Stat-Ease Inc., Minneapolis, USA) and SPSS software (version 26); a P-value of < 0.05 was considered significant.
Quantitative determination of Artemether
Artemether analysis was performed through a validated reverse-phase high-performance liquid chromatography (RP-HPLC) method using an Agilent HPLC instrument (Agilent Technology 1260 Infinity, USA) equipped with a UV detector. The mobile phase consisted of acetonitrile:water (75:25 %v/v). The flow rate was 1 mL/min, the column temperature was fixed at 25 °C, and the λmax was set at 205 nm.
Preparation of ART-loaded niosomes
Niosomes were prepared through a modified thin-film hydration technique 14,53. In this regard, a lipid mixture containing triolein, Capryol PGMC, and cholesterol (at a ratio of 25:25:50 %w/w); surfactants including Brij 35, Brij 72, and Span 60 (at a ratio of 1.2:1.2:0.5 %w/w, with a surfactant/lipid ratio of 0.5:1); and ART (5% w/w of the lipid mixture) were dissolved in a mixture of methanol and chloroform (1:1 %v/v). The solvent was then evaporated at 50 °C and 60 rpm under the vacuum of a rotary evaporator, leaving a dried thin film on the inner wall of the flask. The dried thin film was hydrated using 10 mL of phosphate-buffered saline (PBS, pH 7.4) at 50 °C and 100 rpm for 30 min to fabricate the niosomal formulation. Finally, the sample was sonicated in 3 cycles of 5 min to achieve a uniform particle size distribution 54.
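The batch arithmetic implied by these ratios can be made explicit as in the sketch below, assuming a 1 g lipid mixture (the absolute batch mass is not reported, so the scale is an assumption):

# Component masses for an assumed 1 g lipid mixture
lipid_total = 1.0  # g (assumed batch size)
lipids = {"triolein": 0.25 * lipid_total,
          "capryol_pgmc": 0.25 * lipid_total,
          "cholesterol": 0.50 * lipid_total}
surfactant_total = 0.5 * lipid_total  # surfactant/lipid ratio of 0.5:1
w = {"brij35": 1.2, "brij72": 1.2, "span60": 0.5}  # stated 1.2:1.2:0.5 ratio
surfactants = {k: surfactant_total * v / sum(w.values()) for k, v in w.items()}
art = 0.05 * lipid_total  # ART at 5% w/w of the lipid mixture
print(lipids, surfactants, art)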
Optimization of ART-loaded niosomes
Formulation optimization was performed by response surface optimal design using Design-Expert software (version 10.0, Stat-Ease Inc., Minneapolis, USA), following the same methodology as our previous studies 14,55. Four independent variables were considered in the optimization process: (1) the surfactant-to-lipid (S/L) ratio, (2) the percentage of Brij 35, (3) the percentage of Brij 72, and (4) the percentage of Span 60. The response variables were particle size and entrapment efficiency (%EE). According to the defined independent variables, the software suggested 27 runs, as shown in Table 1. Finally, based on the optimization results, ART-loaded niosomes with average diameters of 100 and 300 nm and the desired %EE values were targeted by Design-Expert® software (version 10.0.7, Stat-Ease Inc., Minneapolis, USA) for further characterization tests. The rationale for selecting particle sizes of 100 nm and 300 nm was that, according to previous studies, nanoparticles with an average diameter of about 100-300 nm can efficiently enhance skin permeation of the loaded drug 25,56,57, which was the main purpose of the current study for CL management.
Characterization of ART-loaded niosomes
The optimized ART-loaded niosomes were characterized in terms of particle size, size distribution, morphology, physicochemical stability, drug loading, drug release, and differential scanning calorimetry.
Particle size, size distribution, and zeta potential analysis
The particle size and size distribution of the freshly prepared niosomes were analyzed through two different methods: (1) the static light scattering (SLS) technique, using a particle size analyzer (PSA; SHIMADZU, SALD-2101, Japan), and (2) dynamic light scattering (DLS; Germany). The span index was calculated according to Eq. (1) to assess the polydispersity and homogeneity of the prepared nanoparticles. In addition, the zeta potential of the prepared ART-loaded niosomes was assessed using a Zeta-Chek (Microtract, ZC007, Germany).
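Eq. (1) is not reproduced in the text; assuming the standard definition of the span index, span = (D90 − D10)/D50, the calculation is:

def span_index(d10: float, d50: float, d90: float) -> float:
    # Standard span definition (assumed to correspond to Eq. (1))
    return (d90 - d10) / d50

print(span_index(60.0, 103.0, 155.0))  # ~0.92 for illustrative diameters in nm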
Drug loading assessment
Drug loading was assessed through the centrifugation-ultrafiltration technique using Amicon filter tubes (MWCO 3 kDa, Amicon Ultra-4, Millipore Co., MA, USA) 14. In this regard, 5 mL of the prepared ART-niosomes was poured into the upper chamber of the Amicon® filter tubes. The samples were centrifuged at 4000 rpm for 15 min, and the filtrate was assessed using the validated HPLC method. Furthermore, entrapment efficiency (%EE) and loading capacity (%LC) were estimated using Eqs. (2) and (3), respectively.
Here, the loaded drug (mg) is the total drug minus the unloaded drug, where the total drug is the initial amount of drug used to prepare the niosomes; the total weight of the niosomes (mg) is the initial weight of the lipid mixture and surfactants used in their preparation. Unloaded drug levels were calculated through analysis of the filtrate obtained from the Amicon® filter tubes using the validated HPLC method.
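A direct translation of Eqs. (2) and (3) as described above is:

def percent_ee(total_drug_mg: float, unloaded_drug_mg: float) -> float:
    # Eq. (2): %EE = loaded drug / total drug x 100
    return (total_drug_mg - unloaded_drug_mg) / total_drug_mg * 100.0

def percent_lc(total_drug_mg: float, unloaded_drug_mg: float,
               niosome_weight_mg: float) -> float:
    # Eq. (3): %LC = loaded drug / total niosome weight x 100
    return (total_drug_mg - unloaded_drug_mg) / niosome_weight_mg * 100.0

# Illustrative masses chosen to reproduce the reported 100 nm values (%EE = 100, %LC ~ 3.22)
print(percent_ee(50.0, 0.0), percent_lc(50.0, 0.0, 1553.0))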
Stability assessment
The stability of the optimum ART-loaded niosome formulations (100 and 300 nm) was assessed in terms of particle size, drug loading (%EE and %LC), and possible drug expulsion 14. In this regard, the samples were stored both at room temperature (25 °C) and in the refrigerator (4 °C) for up to one month and were assessed regularly.
Transmission electron microscopy (TEM)
The morphology and particle size of the prepared ART-niosomes were assessed by TEM (Philips, Leo 906E, Germany; voltage of 80 kV). Briefly, the prepared ART-niosomes were fixed on copper grids and stained with uranyl acetate to visualize the niosomes.
Differential scanning calorimetry (DSC) analysis
Differential scanning calorimetry (DSC-Q600, IndiaMart, India) was performed for ART, the physical mixture of the lipids, and the prepared niosomes, using 50 mg of each sample. The scan rate was set at 10 °C/min, and the samples were heated from 25 °C up to 100, 90, and 200 °C for ART, the niosomes, and the lipid mixture, respectively.
Drug release study
Drug release testing was performed through the centrifugation-ultrafiltration technique 58. In this regard, 750 μL of ART-loaded niosomes with an average diameter of 100 nm was first mixed with 4250 µL of release medium (80 %v/v phosphate-buffered saline (PBS), pH 7.4, and 20 %v/v ethanol) to maintain sink conditions. The mixture was placed in a shaker-incubator at 100 rpm and 37 °C, and samples were taken after 1, 3, 7, and 24 h. The samples were poured into Amicon filter tubes and centrifuged at 4000 rpm for 5 min, and the filtrate was analyzed. The amount of released drug was calculated, and the results were compared with the permeation of the free drug. Moreover, the drug release kinetics and the most probable mechanism of drug release from the niosomes were assessed.
In vitro anti-leishmanial effects against the promastigotes of L. major
The L. major standard strain (Mcan/IR/07/Moheb/-gh) was obtained from infected Balb/c mice from the Department of Parasitology and Mycology, Shiraz University of Medical Sciences, Shiraz, Iran. Since the promastigote is an extracellular form of the parasite, the specific cytotoxicity of the designed formulations was assessed on promastigotes. The promastigotes were cultured in RPMI 1640 medium enriched with FBS (15 %v/v), penicillin (100 IU/mL), and streptomycin (100 µg/mL) and incubated at 24-26 °C. The stationary-phase promastigotes were sub-cultured to produce a higher number of parasites 59. The obtained parasites were used specifically for each experiment. The anti-leishmanial effects of ART solution (prepared from the standard powder of ART), drug-free niosomes, and ART-loaded niosomes with average diameters of 100 and 300 nm were assessed through the MTT assay 60, and the results were compared with L-AMB (AmBisome®, Gilead Sciences Co., California, USA) as the positive control and an untreated well as the negative control. ART-loaded niosomes with average diameters of 100 and 300 nm were selected to assess the effect of particle size on cell internalization. For this test, 100 µL of parasite suspension at a concentration of 1 × 10^6 parasites/mL was added to a 96-well microplate; then, 100 µL of ART,
Figure 2 .
Figure 2. (A) Entrapment efficiency (%EE) of artemether (ART)-loaded niosomes with an average diameter of 100 nm, (B) %EE of ART-loaded niosomes with an average diameter of 300 nm, (C) Loading capacity (%LC) of the ART-loaded niosomes with an average diameter of 100 nm, and (D) %LC of ART-loaded niosomes with an average diameter of 300 nm, (E) Average particle size of ART-loaded niosomes 100 nm and (F) Average particle size of ART-loaded niosomes 300 nm during one month of storage at room temperature (25 °C) and refrigerator (4 °C) (N = 3 for all experiments).
Figure 3 .
Figure 3. (A) Differential scanning calorimetry (DSC) thermogram of artemether; (B) DSC thermogram of physical mixture of lipid matrix; and (C) DSC thermogram of niosomes with an average diameter of 100 nm.
Figure 4 .
Figure 4. Artemether (Art) release from optimum niosomal formulation (Art-Niosomes) with an average diameter of about 100 nm in comparison to free drug (Art) permeation using centrifugation ultrafiltration technique (N = 3).
Table 1 .
Independent variables and response factors of 27 runs suggested by Design-Expert software.
Table 2 .
Targeted runs by Design-Expert software and the amounts of ingredients for artemether (ART)-loaded niosomes with particle sizes of 100 and 300 nm. 1 Phosphate-buffered saline. 2 Loading capacity. 3 Entrapment efficiency. | 2024-05-04T06:17:08.300Z | 2024-05-02T00:00:00.000 | {
"year": 2024,
"sha1": "f9cf39ebbf6b2c27a0556ef01eb698f7df22e398",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e2a4f612c0c840bdc02e5b82ca912ba5b551c4a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251131465 | pes2o/s2orc | v3-fos-license | Smoking in the workplace: A study of female call center employees in South Korea
Smoking among women is characteristically high in call center employees and is associated with various individual and work-related characteristics, which have received little attention so far. This study explored the differences in intrapersonal and interpersonal characteristics and environmental factors among Korean women working in call centers by smoking status, based on an ecological model. In this cross-sectional study, an anonymous online survey was conducted among a sample of female employees from three credit card-based call centers (N = 588). Differences in intrapersonal (social nicotine dependence, smoking attitudes, emotional labor), interpersonal (smoking among family or friends, social support), and environmental factors (smoking cessation education, and perceived and preferred smoking policy at work) were compared according to smoking status (smokers, ex-smokers, and never smokers). Approximately 20% (n = 115) were smokers. Smokers were younger, mostly unmarried, had lower education, and had poorer perceived health status than ex- and never smokers. The mean scores for social nicotine dependence and smoking attitude were the highest among smokers, indicating their tendency to underestimate the negative effects of smoking. They also reported the highest level of emotional labor, with about half (50.4%) and almost all (95.7%) reporting smoking among their family members and friends, respectively. Smokers took a lenient stance on the smoking ban policy. The results indicated the necessity of developing tailored smoking cessation programs to motivate female call center employees to quit smoking. As call centers may have a smoking-friendly environment, comprehensive smoking prevention programs considering multilevel factors are required to support smoking cessation.
Introduction
Globally, smoking is one of the most serious public health threats, killing more than 8 million people annually [1]. Although the total number of smokers in South Korea has been decreasing, the proportion of female smokers, especially young ones, increased from 5.5% to 7.5% between 2015 and 2018 [2]. Recent studies have reported that women in call centers have a higher prevalence of smoking, ranging from 20 to 37% [3,4]. However, owing to the negative cultural and social atmosphere surrounding female smokers in South Korea, these rates may be underreported [5][6][7]. Despite this atmosphere, the relatively high rate of smoking among female call center workers may be due to various factors such as the work environment, job-related stress, and perceived personal benefits from smoking [3,4]. Telecalling jobs, commonly perceived as a women's profession, are characterized by low wages, insecure employment status, and emotional labor in dealing with customer hostility and verbal abuse [8,9]. Studies have demonstrated that women are more affected by the negative psychological consequences of emotional labor, such as burnout and low job satisfaction [10][11][12]. The generally low socioeconomic status and vulnerable job conditions may affect health behaviors, contributing to the high prevalence of smoking among female call center employees [9,13]. Call center work requires employees to control their feelings and reactions to satisfy their customers, even when customers are hostile and verbally abusive; this is known as emotional labor [3,4].
Smoking behaviors are complex phenomena associated with various personal, social, and environmental factors. The ecological model suggests that health-related behaviors are influenced by social and environmental factors [14]; this can be useful for comprehensively understanding behaviors and developing interventions for promoting healthy lifestyles. This study, which aimed to comprehensively understand the factors related to smoking behaviors among female call center employees, was guided by the ecological model (Fig 1). Intrapersonal factors in this model are individual characteristics that influence behavior [14]. In this study, social nicotine dependence, smoking attitudes, and emotional labor were selected to represent intrapersonal factors among female call center employees. Female smokers are more likely to be influenced by non-nicotine factors of smoking [15,16]. The concept of social nicotine dependence describes the psychosocial linkage with smoking. As pointed out by Kano [17], smokers tend to underestimate the negative effects of smoking and have a positive perception of its favorable effects. Smoking attitudes indicate the degree of positive beliefs about smoking [18]. The belief that smoking has harmful health effects may reduce the risk of smoking among women, which is characteristically high in specific occupations [19] and is related to emotional labor, as suggested in previous studies [8,20].
Interpersonal factors, such as family or friends' smoking status, may be important risk factors for female smokers, and smoking behavior can spread via such social network members [21]. A previous study demonstrated that smoking cessation among family and close friends was also related to smoking cessation in female call center employees [22]. These studies suggest that family and friends are important influencers of smoking behavior. Studies have also demonstrated that increased levels of social support are associated with reduced health-risk behaviors such as smoking, and emotional support has an overall positive effect on an individual's health regardless of stressful events [23,24]. Feeling supported may have a positive emotional buffering effect on call center employees. Although studies have demonstrated the link between social support and smoking status among the general population and patient groups [23,25,26], few studies have addressed this relationship in female call center employees as a vulnerable group.
The perception and preference of smoking policy at work were selected as environmental factors for this study. Call centers are described as a "paradise" for female smokers in South Korea, who expressed that they could smoke freely and comfortably in the smoking rooms without any social discrimination [3]. A recent study demonstrated that about 16% of smokers started smoking after they started working at a call center [4]. These findings suggest that there may be favorable environmental factors, such as smoking policies at work, that encourage women to smoke. Therefore, it is necessary to investigate the perception of current smoking policies at work and the preferred smoking policy for female call center employees.
It is essential to understand the characteristics of smoking-related factors to identify effective strategies for smoking cessation. Comparing the intrapersonal, interpersonal, and environmental differences between smokers and never smokers within the same occupational group can help identify smoking-related risk factors, which can be crucial in quitting smoking. Although the number of female smokers is steadily increasing in South Korea, little is known about female smoking with respect to ecological models. This study aims to 1) describe the intrapersonal, interpersonal, and environmental factors influencing smoking among female employees in call centers; and 2) explore the differences in those factors based on smoking status.
Study design and participants
A cross-sectional study using an anonymous online survey was conducted from February to April 2021. A priori computation of the sample size using G*Power version 3.1 revealed that 567 participants were required for a three-group design with an effect size (f) of 0.15, an alpha of 0.05, and a power of 0.90. Potential participants were recruited from the call centers of three leading South Korean credit card companies, each with approximately 1000 employees. The authors contacted unit managers to explain the study's purpose and procedures and to distribute the research flyers. The flyers included a cover letter containing a summary of the research as well as a link to the survey, which could be completed anonymously. When potential participants clicked on the survey link, they were taken to a webpage containing detailed descriptions of the research, including the data collection procedure and the study's voluntary and anonymous nature. After reading the detailed information regarding the study and consenting to participate, they clicked the research consent button to proceed to the online survey.
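The same a priori computation can be approximated outside G*Power; for instance, with statsmodels the total sample size for a one-way, three-group ANOVA comes out close to the 567 reported above:

from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=0.15, alpha=0.05,
                                        power=0.90, k_groups=3)
print(round(n_total))  # total required N; should be close to G*Power's 567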
The inclusion criteria were female call center employees with at least six months of working experience. Eligibility screening questions were located at the beginning of the survey. Eligibility was determined through self-reports. The online survey was programmed to close automatically for those who did not meet the inclusion criteria. Of the 618 call center employees who completed the initial assessment, 30 did not meet the inclusion criteria yielding a final sample of 588 women. Those who completed the online survey were given a mobile gift voucher worth approximately $10. The study protocol was reviewed and approved by the appropriate ethics committee (AJIRB-SBR-SUR-20-561), and the study was conducted in accordance with the Declaration of Helsinki.
Measures
Smoking status.
Smoking status was assessed based on self-reported current smoking status and the number of cigarettes smoked per day. Participants were considered smokers if they reported smoking at least 100 cigarettes in their lifetime and smoked presently. Ex-smokers had smoked more than 100 cigarettes in their lifetime but did not smoke presently. Never smokers were those who had smoked less than 100 cigarettes in their lifetime and did not smoke presently.
Intrapersonal factors. Intrapersonal factors included social nicotine dependence, smoking attitudes, and emotional labor. Social nicotine dependence was assessed using the 10-item Korean version of the Kano Test for Social Nicotine Dependence questionnaire [17,27]. Each item was scored on a 4-point Likert scale, ranging from 0 (strongly disagree) to 3 (strongly agree). The total scores were calculated by summing the item scores and ranged from 0 to 30. Higher scores indicated a high level of psychosocial dependence on smoking. In this study, Cronbach's alpha was 0.88.
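Scoring this scale is a simple item sum, and the reported reliability can be reproduced with a standard Cronbach's alpha computation. The sketch below uses a simulated response matrix (rows = respondents, columns = the ten 0-3 items), not the study's data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(1.5, 0.7, size=(100, 1))           # latent dependence level
responses = np.clip(np.round(trait + rng.normal(0, 0.6, size=(100, 10))), 0, 3)
totals = responses.sum(axis=1)                        # KTSND totals, range 0-30
print(totals[:5], round(cronbach_alpha(responses), 2))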
The 7-item Attitude of Smoking scale was used to measure smoking attitudes. The scale was used for the Teenage Attitudes and Practice Survey by the National Center for Health Statistics in the US and was translated into Korean by Lee [18,28]. The participants were asked to rate their general perceptions about smoking and its health effects on a 4-point Likert scale, ranging from 0 (strongly disagree) to 3 (strongly agree). The total smoking attitude score was calculated by summing the item scores, with possible totals ranging from 0 to 21. Higher scores indicated a positive attitude toward smoking. In this study, Cronbach's alpha was 0.82.
The participants' level of emotional labor was measured using the Emotional Labor Scale, consisting of 14 items with five subscales: frequency, intensity, variety, surface acting, and deep acting. Surface acting refers to faking and suppressing emotions, while deep acting implies controlling internal feelings and thoughts [29,30]. For instance, a surface-acting item was "Pretend to have emotions that I don't really feel," while a deep-acting item was "Really try to feel the emotions I have to show as part of my job." All emotional labor items were measured on a 5-point Likert scale, ranging from 1 (not at all) to 5 (always). Summary scores were calculated by averaging the item scores and ranged from 1 to 5. Higher scores indicated greater emotional labor. This scale was found to have good internal reliability for South Korean emotional laborers [30], and Cronbach's alpha in this study was 0.84.
Interpersonal factors. In this study, interpersonal factors included family or friends' smoking status and social support. The smoking behavior of family members living together was assessed with a yes/no question. Likewise, friends' smoking behavior was assessed as a yes/no response to whether there were smokers among the friends they frequently met.
Levels of social support from family, friends, and other significant persons were assessed using the 12-item Korean version of the Multidimensional Scale of Perceived Social Support [31]. Each item was scored on a 7-point Likert scale, ranging from 1 (very strongly disagree) to 7 (very strongly agree). The total score was calculated by summing the item scores, with higher scores indicating higher levels of social support. Cronbach's alpha was 0.88 at the time of scale development [31] and was 0.96 in this study.
Environmental factors. Perception of the current smoking policy at work, smoking policy preferences at work, and smoking cessation education were selected as environmental factors. Perceptions of and preferences for the smoking policy at work were self-reported using the questions from the study by Willemsen et al. [32]. The following question was used to assess the current smoking policy: "How is smoking by employees regulated at your workplace?"
Data analysis. The data were analyzed descriptively using IBM SPSS software (version 23.0; IBM Corp., Armonk, NY, USA). Before the analysis, the data were inspected for suspected errors, missing data, and outliers, and no issues were identified during the screening. The study variables were summarized as frequencies and percentages for categorical variables and as means (± standard deviations) for continuous variables. Chi-square tests and ANOVAs were used to compare differences in the study variables according to smoking status. The level of significance was set at p < 0.05.
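As an illustration of this comparison step, the sketch below (ours; the study itself used SPSS) runs one chi-square test and one one-way ANOVA in Python, with hypothetical file and column names:

```python
import pandas as pd
from scipy import stats

# Hypothetical path and column names; see the S1 data file for the real layout.
df = pd.read_excel("s1_file.xlsx")

# Chi-square test: a categorical variable vs. the three smoking groups.
table = pd.crosstab(df["marital_status"], df["smoking_status"])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA: a continuous variable across the same three groups.
groups = [g["social_support"].to_numpy() for _, g in df.groupby("smoking_status")]
f_stat, p_anova = stats.f_oneway(*groups)

print(f"chi2 = {chi2:.2f} (p = {p_chi:.3f}); F = {f_stat:.2f} (p = {p_anova:.3f})")
```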
Results
The distribution of participants' characteristics and intrapersonal, interpersonal, and environmental factors are summarized in Table 1. About 20% were smokers, 12.1% were ex-smokers, and 68.4% were never smokers. The average age was 41.36 (±8.85) years, and approximately 58% of the participants were married. Approximately 40.1% perceived their health status as good.
For intrapersonal factors, the average levels of social nicotine dependence, smoking attitude, and emotional labor were 12.38 out of 30, 6.37 out of 21, and 3.28 out of 5, respectively. Among the subscales of emotional labor, the mean score for frequency (3.56) was the highest, while the mean score for intensity (2.80) was the lowest. The prevalence of family and friends' smoking was 41.2% and 69.0%, respectively. Overall, 27.7% of participants perceived no explicit smoking policy at work, while 29.4% reported a complete smoking ban.
Differences in the participants' characteristics and intrapersonal factors according to smoking status are presented in Table 2. Smokers were younger (p < .001), mostly unmarried (p < .001), had lower education (p = .001), and had poorer perceived health status (p < .001) than ex- or never smokers. The mean scores for social nicotine dependence were highest in smokers (20.57) and lowest in never smokers (9.82), indicating higher levels of psychological and psychosocial dependence on smoking among smokers (p < .001), who also experienced higher levels of emotional labor, especially in the subscales of intensity (p < .001), variety (p < .001), and surface acting (p = .001).
The distribution of interpersonal and environmental factors by smoking status is depicted separately in Figs 2 and 3. The prevalence of family smoking (p = .010) and friends' smoking (p < .001) differed significantly according to smoking status. Both were highest among smokers, with about half (50.4%) and almost all (95.7%) reporting smoking in their family and friends, respectively (Fig 2). The prevalence of smoking among never smokers' family and friends was 38.6% and 58.5%, respectively. The level of social support was lowest among smokers (63.76) and highest among never smokers (67.98) (p = .019). Perceptions of and attitudes about the smoking policy at work also differed according to smoking status (Fig 3). Smokers tended to think that there was no smoking policy at work (37.4%) or that there was a moderate restriction (38.3%) (p < .001). Their stance on a smoking ban policy was also lenient: 14.8% of smokers reported that no explicit policy was needed, and 13% preferred a smoking ban applied only to public areas. In contrast, 75.6% of never smokers preferred to allow smoking only in designated areas, and 19.4% preferred a complete smoking ban (p < .001). There was no statistical difference in smoking cessation education according to smoking status (p = .736).
Discussion
The purpose of this study was to describe smoking behaviors according to the ecological model and to explore differences in these factors based on smoking status. Approximately 20% of the female call center employees in this study were smokers. Considering the social taboo regarding female smoking, the actual rate might be higher than the self-reported rate [33,34]. In this study, female smokers generally had poorer health and lower levels of education, and our findings confirm their vulnerable status [33,35].
Regarding the intrapersonal factors, the levels of social nicotine dependence, smoking attitude, and emotional labor were explored. The average score for social nicotine dependence among smokers (20.57) was significantly higher than that of ex-smokers (13.66) and never smokers (9.82). Interestingly, the level of social nicotine dependence in this study population was very high compared with the general South Korean population [27] or with other ethnic groups [36] in previous study samples. High social nicotine dependence implies difficulties in quitting smoking [17] and little interest in smoking cessation interventions [37]. Female smokers among call center employees had little intention to quit smoking and seemed to be in a nicotine-dependent culture. In addition, the participants showed a high level of positive beliefs about smoking. Positive attitudes toward smoking were almost three times higher in smokers (11.73) than in never smokers (4.65). We found that smokers, compared to never smokers and ex-smokers, experienced higher intensity, variety, and surface acting in emotional labor. The scores for surface acting were higher in smokers, suggesting a greater tendency to hide or suppress their emotions to satisfy customers [29]. A recent study reported that surface acting is strongly associated with stress responses and that the relationship between emotional labor and occupational stress differs according to smoking status [4]. These results are consistent with previous studies in which female employees used smoking as a stress-management method [3,16]. The development of adaptive emotion-regulation skills, such as stress management in stress- or anxiety-inducing situations, is an essential element of smoking cessation programs for female employees.
Regarding interpersonal factors, the proportion of smokers among family and friends was approximately twice as high for female smokers, suggesting a favorable environment for and a positive attitude toward smoking. A recent qualitative study on female smoking behavior, attitudes, and experience reported that most female smokers grew up with, and were accompanied by, smoker fathers or male friends, and thus tended to perceive smoking positively as a social norm and communication tool [38]. Studies have demonstrated that increased levels of social support are associated with reduced health-risk behaviors [39]; this may help female call center employees engage in healthier behaviors. In this study, smokers scored lower than never smokers on social support, consistent with the findings of previous studies [25]. It has been reported that cancer survivors with better mental health and frequent social support are less likely to be smokers [25], implying that abstinence-related social support may be effective in smoking cessation interventions.
Smoking cessation education, perception, and preference regarding the smoking policy at work were explored as environmental factors. The perception of and preference regarding the smoking policy at work differed among smokers, ex-smokers, and never smokers. Smokers tended to think that there was no smoking policy at work or that there was a moderate restriction; their stance on a smoking ban policy was also lenient. According to Kim's study [3], call centers may provide a favorable environment for female smokers. The study found that a few female call center employees began smoking to fit into their workplace culture. The perceptions and attitudes of female call center employees about smoking policy, and their association with smoking rates, should be studied further.
For female smokers, call centers may provide both a reason and a place to smoke, which can be considered an obstacle to quitting. Female call center employees go to designated smoking areas, completely hidden from the outside, to take a break or to socialize while they smoke [3]. Consequently, efforts to prevent and quit smoking should go beyond individual-level interventions, and workplace-level interventions to identify and manage social factors must be considered. The literature demonstrates that women face various barriers to smoking cessation [40] and that there are occupation-specific characteristics of smoking attitudes and behaviors. Therefore, specific strategies considering gender and occupation are needed to help women quit smoking in the future.
While this study revealed some novel findings and supported previous research results regarding female smoking, two limitations need to be acknowledged to interpret the results appropriately. First, this was a cross-sectional study; therefore, causality cannot be inferred. Second, smoking status was self-reported and, given the social taboo against female smoking in South Korea, may have been underestimated. For example, one study reported that the prevalence of smoking based on objective measures such as urine cotinine is approximately 5 to 6 times higher than that based on self-reported data [33].
Conclusions
Smoking-related factors were explored among female call center employees, suggesting the need to develop smoking cessation programs based on the needs of female smokers. As the workplace environment can be favorable for smoking, organizational-level smoking-cessation-supportive environments and interventions are required while addressing work-related smoking factors.
Thus, it is important to develop comprehensive smoking prevention programs considering these multilevel factors. The literature suggests that individual and work factors should also be considered to meet the individual healthcare needs of female smokers.
Supporting information
S1 File. The data file of 588 female call center employees. (XLSX)
"year": 2022,
"sha1": "9c3384ad2dec7259b32035e8691f8ad3f373a06c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7412ccd15b82653d3b736cdb4583f904322969b5",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Assessment of Sperm Quality - A Light Microscope Study
BACKGROUND
Semen analysis is an integral part of the work-up for infertility in men, with sperm morphology being an important qualitative parameter. Qualitative defects can affect any part of the sperm and are classified, based on morphology, as defects of the head, middle piece, and tail. The focus of the study was to assess qualitative defects in sperms by light microscopy, in semen with normal sperm counts.
METHODS
This is a hospital-based, descriptive, retrospective study. Of the semen samples received in the clinical laboratory, fifty with normal sperm counts were included in the study and processed according to standard protocol. For evaluation of qualitative defects by sperm morphology, smears were fixed in ethanol, stained with Papanicolaou stain [PAP], and assessed under a light microscope.
RESULTS
The 50 semen samples included in the study had sperm counts ranging from 15 to 80 million/ml. Thirty samples had less than 10% abnormal forms, fourteen samples had 11-20% abnormal forms, five samples had 21-30% abnormal forms, and one sample had 40% abnormal sperms. Qualitative defects were classified as morphological abnormalities of the head, neck, and tail. Of the fifty cases, most defects were found in the head, followed by those in the neck and tail. Common defects noted were double head (44%), abnormally sized heads, and bent neck (48%). Coiling was a common defect noted in the tail (10%). Most sperms showed a combination of defects.
B A C K G R O U N D
Infertility affects 15 % of couples globally, of which 20-30 % are due to defects in sperms. 1 Semen analysis is an important and routine investigation in the workup for infertility in men and is done using the World Health Organization (WHO) criteria for quantitative and qualitative examination. Qualitative examination of semen is a vital parameter and includes assessment of morphology by microscopic examination of sperms. Morphologic features of sperm are the result of a highly complex process of cellular modifications occurring during spermatogenesis. 2 A sperm consists of a head, neck, middle piece (midpiece), principal piece and endpiece. The endpiece is difficult to see with a light microscope, so sperms are considered to comprise head (and neck) and tail (midpiece and principal piece). For a spermatozoon to be considered normal, both its head and tail must be normal. All borderline forms should be considered abnormal. 3 Morphological defects can be found in any or all of these parts. Accordingly, sperm defects are categorised as those of the head, middle piece, and tail. 1 Evaluation of sperm morphology and quality is confusing and time-consuming. The difficulty in assessment is due to lack of objectivity, variation in interpretation or poor performance. WHO recommends a simple normal/abnormal classification, tallying the location of abnormalities in abnormal spermatozoa. Studies have shown that the percentage of normal spermatozoa and the mean number of abnormalities per spermatozoon correlate more closely with the fertilization rate than do sperm count and motility. Environmental and lifestyle factors such as smoking and alcohol use are known to affect sperm morphology and are associated with specific abnormalities. 2 Defects due to stress or medication are reversible, whereas those due to genetic causes are severe and render the sperm incapable of fertilization. 2 An increased percentage of spermatozoa with abnormal shapes is commonly associated with defective spermatogenesis. The fertilizing potential of abnormal spermatozoa decreases, depending on the types of anomalies. They may also have abnormal deoxyribonucleic acid (DNA). 3 Successful fertilization and early embryonic development in assisted reproductive techniques depend on the morphology of spermatozoa. Sperm morphology is better assessed by staining, using one of the many staining methods such as Papanicolaou (PAP), Haematoxylin-Eosin, Giemsa, and Diff-Quik stains. Papanicolaou stain is one of the preferred methods for evaluating sperm morphology and quality in routine laboratory practice. Different staining methods may cause some changes in the morphometric values of spermatozoa because fixatives can induce slight cell shrinkage. Some authors recommend the use of a combination of stains to overcome this limitation for morphometric measurements. 3,4 The process of slide smear preparation from the semen sample is time-consuming. Assessment of sperm morphology requires semen smear preparations to be of high quality. Small artefacts might also influence the appearance of the sperm. Technique-dependent sources of error can be minimized with standardized and controlled methods. Good-quality smears depend on the quantity of the stain used, the time allowed for the mixture to stand, and the preparation of the smear. Certain precautions, like the use of minimal force while making smears manually, help prevent broken tails. The slides should preferably be cleaned with 95 % or absolute alcohol before use. 5
O b j e c t i v e s
This study intends to assess qualitative defects in sperms by light microscopy, in semen with normal sperm counts, in men attending an infertility clinic.
M E T H O D S
This is a hospital-based, descriptive, retrospective study conducted in the Department of Pathology, Rajarajeswari Medical College and Hospital, Bangalore. A total of 80 semen samples were received over a period of six months from January 2018 to June 2018, of which 50 samples with normal sperm counts were included in the study and evaluated for qualitative defects in sperms. The study was approved by the ethics committee and informed consent was obtained. Details were recorded in a standard proforma and samples were processed according to protocol as follows: samples were collected in a clean capped plastic container and assessed when fresh; after liquefaction, volume, colour, appearance and viscosity were noted.
E x c l u s i o n C r i t e r i a
Semen samples diluted with urine and those with abnormal counts.
S t a t i s t i c a l A n a l y s i s
Detailed data were recorded in MS Excel and statistically analysed using IBM SPSS v20 software. Data were expressed as numbers, percentages, and tables.
Assessment of sperm morphology is an important parameter in the work-up of infertile men. Due to the increasing use of in vitro fertilization techniques, studies are being focused on the role of sperm morphology in fertilization. Sperm quality is important for successful fertilization and early embryonic development in assisted reproductive techniques. 4 It is considered to be one of the best discriminators of fertilization potential. 3
R E S U L T S
Studies have shown that some of these defects are irreversible, while those due to acquired/environmental factors can be reversible. 6 Lifestyle factors such as smoking and alcohol use are thought to affect the morphologic features of sperm. 2 Evaluation is done by staining spermatozoa from fresh semen and examination under a light microscope. The morphology of sperms can be assessed better when they are stained, and Papanicolaou stain gives good staining of spermatozoa. 3,4 Aksoy E et al. 4 used different staining methods to assess morphometric measurements and morphology of spermatozoa under light microscopy in 67 patients. They found changes in the morphometric values of spermatozoa because the fixatives induced slight cell shrinkage. They concluded that for morphological assessment of spermatozoa, Papanicolaou, Haematoxylin-Eosin (HE), toluidine blue and Shorr stains are the best dyes for staining quality. The Tygerberg Classification Criteria described by Kruger [1986] and the WHO classification are the most important morphological classifications of spermatozoa. Spermatozoa consist of a head, neck, middle piece (midpiece), principal piece and endpiece. With a light microscope, the cell can be considered to comprise a head (and neck) and tail (midpiece and principal piece). 3 For a spermatozoon to be considered normal, both its head and tail must be normal. All borderline forms should be considered abnormal. According to Kruger, in a normal sperm, the boundaries of the head should be smooth, regularly contoured and oval. There should be a well-defined acrosomal region covering 40-70 % of the head area. The acrosomal region should contain no large vacuoles. Sperm acrosome size and staining abnormality is one of the important criteria identified for sperm morphology evaluation based on sperm functionality. 6 The midpiece should be slender, long, regular and about the same length as, or 1.5 times the length of, the sperm head, with the axis of the midpiece aligned with the axis of the sperm head. The principal piece or tail should be thinner than the middle piece and should have a uniform calibre along its length. It should be flat, without wrinkles, and should not contain broken parts or cytoplasmic debris. It should be approximately 45 µm long (about 10 times the head length). 3,4,7 Most morphological abnormalities occur in combination. Multiple defects cause defective development of the embryo and are associated with increased chances of spontaneous abortion. Morphologically abnormal spermatozoa and semen leucocytes generate reactive oxygen species, which may damage sperm structure, leading to reduced motility and DNA damage. This may interfere with early embryo development, resulting in spontaneous abortions. 1,8 The fifth edition of the WHO manual proposes a very low cut-off value of 4 % for morphologically normal spermatozoa. On its own, this value may not provide a strong predictive value for a male's fertility potential. The same can, however, be obtained with a holistic strict approach to sperm morphology evaluation with additional morphology parameters. 6 Three indices can be derived from the detailed assessment of morphological abnormalities of the head, midpiece and principal piece: the multiple anomalies index (MAI), the teratozoospermia index (TZI), and the sperm deformity index (SDI). These indices have been correlated with fertility in vivo (MAI and TZI) and in vitro (SDI) in various studies and may be useful in the assessment of exposures or pathological conditions.
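For illustration, two of these indices can be computed directly from defect tallies. The sketch below follows the WHO-manual style definitions as described above (TZI: total defects, counting at most one per category per spermatozoon, divided by the number of abnormal spermatozoa; SDI: total defects divided by the total number of spermatozoa counted). The counts are hypothetical and the code is ours, not from the study:

```python
def teratozoospermia_index(defects: dict, n_abnormal: int) -> float:
    """TZI: total defects (at most one per category per sperm) / abnormal sperm."""
    return sum(defects.values()) / n_abnormal

def sperm_deformity_index(defects: dict, n_total: int) -> float:
    """SDI: total defects / total sperm counted (normal and abnormal)."""
    return sum(defects.values()) / n_total

# Hypothetical tallies from a 200-sperm count with 120 abnormal forms:
defects = {"head": 110, "midpiece": 60, "tail": 25, "excess_cytoplasm": 15}
print(teratozoospermia_index(defects, n_abnormal=120))  # 1.75 (above the 1.6 cut-off)
print(sperm_deformity_index(defects, n_total=200))      # 1.05
```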
A TZI of 1.6 or more is associated with a lower pregnancy rate, and an SDI value of 1.6 or more is the cut-off for failure of in vitro fertilization. These indices can only be derived using the manual method. 2,3,7,9,10 Menkveld R et al. have described defects such as tapering and megalo-heads as reversible, possibly induced by stress or medication, which revert on withdrawal of the precipitating factor. Others, such as head-neck attachment defects, misaligned neck, short tail syndrome or abnormal neck insertions, are genetically determined and carry a poor prognosis. 1,11 Defects of the head are the most common and include tapering, large, round, short, amorphous, and bifid forms. Defects such as amorphous head and globozoospermia are genetically determined. Goyal R et al. described tapering head as the most common defect, followed by middle piece defects. 1 In the present study, midpiece defects were more common.
Neck defects such as a bent neck are genetic and carry a poor prognosis. Cytoplasmic residue or excess is associated with sperm immaturity and the production of reactive oxygen species, implying ongoing stress. 7 Short tail syndrome is genetically determined and carries a very poor chance of future fertility. Coiled tails result in defective propulsion. Tail abnormalities are increased in smokers. Coiled tail was the least common abnormality detected in the study by Goyal et al. 1 In the present study too, tail abnormalities were least common, with coiled tails seen in 10 % of cases.
C O N C L U S I O N S
Quality of spermatozoa has a direct influence on fertilization and developmental competence of embryos. Cytomorphologic analysis of sperm quality by light microscopy is a useful initial screening test for the evaluation of sperm. It helps clinicians in making decisions for in vitro fertilization. Lifestyle modifications may be recommended as a measure to improve sperm morphology in patients with fertility problems.
Data sharing statement provided by the authors is available with the full text of this article at jemds.com. Financial or other competing interests: None. Disclosure forms provided by the authors are available with the full text of this article at jemds.com.
"year": 2021,
"sha1": "50d833396da0852530dd3d7e20710f6c2f9c297a",
"oa_license": null,
"oa_url": "https://doi.org/10.14260/jemds/2021/299",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a6bc27430f146c1ef09049757c2251f4cf7cc083",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
Potentials-Attract or Likes-Attract in Human Mate Choice in China
To explain how individuals’ self-perceived long-term mate value influences their mate preference and mate choice, two hypotheses have been presented, which are “potentials-attract” and “likes-attract”, respectively. The potentials-attract means that people choose mates matched with their sex-specific traits indicating reproductive potentials; and the likes-attract means that people choose mates matched with their own conditions. However, the debate about these two hypotheses still remains unsolved. In this paper, we tested these two hypotheses using a human’s actual mate choice data from a Chinese online dating system (called the Baihe website), where 27,183 users of Baihe website are included, in which there are 590 paired couples (1180 individuals) who met each other via the website. Our main results show that not only the relationship between individuals’ own attributes and their self-stated mate preference but also that between individuals’ own attributes and their actual mate choice are more consistent with the likes-attract hypothesis, i.e., people tend to choose mates who are similar to themselves in a variety of attributes.
Introduction
Two old Chinese adages, ''lang cai nv mao'' and ''men dang hu dui'', may represent the classic standards of long-term human mate choice in traditional Chinese culture. The adage ''lang cai nv mao'' means that women should choose talented men as long-term partners, and men should choose young and physically attractive women as long-term partners. The adage ''men dang hu dui'' means that a couple should have similar family backgrounds. Clearly, the adage ''lang cai nv mao'' matches the theoretical framework of evolutionary biology based on potential reproductive success, which is largely founded on Trivers' [1] theory of parental investment. The theory of parental investment predicts that relative parental investment should be a key factor in sexual selection and that the mating strategies of males and females should differ. Many studies have shown that (i) women exhibit a stronger preference than men for traits of ambition, social status, financial wealth and commitment in a partner, i.e. men's reproductive potential as good providers [2], [3], [4], [5], [6], [7], [8], [9]; and (ii) men exhibit a stronger preference than women for characteristics of youthfulness, health, and physical attractiveness in a partner, i.e. women's reproductive potential as fertile mates [2], [4], [5], [6], [7], [9], [10], [11]. The adage ''men dang hu dui'' may imply that similarity gives rise to attraction, as widely found in human society (assortative mating in a trait-by-trait way). For example, some studies have shown that people would like to choose mates who are similar to themselves in a variety of attributes, such as height, weight, religion, race, education, and income [12], [13], and that established couples tend to be similar to each other on many dimensions, such as age, race, religion, education, physical attractiveness and personality [14], [15], [16], [17], [18].
In addition to the reproductive potential of an individual's partner, the stability of the partnership (or duration of the relationship) may also influence the reproductive output of the partnership [19], [20], [21]. From this perspective, factors concerning the stability of the partnership should also be important in human mate choice. To establish a stable long-term partnership, individuals should adjust their mate preference according to their own relative quality, rather than simply choosing the most preferred partner available [19]. Thus, both sexes should look for the traits they desire in the other sex by offering the desirable traits that they themselves possess [22], [23], [24]. There is some evidence that selectivity of mate preference is conditional on self-perception in Western societies [2], [9], [25]. For example, Waynforth and Dunbar [9] showed that women offering cues of physical attractiveness (and men offering resources) make overall higher demands in lonely-hearts advertisements, and Bereczkei et al. [2] showed that females offering better physical condition required higher financial and occupational status in potential mates, while men having more resources made more demands regarding a potential partner's physical attractiveness.
Based on undergraduates' self-reports of mate preferences for various attributes and self-perceptions of their own levels on those attributes, Buston and Emlen [19] investigated two alternative hypotheses regarding the relativistic rule of human mate preference in Western society: (i) individuals relate self-perception on sex-specific indicators of reproductive potential to selectivity of mate preference for sex-specific indicators of reproductive potential in the opposite sex; or (ii) individuals relate self-perception on a certain trait to selectivity of mate preference for the same trait. Of these two relativistic rules, the first is called the ''potentials-attract'' hypothesis by Buston and Emlen [19], which means that individuals prefer partners with reproductive potential similar to their own. The second is called the ''likes-attract'' hypothesis, which means that individuals prefer partners with traits similar to their own. Obviously, the potentials-attract hypothesis emphasizes the difference between the strategies of the sexes (as the adage ''lang cai nv mao'' implies), and it is the mechanism implicitly assumed in previous evolutionary studies of conditional human mate choice [2], [9], [25]. On the other hand, the likes-attract hypothesis emphasizes the similarity of the strategies of the sexes [4], [5], as the adage ''men dang hu dui'' implies; it indicates assortative mating in a trait-by-trait way. Buston and Emlen [19] investigated 10 attributes and grouped them into four evolutionarily relevant categories (indicative of wealth and status, family commitment, physical appearance, and sexual fidelity). Their main results showed that in Western society, humans do not use a potentials-attract rule in their choice of long-term partners, but rather a likes-attract rule based on a preference for partners who are similar to themselves across a number of characteristics.
Todd et al. [24] argued that although Buston and Emlen [19] found that modern human mate choices do not reflect predictions of the potentials-attract hypothesis but instead follow the likes-attract hypothesis, verbally reported mate preferences may not correspond to actual mate choices [24]. Based on speed-dating data, Todd et al. [24] obtained a result similar to Buston and Emlen's [19] in a pre-event questionnaire, but they found that the self-reported mate preferences did not predict actual mate choices made during the speed-dating. Todd et al.'s [24] main results showed that in actual mate choices, men chose women based mainly on the women's physical attractiveness and not on their own attributes, whereas women chose men whose overall desirability as a mate matched the women's self-perceived physical attractiveness. This means that the pattern of actual mate choices can be predicted by the potentials-attract hypothesis. Kurzban and Weeden [26] also obtained a similar result in an analysis of participants' choices made in speed-dating events.
Buston and Emlen's [19] study was based on 978 undergraduates' mate preferences for various attributes and self-perceptions of their own levels on these attributes. In Todd et al.'s [24] study, only 46 participants were invited to take part in a research-oriented speed-dating event, and each couple had only five minutes to talk to each other. We also note that neither of these studies showed whether individuals' ''mate choices'' were reciprocated and eventually turned into long-term relationships. However, we think it more important to use data on humans' actual long-term mate choice to test both the potentials-attract and likes-attract hypotheses in order to understand the rules of modern human mate choice.
In this paper, following Buston and Emlen's [19] and Todd et al.'s [24] basic idea, we investigated how modern Chinese people choose long-term partners, i.e. which of the old Chinese adages ''lang cai nv mao'' and ''men dang hu dui'' works better as a rule of mate choice in China. Different from Buston and Emlen's [19] and Todd et al.'s [24] studies, our data were from an online dating system, one of the largest online dating websites in China. The website provides a commercial service for heterosexual people searching for long-term partnerships. Each user needs to create a personal account to share his/her personal information and mate preference. Users can visit other people's profiles freely and contact someone easily. Moreover, couples who successfully date or marry via the website are encouraged to report their stories online (called ''the successful dating stories''), so that everyone visiting the website can read them. Thus, the data from the Baihe website give us the possibility of testing both the potentials-attract and likes-attract hypotheses in humans' actual long-term mate choice, i.e. how individuals' own traits (or self-perceptions) are translated into their stated mate preference [19] and into actual mate choice, as well as whether individuals' stated mate preference matches their actual mate choice [24]. Furthermore, these data will also show whether there are differences between China and Western societies in human mate choice.
Data and Methods
This study was approved by the Animal and Medical Ethics Committee of the Institute of Zoology, Chinese Academy of Sciences. All registered users of the Baihe website agreed to the terms of use, in which the website specified its right to analyze registered users' information and to display the results in media or research publications. Anonymous ID numbers distinguished every user in the data provided by the Baihe website, and neither the names nor any contact information of the users were provided to us, so as to protect the privacy of the users. An anonymized data set for this research is freely available upon request from the authors.
The Baihe website had about 27.5 million (27,432,239) users at the end of 2010, coming from all 34 provinces of China, though most were from big cities and developed areas such as Beijing, Shanghai, Guangzhou, Shenzhen, Suzhou and Dongguan (see the location distribution of users plotted in Figure S1 in the Supporting Information (SI)). Users' average age was 29.1 ± 6.2 years (n = 10,984,161) for women and 28.9 ± 6.5 years (n = 16,448,078) for men (see the age distribution plotted in Figure S2 in SI). Tables S1-S2 in SI indicate the profile characteristics that users of the website could specify about themselves and their ideal partners. All the information on personal items and mate preference was filled in when these users registered and created their personal accounts on the Baihe website.
In this paper, only users with complete information were considered (for details, see SI). Our data included 27,183 users aged 19 to 60 years (women: n = 13,677, mean age = 30.40 years, SD = 7.03; men: n = 13,506, mean age = 30.72 years, SD = 6.81); among these users, there were 590 paired couples who had established long-term partnerships through the online dating system (women: mean age = 28.90 years, SD = 4.20; men: mean age = 31.59 years, SD = 4.82). The demographic data are shown in Tables S3-S4 in SI.
The attributes regarding users' personal information and self-perceptions were age, height, income (monthly), education level, self-rated physical attractiveness, and desire for children; the attributes regarding users' stated mate preference were age, height, income (monthly) and education level. Monthly income level was rated on a 7-point scale. Education level was rated on a 4-point scale: 1 for high school or below; 2 for bachelor; 3 for master; and 4 for doctorate. Desire for children was rated on a 3-point scale: 1 = do not want children; 2 = not sure; and 3 = want children. Users' physical attractiveness was rated by themselves on a 10-point scale: 1 = extremely unattractive and 10 = extremely attractive.
For physical attractiveness, some studies (e.g. Ref. [27]) have shown that self-reported (or self-rated) physical attractiveness is not a valid measure of actual physical attractiveness, since the correlation between the two is small (the correlation coefficient is about 0.25). In our data, we also found that the mean value of self-rated physical attractiveness was about 7 (SD ranged from 1.70 to 1.86; see Tables S3-S4 in SI). Although individuals' self-rated physical attractiveness may not be a good measure of their actual physical attractiveness (since it represents only individuals' self-estimation of their own physical attractiveness), it has been found that individuals' self-perception might influence their mate preference as well as actual mate choice [19], [24]. Therefore, we here consider only how individuals' stated mate preference and their actual mate choice are influenced by their self-rated physical attractiveness.
According to Buston and Emlen [19], the six attributes (i.e. age, height, income (monthly), education level, self-rated physical attractiveness and desire for children) could also be grouped into three evolutionarily relevant categories: physical appearance (height and self-rated physical attractiveness), wealth and social status (income and education), and family commitment (desire for children). However, because of the low internal consistencies between the attributes of health and physical attractiveness, Todd et al. [24] analyzed them separately instead of aggregating them into a physical-appearance domain. We also noticed that the internal consistencies of the composites were not reported by Buston and Emlen [19]. Similarly, in our analysis, since the correlation coefficients between the two attributes in the physical appearance or wealth and social status domains were low (see Table 1; Pearson's r ranged from 0.0480 to 0.2977, P < 0.0001 [28][29]), we analyzed each attribute separately.
Results
As pointed out in the introduction, the main goal of this study is to assess which of the potentials-attract and likes-attract hypotheses works better in humans' actual long-term mate choice. For both people's stated mate preference and their actual mate choice, if the potentials-attract hypothesis works, women's attributes in physical appearance should correlate positively with men's attributes in wealth and status and in family commitment; alternatively, if the likes-attract hypothesis works, both women's and men's attributes should be significantly positively correlated with the same attributes in their stated mate preference and with their partners' same attributes in actual mate choice. If both the potentials-attract and likes-attract hypotheses are supported, the coefficients of determination of the different regressions (partial R²) are compared to determine which hypothesis is better supported. Our main results are shown below.
Individuals' Own Attributes and their Stated Mate Preference
To assess how individuals' own attributes (or self-perceptions) are translated into their stated mate preference, we calculated a series of multivariate linear regressions (MLR) in which each of the attributes in individuals' stated mate preference was regressed on all of their own attributes, for women and men separately. Following Buston and Emlen's [19] data-analysis strategy (see also Ref. [24]), we also calculated a series of univariate linear regressions for women and men separately, in which each of the attributes in individuals' stated mate preference was individually regressed on each of their own attributes (see SI).
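As an illustration of this regression setup, the sketch below (ours, with hypothetical file and column names) fits one such model in Python and derives the partial R² of a single predictor by comparing the full model against a reduced model without that predictor:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("users.csv")  # hypothetical file of the 27,183 profiles

full = smf.ols(
    "pref_age ~ age + height + attractiveness + income + education + want_children",
    data=df,
).fit()
reduced = smf.ols(
    "pref_age ~ height + attractiveness + income + education + want_children",
    data=df,
).fit()

# Partial R^2 of own age: proportional reduction in residual sum of squares.
partial_r2 = (reduced.ssr - full.ssr) / reduced.ssr
print(f"partial R^2 of own age on stated age preference: {partial_r2:.4f}")
```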
For the MLR analysis of our data including 27,183 users, in which the stated mate preference for each of the four attributes (age, height, income and education) was regressed on users' own six attributes (age, height, self-rated physical attractiveness, income, education, and desire for children), the results are shown in Table 2 and Figure 1a for women and in Table 3 and Figure 1b for men. For the regressions of women's stated mate preference on their own attributes (see Table 2 and Figure 1a), there were 19 significant relationships out of 24, showing preliminary support for both hypotheses, in which the highest b-values (or coefficients of determination, i.e. partial R²-values) were consistently those between the same attributes in personal items and mate preference (with P < 0.0001), i.e. (i) 58.21% of the variation in women's stated age preference could be explained by their own age; (ii) 15.25% of the variation in women's stated height preference could be explained by their own height, whereas 2.6% of the variation in women's stated height preference could be explained by their own age (with negative b-value, P < 0.0001); (iii) 6.98% of the variation in women's stated income preference could be explained by their own income; and (iv) 14.16% of the variation in women's stated education preference could be explained by their own education. However, on average only 0.41% of the variation in women's stated income preference could be explained by their own height and their own self-rated physical attractiveness (with P < 0.0001), i.e. the potentials-attract hypothesis could be only partially supported. There were also some statistical results that could not be predicted by either the potentials-attract or the likes-attract hypothesis (e.g. the positive correlation between women's own income and their height preference, and the negative correlation between women's own education level and their age preference), but these effects are rather small; therefore, only a few of the variations in the correlations between individuals' own attributes and their stated mate preference can be explained by these effects. Similarly, for the regressions of men's stated mate preference on their own attributes (see Table 3 and Figure 1b), there were 21 significant relationships out of 24. Here, we also found that all the same-attribute pairs had the highest b-values (i.e. age vs. age, height vs. height, income vs. income and education vs. education, with P < 0.0001), i.e. (i) 52.29% of the variation in men's stated age preference could be explained by their own age, whereas on average only 0.45% of the variation in men's stated age preference could be explained by their own self-rated physical attractiveness, income, education and desire for children (with negative b-value, P < 0.0001); (ii) 13.12% of the variation in men's stated height preference could be explained by their own height; (iii) 0.62% of the variation in men's stated income preference could be explained by their own income, whereas on average 0.22% of the variation in men's stated income preference could be explained by their own age, height and education (with P < 0.0001); and (iv) 9.56% of the variation in men's stated education preference could be explained by their own education, whereas on average only 0.15% of the variation in men's stated education preference could be explained by their own height, income and desire for children (with P < 0.0001).
For both women and men, similar results were also obtained using univariate linear regression analyses conducted separately for each sex (see Table S5 for women and Table S6 for men in SI).
The main results in this subsection should be considered more consistent with the likes-attract hypothesis, similar to the results of Buston and Emlen [19] and to Todd et al.'s [24] finding in their pre-event questionnaires.
Individuals' Own Attributes and their Actual Mate Choice
As pointed out by Todd et al. [24], for both the potentials-attract and likes-attract hypotheses, a more challenging question is how individuals' own attributes affect their actual mate choice. Using the data of the 590 couples who had established long-term partnerships via the website, a series of MLR was calculated, in which each of the attributes in users' actual mate choice was regressed on all of their own attributes. The results are shown in Tables 4 and 5 and Figures 2a and 2b, representing partial R²-values for women and men, respectively. For the regressions of women's actual mate choice on their own attributes (see Table 4 and Figure 2a), there were 12 significant relationships (out of 36), in which the highest b-values (or partial R²-values) were consistently those between the same attributes, i.e. (i) women's own age could explain 34.40% of the variation in their partners' age (with P < 0.0001), and it could also explain 1.42% of the variation in their partners' height (with negative b-value, P = 0.0032) and 1.52% of the variation in their partners' desire for children (with negative b-value, P = 0.0024); (ii) women's own height could explain 3.06% of the variation in their partners' height (with P < 0.0001), and it could also explain 1.02% of the variation in their partners' self-rated physical attractiveness (with negative b-value, P = 0.0130); (iii) women's own self-rated physical attractiveness could explain 1.26% of the variation in their partners' self-rated physical attractiveness (with P = 0.0060), and it could also explain 0.85% of the variation in their partners' age (with P = 0.0043) and 0.61% of the variation in their partners' income (with P = 0.0465); (iv) women's own income could explain 8.04% of the variation in their partners' income (with P < 0.0001), and it could also explain 0.77% of the variation in their partners' self-rated physical attractiveness (with P = 0.0308); (v) women's own education could explain 5.23% of the variation in their partners' education (with P < 0.0001); and (vi) women's own desire for children could explain 1.85% of the variation in their partners' desire for children (with P = 0.0008). Similar to the analysis in the previous subsection, we also noticed that women's own income could partially explain the variation in their partners' self-rated physical attractiveness (0.77%). Clearly, this result was not predicted by either the potentials-attract or the likes-attract hypothesis; however, its effect was also quite small.
For the regressions of men's actual mate choice on their own attributes (see Table 5 and Figure 2b), there were only 8 significant relationships out of 36. The pattern was also more consistent with the likes-attract hypothesis, since the highest b-values (or partial R²-values) were also those between the same attributes, i.e. (i) men's own age could explain 34.39% of the variation in their partners' age (with P < 0.0001), and it could also explain 1.32% of the variation in their partners' income (with P = 0.0033); (ii) men's own height could explain 3.27% of the variation in their partners' height (with P < 0.0001); (iii) men's own self-rated physical attractiveness could explain 0.90% of the variation in their partners' self-rated physical attractiveness (with P = 0.0200), and it could also explain 0.74% of the variation in their partners' height (with negative b-value, P = 0.0340); (iv) men's own income could explain 6.74% of the variation in their partners' income (with P < 0.0001); (v) men's own education could explain 5.42% of the variation in their partners' education (with P < 0.0001); and (vi) men's own desire for children could explain 1.72% of the variation in their partners' desire for children (with P = 0.0014). We also calculated a series of univariate linear regressions for women and men separately (see Refs. [19], [24]), in which each of users' own attributes was individually regressed on each of their partners' attributes. The results were similar to those of the MLR analysis (see Table S7 in SI).
Basically, the main results in this subsection are also more consistent with the likes-attract hypothesis. This means that, for the relationship between individuals' own attributes and their actual mate choice, people tend to choose mates who are similar to themselves in a variety of attributes.
Individuals' Stated Mate Preference and their Actual Mate Choice
To assess whether the actual mate choices of both women and men were consistent with their stated mate preferences, again using the data of the 590 paired couples and following Todd et al. [24], a series of zero-order Pearson correlation analyses between individuals' stated mate preference and their actual mate choice on each of the same-attribute pairs (i.e. age vs. age, height vs. height, education vs. education, and income vs. income) was calculated for women and men separately. The results are shown in Table 6, in which the correlation coefficients range from 0.122 to 0.440 (with P-values ranging from 0.0029 to less than 0.0001). Different from Todd et al.'s [24] result, our analysis showed that for both women and men, actual mate choice matches stated mate preference.
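This correlation step is straightforward to reproduce; a minimal sketch (ours, with hypothetical column names for the paired-couple data) follows:

```python
import pandas as pd
from scipy.stats import pearsonr

couples = pd.read_csv("couples.csv")  # hypothetical file of the 590 pairs

# Correlate each stated preference with the same attribute of the partner
# actually chosen, one same-attribute pair at a time.
for attr in ["age", "height", "income", "education"]:
    r, p = pearsonr(couples[f"pref_{attr}"], couples[f"partner_{attr}"])
    print(f"{attr}: r = {r:.3f}, p = {p:.4f}")
```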
Discussion
In this study, using data from a Chinese online dating website, we investigated patterns of human mate choice in China. Our main goal was to show whether modern Chinese mate choice can be predicted by the likes-attract hypothesis or the potentials-attract hypothesis. Our study differs from previous studies [19], [24] in that it was based on data on humans' actual mate choice. Following the basic idea of the previous studies [19], [24], we analyzed the relationships between individuals' own attributes and their stated mate preferences, between individuals' own attributes and their actual mate choices, and between individuals' stated mate preferences and their actual choices. Basically, our main results support the likes-attract hypothesis more than the potentials-attract one, i.e. people tend to choose mates who are similar to themselves in a variety of attributes.
Our study provides an example of testing the likes-attract and potentials-attract hypotheses in an Eastern society. Our main results imply that the likes-attract rule works better than the potentials-attract rule in human mate preference for long-term mating in both Western and Eastern societies. However, our results about actual mate choice contradict Todd et al.'s [24] results based on speed-dating: our analysis was more consistent with the likes-attract hypothesis in humans' actual mate choice. Using the data of the paired couples, we show not only how individuals' own attributes are translated into their stated mate preference, but also that individuals' actual mate choices could be predicted by the likes-attract hypothesis better than by the potentials-attract hypothesis.
For actual mate choice, the difference between our result and Todd et al.'s [24] result may arise for two possible reasons. Firstly, in Todd et al.'s [24] study, men and women had only five minutes to talk to each other before they made the decision.
Obviously, for long-term mate choice, five minutes is not likely to be enough; the participants may therefore have used the potentials-attract rule as a short-term mating strategy in a speed-dating event. Secondly, in Todd et al.'s [24] study, men's and women's mate choices were independent of each other, i.e. when a man (or woman) chose a woman (or man), he (or she) did not need to consider whether this woman (or man) also liked him (or her). This may also have led the participants to use the potentials-attract rule in Todd et al.'s [24] study. In addition, cultural difference may be an important reason, since some authors have shown that people in a collectivist society tend to maintain longer relationships than individualists do [30].
In our analysis, only the data of couples who reported their successful stories online were included (since we could not access the data of couples who had not reported their stories). This means that our results may miss unhappy couples whose choices might or might not be in line with the likes-attract rule. Hence, it might be important to compare successful couples with unsuccessful couples in a future study.
Although the likes-attract mechanism mainly determined human mate choice in modern China, we also found that the potentials-attract hypothesis still partially worked. For example, women's stated income preference was partially influenced by their own height and self-rated physical attractiveness, i.e. if a woman is tall or thinks her own physical attractiveness is good, she may show a stronger preference for a man with a high income level; and men's age preference could be partially explained by their own income, education and desire for children, i.e. a man with a higher income level, good education background and strong desire for children may prefer younger women. For the relationship between individuals' own attributes and their actual mate choice, we also noticed that the effect of women's self-perception of their own physical attractiveness on their partners' income was almost equal to the effect of their own self-rated physical attractiveness on their partners' self-rated physical attractiveness. These results seem to match the potentials-attract hypothesis. In addition, we found that women's income also positively correlated with men's self-rated physical attractiveness, i.e. women also use their income to obtain more attractive men; this phenomenon was also found by Buston and Emlen [19] in their study. Similarly, the correlation between women's education level and their age preference was negative, which may imply that women with a better education background would also like to find a younger mate, just as men do.
In the section on data and methods, we pointed out that in our study both men's and women's physical attractiveness was measured by their self-ratings. Individuals' self-rated physical attractiveness stands for their self-perception of their own physical attractiveness, and it may influence their mate preference and mate choice. However, self-rated physical attractiveness has been considered a self-concept (even self-esteem) rather than a valid measure of actual physical attractiveness [27]. Recently, Weeden and Sabini [43] examined whether self-ratings of attractiveness are significantly related to third-party ratings. They found that standard objective measures could predict about 25% of the variation in self-ratings of physical attractiveness [43], i.e. self-ratings of physical attractiveness should be positively related to objective measures of physical attractiveness. Therefore, for physical attractiveness, although our results only show the effects of individuals' self-perception on their mate preference and their actual mate choice, they still partially reflect the effect of objective physical attractiveness. In this study, two previous studies [19], [24] were compared on the issue of how individuals' mate preference and actual mate choice are influenced by their self-perceptions. For human long-term mate choice in modern China, our main results are more consistent with the likes-attract hypothesis than the potentials-attract hypothesis. However, we also noticed that the potentials-attract hypothesis still partially works, i.e. a few of the variations in mate preference and actual mate choice could be explained by the potentials-attract hypothesis. Our research highlights the importance of studying humans' actual mate choice under different cultural backgrounds.
Supporting Information
Table S1: The profile characteristics that users could specify about themselves. (DOC) | 2017-07-03T20:17:03.494Z | 2013-04-02T00:00:00.000 | {
"year": 2013,
"sha1": "9b4e1e8dfbe86b9c2e54ce7b69bd755f3023b898",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0059457&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b4e1e8dfbe86b9c2e54ce7b69bd755f3023b898",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
80874101 | pes2o/s2orc | v3-fos-license | Factors Associated with Some Bacteria Causing Diarrheal Disease among Children Under 5 Years Old
Accepted 17 Oct. 2017. Diarrhea has been a common cause of morbidity and mortality in children under 5 years old. This study was intended to assess the level of personal hygiene, the type of water consumed by children, the crowding index, and some factors associated with bacterial infection in children under 5 years. The study enrolled 143 children under 5 years with clinical evidence of diarrheal disease over the period extending from 15/4/2016 to 30/8/2016, who were admitted to Baghdad Teaching Hospital. Stool samples collected from children who had diarrhea were inoculated on selective culture media using standard methods. The isolates were identified based on the morphological features of colonies and, from all media, biochemically using the API 20E system. Overall, bacterial infection was observed in 13.9% of cases. The specific prevalence by species was as follows: E. coli (7.7%), Shigella spp. (2%), Salmonella spp. (3.5%), and V. cholerae (0.7%). Findings from our study indicate that patients in the oldest age group (approaching 5 years of age) were more likely to have diarrhea than younger children, and that children who consumed tap water were more often infected with bacteria (9.7%). In this study, the crowding index was associated with diarrheal disease: children from households with 1 or 2 people per room were far less likely to have diarrhea (1.4%) compared to children from households with more than 3 people per room (30%). Our results indicate that the availability of household sanitation facilities, access to filtered and clean water, good personal hygiene, and better nutrition were all associated with a lower incidence of diarrhea.
Introduction
Diarrhea is defined as the passage of three or more loose or liquid stools per day, or more frequently than is normal for the individual [1]. A variety of bacteria, viruses, and parasites cause diarrhea. Infection spreads through contaminated food or drinking water, or from person to person as a result of poor hygiene. Diarrhea is both a preventable and a treatable disease. Fluid loss in diarrhea can have fatal outcomes, and diarrhea is a leading cause of malnutrition [2] [3]. Diarrhea is the second leading cause of child morbidity and mortality, especially in developing countries. It is estimated that there are 2.5 billion episodes and 1.5 million deaths annually in children under five years of age. This accounts for 21% of all deaths in developing countries, and the number has remained unacceptably high [4]. Diarrhea kills more young children than Acquired Immunodeficiency Syndrome (AIDS), malaria, and measles combined. It also exposes children to secondary infection. Diarrhea is a major public health problem in Iraq, as evident from its increasing incidence and fatality [5]. Unlike other diseases, diarrhea is generally not considered an illness, and thus most diarrhea cases are either not managed at all or managed at home through traditional approaches [6]. About one half of children under five years are not taken to any healthcare center, and about one-third of children with diarrhea do not receive any treatment at all [7]. Although diarrhea is not lethal in itself, mothers' improper knowledge and misdirected approach towards its management lead to a high degree of mismanagement and resultant severe dehydration [8]. Although diarrhea kills about four million people in developing countries each year, it remains a problem in developed countries as well. In the United States, each child will have had 7-15 episodes of diarrhea by the age of 5 years, 9% of all hospitalizations of children less than 5 years old are associated with diarrhea, and 300-500 children die each year from this potentially preventable condition [9]. Twenty-four years ago, oral rehydration therapy was introduced as part of the World Health Organization's efforts to decrease diarrhea morbidity and mortality, and Diarrheal Disease Control Programs have been established in more than 100 countries worldwide [9]. Appropriate healthcare-seeking behavior could prevent a significant number of child deaths and complications due to ill health [10]. Improving mothers' care-seeking behavior could also contribute to reducing a large share of child morbidity and mortality in developing countries. Between 1990 and 2000, diarrhea-related deaths declined by half, thereby achieving the World Summit Goal. While cause-specific mortality is difficult to measure, it is estimated that more than one million child deaths per year have been prevented. Among the causative agents, the following bacteria have been reported: enterotoxigenic Escherichia coli (ETEC), Shigella, Salmonella, and Campylobacter [11]. Among the viruses, rotavirus seems to be the most common [12]. In developing countries, diarrheal infections in children under 5 years are generally associated with rotavirus, often at the time of weaning [13]. The infectious agents associated with diarrheal disease are transmitted chiefly through the faecal-oral route [14]. Food contamination is one major route for the transmission of enteropathogens, especially under the hygienic conditions prevailing in rural settings. Various studies have reported that the source of enteropathogens was either water or food [15]. For most people in developing countries, the major source of food is cereals, and dairy products are limited to a very small segment of affluent groups. Presumably, the reports of food as the origin of diarrhea refer to cereal-based diets, since all the cases cited came from developing countries [15].
Study population:
During the peak diarrheal season, from 15/4/2016 to 30/8/2016, stool samples were collected from 143 children under 5 years of age who were admitted to Baghdad Teaching Hospital with clinical evidence of diarrheal disease diagnosed by physicians. A questionnaire for each patient recorded the following information: age, family size, number of rooms occupied in the house, mothers' education and knowledge about some practices regarding diarrheal diseases, and the source of drinking water consumed by the patient.
Stool Samples:
"Fresh stool sample were collected from 143 diarrheal patients and transferred to the microbiology laboratory on ice pack, processed within 4 hours of collection for culturing according to standard method [16].All specimens were inoculated on maconkey agar, Salmonella-Shigella agar, and Thiosulfate-Citrate bile Salts Sucrose medium."Colonies of V. cholera were streaked on gelatin agar incubated at 37c to determine the production of gelatinase and then inoculated into kligler iron agar and motility indol urea agar media.
After overnight incubation at 37 °C, the MacConkey agar and Salmonella-Shigella agar plates were checked for non-lactose-fermenting colonies. Suspected enteric pathogens from all media were identified biochemically using standard bacteriological methods and the API 20E system (bioMérieux, Marcy-l'Étoile, France). In addition, lactose-fermenting colonies and any non-lactose-fermenting colonies typical of E. coli were selected from MacConkey agar plates and identified. The study enrolled 143 children under 5 years old who were admitted to Baghdad Teaching Hospital over the period extending from 15/4/2016 to 30/8/2016, with clinical evidence of diarrheal disease diagnosed by a physician. From the total of 143 stool samples, 20 bacterial isolates were found, as shown in Table 1, a rate of 13.9%; E. coli was the most frequent (7.7%), followed by Salmonella spp. (3.5%). The positive rate of bacterial isolation tended to increase through the age groups (1-2) and (3-4) years, as shown in Table 1, and reached its peak in the oldest age group (<5 years). It can be seen from Table 2 that the highest infection rate was found among children living in crowded households (7.7%). Likewise, the percentage of bacterial isolates from patients who consumed tap water as their source of drinking water was higher (9.8%) in comparison with patients who consumed filtered water (4.1%), as shown in Table 3. From Table 4, most mothers (71.3%) washed their hands after coming from the latrine, while a few (28.7%) did not; however, most mothers did not wash their hands after changing the baby's diapers, did not wash the breast before breastfeeding, and did not wash their hands before cooking (78.3%, 55.3%, and 72.2%, respectively). Diarrheal disease is a major public health problem for children in developing countries. This study, which covered the diarrheal season of 2016, set out to determine the bacterial infections and some factors associated with diarrheal disease in hospitalized children under 5 years of age. We detected enteropathogens in 13.9% of patients with diarrhea. E. coli strains were the most common group of enteropathogen isolates (7.7%). The relative prevalence of these diarrheagenic E. coli categories was similar to that observed among malnourished children [17] and children with acute diarrhea in north Jordan [18], and their presence in children with diarrhea in other developing countries has been documented [19] [20]. The findings of this study also confirm the importance of Salmonella spp. as major causes of diarrhea. We found Salmonella spp. in nearly 3.5% of the patients studied; Salmonella are among the most important etiological agents of diarrheal infection in the world. Other studies have found that multi-antibiotic resistance in Salmonella spp. is associated with enhanced virulence and excess mortality compared with infection with sensitive strains [21] [22]. We also found shigellosis in 2% of children under 5 years. Other studies found an incidence of shigellosis across all ages of 3.7% and 3.2%, respectively [23] [24], while others reported 9-12.6% [25] [26]. Shigella spp. are highly fastidious organisms that die rapidly in an unsuitable environment, including the unavoidable temperature fluctuations encountered during transport. In contrast to many other enteric infections, shigellosis is clearly not confined to childhood. On the contrary, not only did the incidence of shigellosis increase steadily after age 40 years, but the bacterial load of shigellosis patients increased
after age 40 years as well, suggesting that older people, like very young children, shed the highest bacterial loads and may contribute disproportionately to transmission, being responsible for a larger proportion of the diarrhea burden than was previously inferred from culture results or clinical diagnosis. The source of drinking water is very important for human health. In Iraq, most families use tap water for drinking; in this study, 51.7% of the respondents' families used it, compared with 48.3% who used sterilized filtered water as their source of drinking water. According to a United Nations report in 2007, the shortage of safe drinking water in Iraq can lead to increased cases of diarrhea [27] [28]. Findings from our study indicate that patients approaching 5 years of age were more likely to have diarrhea than younger children. This is similar to previous reports from other studies [29] [30]. Protection against diarrhea in the youngest age group may be conferred by several mechanisms, such as maternal antibodies against enteric pathogens and current breastfeeding. It is possible that after the first year of life, with the introduction of supplementary foods and changing nutritional habits, this protection is lost; the high prevalence of persistent diarrhea among the young infants in our study may be related to early exposure to a heavy load of microorganisms and immaturity of the gut immune system in early infancy. In this study, overcrowding was associated with a history of diarrhea; other studies have reported similar findings [31]. Children from households with 1 or 2 people per room were less likely to have diarrhea compared with children from households with more than 3 people per room. This may be due to the fact that overcrowded families tend to be poorer on the wealth index, which impacts the level of hygiene. The hands are central to many of our daily activities, and the use of contaminated hands for cooking and eating enhances the transmission of contaminating germs into the body through food, thereby causing ill health. Mothers serve the dual role of the children's nurse, handling their faeces and wiping their noses, and the household cook, preparing the family's meals and feeding the children. This, coupled with poor knowledge and practice of simple hygiene, increases the risk of spreading disease to under-five children. The study respondents demonstrated good hand-washing hygiene after coming from the latrine and after handling raw meat/poultry; this finding supports several other studies [32]. The generally high prevalence of disease may be related to the fact that most mothers did not wash their hands after changing the baby's diapers or before cooking, and did not wash the breast before breastfeeding. This result agrees with another study [33].
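As an illustrative check (not part of the original analysis), the crowding-index association can be tested with a chi-square test. The 2x2 counts below are hypothetical placeholders chosen only to roughly match the reported percentages (about 1.4% infected at 1-2 people per room versus about 30% at more than 3 people per room, among 143 children); the actual cell counts are in the paper's Table 2:

from scipy.stats import chi2_contingency

#                  infected, not infected   (illustrative counts only)
table = [[1, 70],    # 1-2 people per room  -> about 1.4% infected
         [21, 51]]   # >3 people per room   -> about 29% infected

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # small p indicates association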
Conclusions
This study highlights the importance of personal hygiene and hand washing, which reduce the occurrence of diarrheal disease by decontaminating the hands and preventing cross-transmission of infection; these findings may have implications for the control of faeco-orally transmitted communicable diseases.
Table 1: Distribution of children according to age group and percentage of bacterial isolates.
Table 2: Distribution of children according to the crowding index.
Table 3: Distribution of participants' parents according to the source of drinking water consumed. | 2019-03-18T13:58:54.900Z | 2018-07-03T00:00:00.000 | {
"year": 2018,
"sha1": "1a0fa3ccd58b05dbdedfa5068717b5b3668d7059",
"oa_license": "CCBYNC",
"oa_url": "http://mjs.uomustansiriyah.edu.iq/ojs1/index.php/MJS/article/download/42/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1a0fa3ccd58b05dbdedfa5068717b5b3668d7059",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226063312 | pes2o/s2orc | v3-fos-license | The Effect of Brand Associations and Brand Awareness on the Decision to Buy a Sim Card
In an era of rapid technological development, the business world must compete in this age of globalization. Companies are expected to keep abreast of technological developments and to produce goods that keep up with the times and are full of innovation. Considering that a cellphone is a necessity that cannot be separated from daily life, competition between cellular operators to attract the attention of consumers, so that they use and remain loyal to their products, is increasingly fierce. The rationale of this research, with its long-term goal of creating a close and healthy relationship with consumers through brand equity, is that in creating loyal customers the task of marketers should not be focused only on practicing the marketing mix as the primary strategy. The short-term goal is to analyze the influence of brand awareness and brand associations. With respect to SIM cards, the growing choice of goods and services makes brand switching more likely and customers less loyal, so these factors need attention. The results show that the brand associations variable has a significant partial effect, with a probability significance value of 0.000 < 0.05 and a calculated t value of 8.871 > the t-table value of 0.1996. The t-test for the brand awareness variable yielded a probability significance value of 0.223 > 0.05, with a calculated t value of -1.227 < the t-table value of 0.1996.
INTRODUCTION
Business competition in this era of rapid technological development is no longer a war over product quality but a war between brands. Product quality has become a standard that can be easily and quickly copied and owned by anyone, while the one attribute that is difficult to replicate is a strong brand, which can provide guidance, guarantees, confidence, and expectations that create customer satisfaction.
One of the most appropriate competitive strategies is to increase brand equity in order to maintain customer loyalty. Brand equity is significant for marketers, and the level of brand loyalty from customers is its primary support. In reality, many brands are treated as mere identities that distinguish them from competitors. Companies therefore need to sharpen their paradigm: not only trying to achieve customer satisfaction, but aiming for customer loyalty (Bhote, as cited in Dicho et al., 2016).
Companies will be more easily recognized if they choose the right brand name, making it easier for customers to distinguish their products and to make repeat purchases. One way to win the competition is the war between brands, because marketing does not only market the product itself: a memorable brand will make consumers' perceptions of the product positive.
Marketers build brand equity by creating the right brand knowledge structure for the right consumer. This process relies on all brand-related contacts, whether made by marketers or not. Brand equity has several dimensions, according to Aaker, as cited in Dicho et al. (2016), consisting of brand awareness and brand associations.
According to Kotler and Keller (2009: 240), purchasing decisions are consumers' decisions regarding preferences for brands within a set of choices. According to Astuti and Cahyadi (2007), if a customer is not interested in a brand and buys because of the characteristics of the product, its price, or convenience, with little concern for the brand, then brand equity is probably low. Conversely, if customers tend to buy a brand even when faced with competitors offering superior products, for example in terms of price and practicality, then the brand has a high equity value (Astuti and Cahyadi, 2007).
This condition makes competition more intense, which makes consumers more selective in their purchasing decisions and even quick to move to other brands. Influencing consumer purchasing decisions is closely related to brand equity: if the brand equity of a product is stable and the product is preferred, consumers will tend to buy it repeatedly. For this reason, the researchers are interested in studying the effect of brand awareness and brand associations on purchasing decisions for SIM cards.
METHOD
The respondents examined are active regular students of the 2015 and 2016 cohorts of the Faculty of Economics at UMN Al Washliyah Medan. Primary data for this study were obtained through a survey carried out by going directly into the field and distributing questionnaires to respondents. Primary data are data collected by an organization or individual directly from the object of study (Santoso and Tjiptono, 2001). In addition, secondary data were obtained through literature studies of books and journals relating to the issues, economic magazines, and other information retrievable on-line (internet).
The population in this study comprised all regular students of the Faculty of Economics, University of Muslim Nusantara Al Washliyah Medan, classes of 2015 and 2016, amounting to 244 people (according to data from the Student Affairs Department of the Faculty of Economics UMN AW). The students who are the object of the research are specifically regular S1 students of the 2015 and 2016 cohorts. As for the sample, according to Arikunto (1998), the sample is a part of the population (a portion or representative of the population under study). The sample of this study is a portion of the population taken as a source of data that can represent the entire population. For sampling, according to Hair et al., as cited in Dicho et al. (2016), the number of samples is obtained from a ratio of 20:1, where each variable requires 20 respondents. This study uses five variables, so the number of respondents needed is 100. The sampling method is non-probability sampling, that is, a technique that does not provide an equal opportunity for each element or member of the population to be selected as a sample (Sugiyono 2010). The prerequisite for inclusion in this study is being a UMN Faculty of Economics student who uses cellular or SIM card products.
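As a quick arithmetic check of the 20:1 rule of thumb just described (an illustration only, not part of the original text), the required sample size follows directly from the number of variables:

# Hair et al. rule of thumb as applied in this study: 20 respondents per variable.
num_variables = 5
respondents_per_variable = 20
sample_size = num_variables * respondents_per_variable
print(sample_size)  # 100, matching the number of respondents used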
Variable definitions and indicators:

Brand Awareness — the ability of a prospective buyer to recognize or recall that a brand is part of a particular brand category (Aaker, 2003). Indicators: (1) this SIM card is a famous brand; (2) this SIM card is the best prepaid starter card; (3) this SIM card is the leading choice when one wants to buy a product.

Brand Associations — the bonds consumers form between a brand and essential product attributes, such as logos, slogans, or well-known personalities (Grewal and Levy, 2008: 280).
RESULTS AND DISCUSSION
A SIM card is a card used to subscribe to one cellular operator. SIM is an abbreviation of Subscriber Identity Module. This smart card is produced in the form of an Integrated Circuit (IC), which stores data for GSM (SIM card) and CDMA (RUIM card) cellular phone customers. The SIM card functions as a small information card that contains subscription information and other personal information. Testing the joint effect of the independent variables on the dependent variable was done using the F test. The statistical calculation shows a calculated F value of 120.076 with a significance of 0.000 < 0.05. This means that, jointly, brand awareness and brand associations have a significant influence on the decision to buy a SIM card.
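As a hedged sketch (not part of the original analysis), the p-value of the reported F statistic can be checked in Python; df1 = 2 is the number of predictors, and df2 = 97 is an assumption based on the stated 100 respondents (n - k - 1):

from scipy.stats import f

F_stat, df1, df2 = 120.076, 2, 97
p = f.sf(F_stat, df1, df2)  # upper-tail probability of the F distribution
print(f"p = {p:.3g}")       # far below 0.05, consistent with the reported "0.000 < 0.05"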
Effect of Brand Awareness on Purchasing Decisions
The hypothesis stating that brand awareness influences purchasing decisions is rejected. Based on the data analysis obtained in the t-test, the resulting coefficient is -0.070, so the brand awareness variable has a negative but non-significant effect on the purchase decision (Y). This is evidenced by a probability significance value of 0.223 > 0.05 and a calculated t value of -1.227 < the t-table value of 0.1996.
Although the majority of respondents' answers scored well in terms of recognizing, remembering, and being aware of the existence of these products, none of that was a reason for making a purchase decision.
These results are in line with research by Iriani (2011) and Rahmadhano (2014), which found that the brand awareness variable (X1) has a negative and non-significant effect on purchasing decisions (Y). Students' communication needs in this internet era have made them selective in choosing which provider delivers the best quality in terms of strong signal, fast internet, and low data/credit package prices. Based on the results from the student respondents, the need for internet data packages is far more important than voice communication (telephone/SMS). This can be seen from the small internet data quota provided by the SIM card, which led respondents to decide not to buy it.
Effect of Brand Associations on Purchasing Decisions
The hypothesis stating that brand associations positively influence purchase decisions is accepted. Based on the data analysis obtained in the t-test, the coefficient value of X2 is 0.589, so the brand associations variable (X2) has a significant and positive effect on purchasing decisions (Y). This is evidenced by a significance value of 0.000 < 0.05 and a calculated t value of 8.871 > 0.1996. According to Sangadji and Sopiah (2013: 324), an association is an attribute that already exists in a brand, and it will be stronger if the customer has a lot of experience dealing with that brand.

The SIM card association has various attributes and full features and is in line with current developments. It can be concluded that students use this SIM card because of the confidence that arises when using the product and because its available features suit students' needs. The most dominant variable affecting the dependent variable is brand associations (X2), with the highest calculated t value of 8.871 > 0.1996, a significance value of 0.000 < 0.05, and the highest beta coefficient of 0.589.
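As a hedged illustration (not part of the original analysis), the reported significance values can be recovered from the t statistics in Python, assuming df = 97 (100 respondents, two predictors):

from scipy.stats import t

df = 97  # assumed degrees of freedom: n - k - 1
for label, t_stat in [("brand awareness", -1.227), ("brand associations", 8.871)]:
    p = 2 * t.sf(abs(t_stat), df)  # two-sided p-value
    print(f"{label}: t = {t_stat}, p = {p:.3f}")
# brand awareness yields p of about 0.223, matching the paper's reported value;
# brand associations yields p < 0.001, consistent with the reported 0.000.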
CONCLUSION
Brand awareness does not have a significant effect on purchasing decisions, and the estimated effect is negative. This is because the internet data packages offered by the SIM card do not meet the needs of the students of the Faculty of Economics, classes of 2015 and 2016. To retain consumers and keep them using the SIM cards, providers should offer sufficiently large internet data packages.
The coefficient of determination obtained was 0.835. This means that 83.5% of purchasing decisions can be explained by the brand awareness and brand association variables, while the remaining 16.5% of SIM card purchasing decisions are influenced by other variables not examined in this study.
Consumers should be more careful in determining their choice of products, because providers run many advertisements with terms and conditions of which consumers are often unaware. Consumers should also establish positive communication with the provider, so that retailers and traders learn of consumer complaints; one way is by utilizing after-sales activities, namely filing claims with the provider in the event of reduced consumer confidence in the product, maintaining loyalty through the suggestion box service provided by the provider at the nearest counter. | 2020-10-15T11:02:17.543Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "ddc0faaa9fbbfc79a48430fb3494fadda51b429a",
"oa_license": "CCBYSA",
"oa_url": "https://www.ilomata.org/index.php/ijjm/article/download/48/47",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ddc0faaa9fbbfc79a48430fb3494fadda51b429a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
214727671 | pes2o/s2orc | v3-fos-license | Mean curvature flow with generic initial data
We show that the mean curvature flow of generic closed surfaces in $\mathbb{R}^{3}$ avoids asymptotically conical and non-spherical compact singularities. We also show that the mean curvature flow of generic closed low-entropy hypersurfaces in $\mathbb{R}^{4}$ is smooth until it disappears in a round point. The main technical ingredient is a long-time existence and uniqueness result for ancient mean curvature flows that lie on one side of asymptotically conical or compact shrinking solitons.
1. Introduction

1.1. Overview of results. Mean curvature flow is the analog of the heat equation in extrinsic differential geometry. A family of surfaces M(t) ⊂ R^3 flows by mean curvature flow if
\[
\frac{\partial x}{\partial t} = \mathbf{H}_{M(t)}(x),
\]
where \mathbf{H}_{M(t)}(x) denotes the mean curvature vector of the surface M(t) at x. Unlike the traditional heat equation, mean curvature flow is nonlinear. As a result, the mean curvature flow starting at a closed surface M ⊂ R^3 is guaranteed to become singular in finite time. There are numerous possible singularities and, in general, they can lead to a breakdown of (partial) regularity and of well-posedness. A fundamental problem, then, is to understand singularities as they arise. A common theme in PDEs arising in geometry and physics is that a generic solution exhibits better regularity or well-posedness behavior than the worst-case scenario. This aspect of the theory of mean curvature flow has been guided by the following well-known conjecture of Huisken [Ilm95a, #8]: A generic mean curvature flow has only spherical and cylindrical singularities.
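As a standard sanity check of this equation (a textbook example, not specific to this paper), the round sphere shrinks self-similarly and becomes extinct in finite time:

% The round sphere in R^3 solves mean curvature flow: for M(t) = S^2_{r(t)},
% the mean curvature vector points inward with length 2/r(t), so the flow
% reduces to the ODE r'(t) = -2/r(t), giving
\[
  M(t) = S^2_{r(t)}, \qquad r(t) = \sqrt{r_0^2 - 4t},
\]
% so the sphere vanishes at the finite time T = r_0^2/4, illustrating the
% finite-time singularity formation described above.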
The implications of this conjecture on the partial regularity and well-posedness of mean curvature flow is an important field of research in itself. See Section 1.2 for the state of the art on the precise understanding of the effects of spherical and cylindrical singularities on the partial regularity and well-posedness of mean curvature flow.
The most decisive step toward Huisken's conjecture was taken in the trailblazing work of Colding-Minicozzi [CM12a], who proved that spheres and cylinders are the only linearly stable singularity models for mean curvature flow. In particular, all remaining singularity models are linearly unstable and ought to occur only non-generically. See Section 1.3 for more discussion.
In this paper we introduce a new idea and take a second step toward the genericity conjecture and confirm that a large class of unstable singularity models are, in fact, avoidable by a slight perturbation of the initial data. Roughly stated, we prove: The mean curvature flow of a generic closed embedded surface in R 3 encounters only spherical and cylindrical singularities until the first time it encounters a singularity (a) with multiplicity ≥ 2, or (b) that has a cylindrical end but which is not globally a cylinder.
Cases (a) and (b) are conjectured to not occur (see the nonsqueezing conjecture and the no cylinder conjecture in [Ilm03]). This would yield Huisken's conjecture in full.
Using a similar method, we also prove a related statement for hypersurfaces in R 4 : The mean curvature flow starting from a generic hypersurface M ⊂ R 4 with low entropy remains smooth until it dissapears in a round point.
In particular, this gives a direct proof of the low-entropy Schoenflies conjecture (recently announced by Bernstein-Wang).
Our genericity results rely on keeping simultaneous track of flows coming out of a family of auxiliary initial surfaces on either side of M. The key ingredient is the following new classification result for ancient solutions to mean curvature flow that lie on one side of an asymptotically conical or compact singularity model: For any smooth asymptotically conical or compact self-shrinker Σ, there is a unique ancient mean curvature flow lying on one side of √(−t) Σ for all t < 0. The flow exhibits only multiplicity-one spherical or cylindrical singularities. See Section 1.4 for more detailed statements of our results, and Section 1.5 for a discussion of the method and the technical ingredient.
1.2. Singularities in mean curvature flow. Thanks to Huisken's monotonicity formula, if X is a space-time singular point of a mean curvature flow M, it is possible to perform a parabolic rescaling around X and take a subsequential (weak) limit to find a tangent flow M′ [Hui90, Ilm95b]. A tangent flow is always self-similar in the sense that it only flows by homotheties. If the t = −1 slice of the flow is a smooth hypersurface Σ, then Σ satisfies
\[
\mathbf{H} + \tfrac{1}{2} x^{\perp} = 0,
\]
where \mathbf{H} is the mean curvature vector of Σ and x^{\perp} is the normal component of x. In this case, we call Σ a self-shrinker. The tangent flow M′ at a time t < 0 is then √(−t) Σ, though possibly with multiplicity.
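For concreteness, the self-shrinker equation can be checked directly on the simplest examples (a standard computation, recorded here for orientation):

% On the round sphere S^n_r \subset R^{n+1}, the position vector is purely
% normal, x^\perp = r\nu for the outward unit normal \nu, while
% \mathbf{H} = -(n/r)\nu. Hence
\[
  \mathbf{H} + \tfrac12 x^{\perp} = \Big(\tfrac{r}{2} - \tfrac{n}{r}\Big)\nu = 0
  \iff r = \sqrt{2n},
\]
% and an analogous computation shows that the generalized cylinders
% R^{n-k} \times S^k(\sqrt{2k}) are self-shrinkers as well.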
The next level of difficulty is to understand flows of surfaces in R 3 that needn't be globally mean convex, but which happen to only experience multiplicity-one cylindrical singularities. There have been major recent advances on this topic. Colding-Minicozzi [CM16] proved (using their earlier work [CM15], cf. [CIM15]) that mean curvature flows in R 3 having only multiplicity-one cylindrical tangent flows are completely smooth at almost every time and any connected component of the singular set is contained in a time-slice. More recently, Choi-Haslhofer-Hershkovits showed [CHH18] (see also [CHHW19]) that there is a (space-time) mean-convex neighborhood of any cylindrical singularity. In particular, combined with [HW17], this settles the well-posedness of a mean curvature flow in R 3 with only multiplicity-one cylindrical tangent flows.
For flows of general surfaces in R 3 , which may run into arbitrary singularities, our understanding of mean curvature flow near a singular point is quite limited at present.
The most fundamental issue is the potential for higher multiplicity to arise when taking rescaled limits around a singular point. Nonetheless, some important information is available about the tangent flows at the first singular time due to important results of Brendle [Bre16] classifying genus zero shrinkers in R^3 and of Wang [Wan16] showing that a smooth finite genus shrinker in R^3 has ends that are smoothly asymptotically conical or cylindrical. Besides the issue of multiplicity, another problem is the huge number of potential shrinkers that could occur as tangent flows, greatly complicating the analysis of the flow near such a singular point. (This issue presumably gets considerably worse for hypersurfaces in R^{n+1}.)

1.3. Entropy and stability of shrinkers. Huisken has conjectured [Ilm95a, #8] that cylinders and spheres are the only shrinkers that arise in a generic (embedded) mean curvature flow. This conjecture provides a promising way of avoiding the latter problem mentioned above.
Huisken's conjecture was reinforced by the numerical observation that non-cylindrical self-shrinkers are highly unstable. This instability was rigorously formulated and proven in the foundational work of Colding-Minicozzi [CM12a]. They defined the entropy
\[
\lambda(M) := \sup_{x_0 \in \mathbb{R}^{n+1},\, t_0 > 0} (4\pi t_0)^{-\frac{n}{2}} \int_M e^{-\frac{|x - x_0|^2}{4 t_0}}\, d\mathcal{H}^n
\]
and observed that t → λ(M_t) is non-increasing along any mean curvature flow, by virtue of Huisken's monotonicity formula. Moreover, they proved that any smooth self-shrinker with polynomial area growth, other than the generalized cylinders (i.e., R^{n−k} × S^k(√(2k)) with k = 0, ..., n), can be smoothly perturbed to have strictly smaller entropy. This result has been used fundamentally in [CIMW13, BW16] (cf. [HW19]), though we will not need to make explicit use of it in this paper.
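For orientation, the simplest entropy computation (standard, and consistent with the normalization above) shows that hyperplanes have entropy one:

% For a hyperplane P \subset R^{n+1}, each Gaussian integral is at most 1:
\[
  (4\pi t_0)^{-n/2} \int_P e^{-|x - x_0|^2 / 4 t_0}\, d\mathcal{H}^n(x)
  \le (4\pi t_0)^{-n/2} \int_{\mathbb{R}^n} e^{-|y|^2/4t_0}\, dy = 1,
\]
% with equality when x_0 \in P, so \lambda(P) = 1. Non-planar shrinkers have
% strictly larger entropy; for instance \lambda(S^n) \in (1,2).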
There have been many important applications of Colding-Minicozzi's classification of entropy-stable shrinkers. First, they showed their result can be used to define a piecewise mean curvature flow that avoids non-spherical compact self-shrinkers. This idea has been used to classify low-entropy shrinkers, beginning with the work of Colding-Ilmanen-Minicozzi-White [CIMW13] who showed that the round sphere S n ⊂ R n+1 has the least entropy among all non-planar self-shrinkers. Subsequently, Bernstein-Wang extended this to show that the round sphere has least entropy among all closed hypersurfaces [BW16] (see also [Zhu20]) and that the cylinder R×S 2 has second least entropy among non-planar self-shrinkers in R 3 [BW18d]. Bernstein-Wang have recently used these classification results, along with a surgery procedure, to show that if M 3 ⊂ R 4 has λ(M ) ≤ λ(S 2 × R), then M is diffeomorphic to S 3 [BW18d] (see also [BW18a]).
1.4. Our perturbative statements. Let us describe our main perturbative results. First, we have a low-entropy result in R 4 : Theorem 1.1. Let M 3 ⊂ R 4 be any closed connected hypersurface with λ(M ) ≤ λ(S 2 ). There exist arbitrarily small C ∞ graphs M ′ over M so that the mean curvature flow starting from M ′ is smooth until it disappears in a round point.
We state and prove this ahead of our result for R 3 because its statement and proof are simpler. The low-entropy assumption allows us to perturb away all unstable singularities (in the sense of Colding-Minicozzi) and thus obtain a fully regular nearby flow. In fact, Theorem 1.1 is a special case of Theorem 10.1, which applies in all dimensions under suitable conditions. See also Theorem 10.7 and Corollary 10.8 for results showing that the above behavior is generic in a precise sense.
Theorem 1.1 immediately implies the following low-entropy Schoenflies theorem, recently announced by Bernstein-Wang (cf. [BW19b, p. 4]).[1]

Corollary 1.2 (Bernstein-Wang [BW20]). If M^3 ⊂ R^4 is a closed connected hypersurface with λ(M) ≤ λ(S^2), then M bounds a smooth standard 4-ball and is smoothly isotopic to a round S^3.[2]

For generic mean curvature flow of embedded surfaces in R^3, we show more: Theorem 1.3. Let M^2 ⊂ R^3 be a closed embedded surface. There exist arbitrarily small C^∞ graphs M′ over M so that: (1) the (weak) mean curvature flow of M′ has only multiplicity-one spherical and cylindrical tangent flows until it goes extinct, or (2) there is some T > 0 so that the previous statement holds for times t < T and at time T there is a tangent flow of M′ that either (a) has multiplicity ≥ 2, or (b) has a cylindrical end, but is not a cylinder.
Note two things: • In the R 3 theorem, unlike in the low-entropy higher dimensional theorems, we need to make use of a weak notion of mean curvature flow because we are placing no entropy assumptions and are thus interested in flowing through spherical and cylindrical singularities. See Theorem 11.1 for the precise statement, which includes the notion of weak mean curvature flow that we make use of. • Both of the potential tangent flows in case (2) are conjectured to not exist (see the nonsqueezing conjecture and the no cylinder conjecture in [Ilm03]). There are two features of our work that distinguish it from previous related work: • We only need to perturb the initial condition. See [CM12a] for a piecewise flow construction that perturbs away compact singularity models (see also [Sun18]). • We are able to perturb away (certain) non-compact singularity models.
[1] We emphasize that our proof of Theorem 1.1 relies heavily on several of Bernstein-Wang's earlier works [BW16, BW17b, BW18d] and as such our proof here of Corollary 1.2 has several features in common with their announced strategy. The key point here, however, is that our study of generic flows in Theorem 1.1 allows us to completely avoid the need for any surgery procedure or the refined understanding of expanders obtained in [BW17a, BW18c, BW18b, BW19b, BW19a].

[2] The isotopy from M to the round S^3 follows from Theorem 1.1, and the fact that M bounds a smooth 4-ball is then a consequence of the Isotopy Extension Theorem (cf. [Hir76, §8, Theorem 1.3]).

1.5. Our perturbative method: ancient one-sided flows. For a fixed hypersurface M_0 ⊂ R^{n+1}, one has a weak mean curvature flow t → M_0(t) starting at M_0. Suppose that X = (x, T) is a singular point for t → M_0(t). The usual method for
analyzing the singularity structure at X is to study the tangent flows of t → M_0(t) at X, i.e., the (subsequential) limit of the flows t → λ(M_0(T + λ^{−2} t) − x) =: M^λ_0(t) as λ → ∞. As discussed above, by Huisken's monotonicity formula, for t < 0, this will weakly (subsequentially) converge to a shrinking flow t → M′(t) associated to a (weak) self-shrinker.
Our new approach to generic mean curvature flow is to embed the flow t → M_0(t) in a family of flows by first considering a local foliation {M_s}_{s∈(−1,1)} and flowing the entire foliation, simultaneously, by mean curvature flow t → M_s(t). The avoidance principle for mean curvature flow implies that M_s(t) ∩ M_{s′}(t) = ∅ for s ≠ s′. The entire foliation can be passed to the limit simultaneously, i.e., we can consider the flows t → λ(M_s(T + λ^{−2} t) − x) =: M^λ_s(t) and send λ → ∞.
If we choose s ց 0 diligently as λ → ∞, then after passing to a subsequence, t → M^λ_s(t) will converge to a non-empty flow t → M̃(t) that stays on one side of the original tangent flow t → M′(t) and which is ancient, i.e., it exists for all sufficiently negative t. If we can prove that the one-sided ancient flow t → M̃(t) has certain nice properties (i.e., only cylindrical singularities), then we can exploit this to find a choice of small s so that t → M_s(t) is well behaved.
We proceed to give more details as to how we exploit this ancient one-sided flow, t → M̃(t). Assume that the tangent flow to M_0(t) at X is smooth and has multiplicity one, so M′(t) = √(−t) Σ for t < 0. Then, considering the rescaled flow τ → e^{τ/2} M̃(−e^{−τ}), we note that it converges to Σ as τ → −∞ (a priori, this could occur with multiplicity, but in practice one can rule this out by upper semi-continuity of density). In the current work, we will deal with all Σ that are: (i) compact but not spheres, or (ii) non-compact with asymptotically (smoothly) conical structure. These tangent flows encompass all the necessary ones for our aforementioned theorem statements, by virtue of L. Wang's [Wan16] characterization of the asymptotic structure of non-compact singularity models. Our definitive rigidity theorem for ancient one-sided flows is:

Theorem 1.4. Let Σ^n ⊂ R^{n+1} be a smooth self-shrinker that is either compact or asymptotically (smoothly) conical. Up to parabolic dilation around (0, 0) ∈ R^{n+1} × R, there exists a unique ancient solution to mean curvature flow t → M̃(t) so that M̃(t) is disjoint from √(−t) Σ and has entropy < 2F(Σ).
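The change of variables behind the rescaled flow used above is standard; we record it for the reader's convenience:

% If t \mapsto M(t) moves by mean curvature and \tau = -\log(-t), i.e.
% S(\tau) := e^{\tau/2} M(-e^{-\tau}), then for y = e^{\tau/2} x with
% x \in M(t), t = -e^{-\tau}, one computes (using that mean curvature scales
% inversely with length, H_{\lambda M}(\lambda x) = \lambda^{-1} H_M(x)):
\[
  \frac{\partial y}{\partial \tau}
  = \tfrac12 e^{\tau/2} x + e^{\tau/2}\, e^{-\tau}\, \frac{\partial x}{\partial t}
  = \tfrac12 y + e^{-\tau/2}\, \mathbf{H}_{M(t)}(x)
  = \mathbf{H}_{S(\tau)}(y) + \tfrac12 y .
\]
% Thus the rescaled flow satisfies \partial_\tau y = \mathbf{H} + y/2; its static
% solutions are precisely the self-shrinkers \mathbf{H} + \tfrac12 y^\perp = 0
% (up to tangential motions), and t \to -\infty corresponds to \tau \to -\infty.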
Remark. There has recently been an outburst of activity regarding the rigidity of ancient solutions to geometric flows. We mention here [BHS11, Wan11, DHS12, DHS10, HS15, HH16, DdPS18, BK17, BC19, BC18, Bre18, ABDS19, BDS20]. In the setting at hand, Theorem 1.4 was motivated from the recent work in [CM19] on the classification of compact ancient solutions of gradient flows of elliptic functionals in Riemannian manifolds. However, this is the first time that the one-sidedness condition has been exploited so crucially, and geometrically, in the setting of ancient geometric flows. In the elliptic setting, there have been interesting exploitations of one-sided foliations by minimal surfaces; see, e.g., Hardt-Simon [HS85], Ilmanen-White [IW15], and Smale [Sma93]. Our current parabolic setting, however, presents a number of complications that come from the fact that the shrinkers Σ we are interested in are primarily noncompact, which makes the analysis resemble what it might look like in the elliptic setting for cones with singular links.
Remark. Neither of the hypothesis in Theorem 1.4 can be removed. There can be many ancient flows that intersect √ −t Σ and converge to Σ as t → −∞ after rescaling; see Theorem 6.1. Also, for a ≥ 0, the grim reaper in the slab R × (a, a + π) is a nontrivial example of an ancient flow that is disjoint from its tangent flow at −∞, 2[R × {0}].
Next, we show that M̃(t) encounters only generic singularities for as long as it exists. We establish many properties of M̃(t) in Theorem 9.1, and some of the important ones are summarized here.
Theorem 1.5. Let t → M̃(t), Σ^n ⊂ R^{n+1} be as in Theorem 1.4 and 2 ≤ n ≤ 6. Then:
• The flow t → M̃(t) only has multiplicity-one, generalized cylindrical singularities.
• When Σ is asymptotically conical, for t > 0 the rescaled slice t^{−1/2} M̃(t) is an outermost expander associated to the asymptotic cone of Σ.
To prove Theorem 1.5, we show that the one-sided ancient flow t → M̃(t) must be shrinker mean convex; geometrically, this means that the rescaled flow moves in one direction. This is where the one-sided property is crucially used. Indeed, we show that the evolution of a one-sided flow is dominated by the first eigenfunction of the linearization of Gaussian area along Σ, which in turn yields shrinker mean convexity due to the spectral instability of shrinkers discovered in [CM12a]. Shrinker mean convexity is preserved under the flow and can be used analogously to mean convexity to establish regularity of the flow (cf. [Smo98, Whi00, Whi03, Lin15, HW19]). We emphasize that our analysis of the flow M̃(t) in Theorem 1.5 is influenced by the work of Bernstein-Wang [BW17b], where they studied a (nearly ancient) flow on one side of an asymptotically conical shrinker of low entropy. Because we do not assume that the flow has low entropy (besides assuming the limit at −∞ has multiplicity one), we must allow for singularities (while in [BW17b], the flow is a posteriori smooth). In particular, this complicates the analysis of the flow near t = 0 significantly.

1.6. Other results. We list several other new results we've obtained in this work that might be of independent interest:
• For any smooth compact or asymptotically conical shrinker Σ, we construct an I-parameter family of smooth ancient mean curvature flows (where I is the index of Σ as a critical point of Gaussian area, as defined in (3.8)) that, after rescaling, limit to Σ as t → −∞; see Theorem 6.1.
• We show that the outermost flows of the level set flow of a regular cone are smooth self-similarly expanding solutions. We also construct associated expander mean convex flows that converge to the given expander after rescaling; see Theorem 8.21.
• We include a proof of a localized version of the avoidance principle for weak set flows due to Ilmanen; see Theorem C.3. This implies a strong version of the Frankel property for shrinkers; see Corollary C.4.
• We improve known results concerning the connectivity of the regular set of a unit-regular Brakke flow with sufficiently small singular set. See Corollary F.5.
• We localize the topological monotonicity of White [Whi95]. In particular, our results should be relevant in the context of the strict genus reduction conjecture of Ilmanen [Ilm03, #13]. See Appendix G and the proof of Proposition 11.4.

1.7. Organization of the paper. In Section 2 we recall some conventions and definitions used in the paper. In Section 3 we analyze the linearized graphical mean curvature flow equation over an asymptotically conical shrinker. We use this to study the nonlinear problem in Section 4. These results are applied in Section 5 to prove our main analytic input, Corollary 5.2, the uniqueness of ancient one-sided graphical flows. Section 6 contains a construction of the full I-parameter family of ancient flows. This is not used elsewhere, since we construct the one-sided flows by GMT methods allowing us to flow through singularities. We begin this GMT construction in Section 7, where we construct an ancient one-sided Brakke/weak-set flow pair. In Section 8 we establish optimal regularity of the ancient one-sided flow. We put everything together in Section 9 and give the full existence and uniqueness statement for the ancient one-sided flows.
We apply this construction to the study of the mean curvature flow of generic low entropy hypersurfaces in Section 10 and to the study of the first non-generic time of the mean curvature flow of a generic surface in R 3 in Section 11.
In Appendix A we improve some decay estimates for asymptotically conical ends of shrinkers. In Appendix B we recall Knerr's non-standard parabolic Schauder estimates. We prove Ilmanen's localized avoidance principle in Appendix C. In Appendix E we study weak set flows coming out of cones. We show that Brakke flows with sufficiently small singular set have connected regular part in Appendix F. Finally, in Appendix G we localize certain topological monotonicity results.
Preliminaries
In this section we collect some useful definitions, conventions, and useful ways to recast mean curvature flow, which we will make use of in the sequel.
2.1. Spacetime. We will often consider the spacetime of our mean curvature flows, R^{n+1} × R, with its natural time-projection map 𝔱 : R^{n+1} × R → R, 𝔱(x, t) = t. For any subset E ⊂ R^{n+1} × R we will denote the time-t slice by E(t) := {x ∈ R^{n+1} : (x, t) ∈ E}.

2.2. The spacetime track of a classical flow. Let us fix a compact n-manifold M, possibly with boundary. Suppose that f : M × [a, b] → R^{n+1} is a smooth family of embeddings whose images are flowing by mean curvature flow. Then, we call the spacetime track
\[
\mathcal{M} := \{ (f(p, t), t) : p \in M,\ t \in [a, b] \}
\]
a classical mean curvature flow and define the heat boundary of M by
\[
\partial \mathcal{M} := \{ (f(p, a), a) : p \in M \} \cup \{ (f(p, t), t) : p \in \partial M,\ t \in [a, b] \}.
\]
By the maximum principle, classical flows that intersect must intersect in a point that belongs to either one of their heat boundaries (cf. [Whi95, Lemma 3.1]).
2.3. Weak set flows and level set flows. If Γ ⊂ R n+1 × R + (where R + = [0, ∞) could be shifted as necessary) is a closed subset of spacetime, then M ⊂ R n+1 × R is a weak set flow (generated by Γ) if: (1) M and Γ coincide at t = 0 and (2) if M ′ is a classical flow with ∂M ′ disjoint from M and M ′ disjoint from Γ, then M ′ is disjoint from M. We will often consider the analogous definition with R + replaced by R in which case one should omit requirement (1).
There may be more than one weak set flow generated by a given Γ. See [Whi95]. However, there is one weak set flow that contains all other weak set flows generated by Γ. It is called the level set flow (or biggest flow). For Γ ⊂ R^{n+1} × R_+ as above, we define it inductively as follows. Set W_0 := (R^{n+1} × R_+) \ Γ, and then let W_{k+1} be the union of all classical flows M′ with M′ disjoint from Γ and ∂M′ ⊂ W_k. We define the level set flow (or biggest flow) generated by Γ as the complement
\[
(\mathbb{R}^{n+1} \times \mathbb{R}_+) \setminus \bigcup_{k \ge 0} W_k .
\]
See [Ilm94, Whi15] for further references for weak set flows and level set flows.
We will sometimes engage in a slight abuse of notation, referring to a weak set flow (or a level set flow) generated by a closed subset Γ 0 ⊂ R n+1 , when we really mean that it is generated by Γ 0 × {0} (or a suitable time-translate) in the sense defined above.
2.4. Integral Brakke flows. Another important notion of weak mean curvature flow is a Brakke flow (cf. [Bra75,Ilm94]). We follow here the conventions used in [Whi19].
An (n-dimensional) integral Brakke flow in R n+1 is a 1-parameter family of Radon measures (µ(t)) t∈I over an interval I ⊂ R so that: (1) For almost every t ∈ I, there exists an integral n-dimensional varifold V (t) with µ(t) = µ V (t) so that V (t) has locally bounded first variation and has mean curvature H orthogonal to Tan(V (t), ·) almost everywhere.
(2) For a bounded interval [t_1, t_2] ⊂ I and any compact set K ⊂ R^{n+1},
\[
\int_{t_1}^{t_2} \int_K (1 + |H|^2)\, d\mu(t)\, dt < \infty .
\]
(3) (Brakke's inequality) If [t_1, t_2] ⊂ I and φ ∈ C^1_c(R^{n+1} × [t_1, t_2]) is non-negative, then
\[
\int \phi(\cdot, t_2)\, d\mu(t_2) - \int \phi(\cdot, t_1)\, d\mu(t_1)
\le \int_{t_1}^{t_2} \int \Big( -\phi\, |H|^2 + H \cdot \nabla \phi + \frac{\partial \phi}{\partial t} \Big)\, d\mu(t)\, dt .
\]
We will often write M for a Brakke flow (µ(t))_{t∈I}, with the understanding that we're referring to the family I ∋ t → µ(t) of measures satisfying Brakke's inequality.
A key fact that relates Brakke flows to weak set flows, which we will use implicitly throughout the paper, is that the support of the spacetime track of a Brakke flow is a weak set flow [Ilm94, 10.5]. (The definition of Brakke flow used in [Ilm94] is slightly different than the one given here, but it is easy to see that the proof of [Ilm94, 10.5] applies to our definition as well.)

2.5. Density and Huisken's monotonicity. For X_0 := (x_0, t_0) ∈ R^{n+1} × R consider the (backward) heat kernel based at (x_0, t_0):
\[
(2.1) \qquad \rho_{(x_0, t_0)}(x, t) := (4\pi (t_0 - t))^{-\frac{n}{2}}\, e^{-\frac{|x - x_0|^2}{4(t_0 - t)}}, \qquad t < t_0 .
\]
For a Brakke flow M and r > 0 we set
\[
\Theta_{\mathcal{M}}(X_0, r) := \int \rho_{(x_0, t_0)}(x, t_0 - r^2)\, d\mu(t_0 - r^2)(x) .
\]
This is the density ratio at X_0 at a fixed scale r > 0. Huisken's monotonicity formula [Hui90] implies that r → Θ_M(X_0, r) is non-decreasing, so in particular, we can define the density of M at X_0 by
\[
\Theta_{\mathcal{M}}(X_0) := \lim_{r \searrow 0} \Theta_{\mathcal{M}}(X_0, r) .
\]
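For the reader's orientation, we recall the standard differential form of Huisken's monotonicity formula for a smooth flow (see [Hui90] for the general statement):

\[
  \frac{d}{dt} \int_{M(t)} \rho_{(x_0,t_0)}\, d\mathcal{H}^n
  = - \int_{M(t)} \Big| \mathbf{H} + \frac{(x - x_0)^{\perp}}{2(t_0 - t)} \Big|^2
      \rho_{(x_0,t_0)}\, d\mathcal{H}^n \;\le\; 0, \qquad t < t_0,
\]
% so r \mapsto \Theta_{\mathcal{M}}(X_0, r) is monotone, the density
% \Theta_{\mathcal{M}}(X_0) is well defined, and equality on an interval forces
% the flow to be self-similarly shrinking about X_0.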
2.6. Unit-regular and cyclic Brakke flows. An integral Brakke flow M = (µ(t))_{t∈I} is said to be • unit-regular if M is smooth in some space-time neighborhood of any spacetime point X for which Θ_M(X) = 1; • cyclic if, for a.e. t ∈ I, µ(t) = µ_{V(t)} for an integral varifold V(t) whose unique associated rectifiable mod-2 flat chain [V(t)] has ∂[V(t)] = 0 (see [Whi09]). Integral Brakke flows constructed by Ilmanen's elliptic regularization approach [Ilm94] (see also [Whi19, Theorem 22]) are unit-regular and cyclic. More generally, if M_i are unit-regular (resp. cyclic) integral Brakke flows with M_i ⇀ M, then M is also unit-regular (resp. cyclic) by [Whi05]. Here, we say that integral Brakke flows M_i converge to an integral Brakke flow M, written M_i ⇀ M, if: (1) µ_i(t) ⇀ µ(t) as Radon measures for every t, and (2) for a.e. t, we can pass to a subsequence depending on t so that V_i(t) ⇀ V(t) as varifolds. The motivation for this definition of convergence is that these are the conditions that follow (after passing to a subsequence) if we have local mass bounds for M_i and seek to prove a compactness theorem (cf. [Ilm94, §7]).
2.7. Self-shrinkers. A self-shrinker is a smooth hypersurface Σ ⊂ R^{n+1} satisfying
\[
(2.4) \qquad \mathbf{H}_\Sigma + \tfrac{1}{2} x^{\perp} = 0,
\]
where \mathbf{H}_\Sigma is the mean curvature vector of Σ and x^{\perp} is the normal component of x.
We will always assume that Σ has empty boundary, unless specified otherwise. One can easily check that (2.4) is equivalent to any of the following properties:
• t → √(−t) Σ is a mean curvature flow for t < 0,
• Σ is a minimal hypersurface for the metric e^{−|x|²/2n} g_{R^{n+1}}, or
• Σ is a critical point of the F-functional
\[
F(\Sigma) := (4\pi)^{-\frac{n}{2}} \int_\Sigma e^{-\frac{|x|^2}{4}}\, d\mathcal{H}^n .
\]
We will say that Σ is asymptotically conical if there is a regular cone C (i.e., the cone over a smooth submanifold of S^n) so that λΣ → C in C^∞_{loc}(R^{n+1} \ {0}) as λ ց 0.

Remark. By considering the t ր 0 limit (in the Brakke flow sense) of the flow t → √(−t) Σ, we see that lim_{λ ց 0} λΣ is unique in the Hausdorff sense, so the asymptotic cone of Σ must be unique. Moreover, because we have assumed that the convergence is in C^∞_{loc}, there is no potential higher multiplicity in the limit (see, e.g., [Wan16, §5]).

2.8. Entropy and stability. Following [CM12a], one uses the backward heat kernel ρ_{(x_0,t_0)} from (2.1) to define the entropy of a Radon measure µ on R^{n+1} by
\[
(2.5) \qquad \lambda(\mu) := \sup_{x_0 \in \mathbb{R}^{n+1},\, t_0 > 0} \int \rho_{(x_0, t_0)}(x, 0)\, d\mu(x) .
\]
Then, one can define the entropy of an arbitrary Brakke flow M = (µ(t))_{t∈I} by:
\[
(2.6) \qquad \lambda(\mathcal{M}) := \sup_{t \in I} \lambda(\mu(t)) .
\]
Linearized rescaled flow equation
Let Σ^n ⊂ R^{n+1} be a smooth properly immersed asymptotically conical shrinker. (The analysis here also holds in the much simpler case of compact Σ.)

3.1. Spectral theory in Gaussian L² space. We consider the following operator on Σ:
\[
(3.1) \qquad L u := \Delta_\Sigma u - \tfrac{1}{2}\, x \cdot \nabla_\Sigma u + \tfrac{1}{2}\, u + |A_\Sigma|^2 u .
\]
This is the "stability" operator for the F-functional in Section 2.7, in the sense that, for any compactly supported function u : Σ → R, the second variation of F at Σ in the normal direction u is
\[
- \int_\Sigma u\, L u\, \rho\, d\mathcal{H}^n,
\]
where ρ is the Gaussian weight ρ(x) := (4π)^{−n/2} e^{−|x|²/4}, i.e., ρ := ρ_{(0,0)}(·, −1) in the notation of (2.1). See [CM12a, Theorem 4.1]. This stability operator, (3.1), is only self-adjoint if we work on Sobolev spaces weighted by ρ. We thus define a weighted L² dot product for measurable functions u, v : Σ → R:
\[
\langle u, v \rangle_W := \int_\Sigma u v\, \rho\, d\mathcal{H}^n .
\]
This induces a norm ∥·∥_W and a Hilbert space L²_W(Σ) := { u : ∥u∥_W < ∞ }. Likewise, we define the higher order weighted Sobolev spaces
\[
H^k_W(\Sigma) := \{ u : \nabla^j_\Sigma u \in L^2_W(\Sigma) \text{ for } j = 0, \dots, k \} .
\]
They are Hilbert spaces for the dot product
\[
(3.6) \qquad \langle u, v \rangle_{W,k} := \sum_{j=0}^{k} \langle \nabla^j_\Sigma u, \nabla^j_\Sigma v \rangle_W,
\]
whose induced norm is denoted ∥·∥_{W,k}. It is with respect to these weighted spaces that L is self-adjoint, i.e., ⟨Lu, v⟩_W = ⟨u, Lv⟩_W for compactly supported u, v. We have:

Lemma 3.1. There exist real numbers λ_1 ≤ λ_2 ≤ … and a corresponding complete L²_W-orthonormal set ϕ_1, ϕ_2, … : Σ → R such that Lϕ_i = −λ_i ϕ_i and lim_i λ_i = ∞.

Proof. This follows from the standard min-max construction of eigenvalues and eigenfunctions and the compactness of the inclusion H¹_W(Σ) ⊂ L²_W(Σ), in the spirit of the Rellich-Kondrachov theorem, proven in [BW17b, Proposition B.2].
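The self-adjointness claimed above can be seen from a one-line divergence identity (a standard computation, recorded here for convenience):

% Since \nabla_\Sigma \rho = -\tfrac12 \rho\, x^{T}, the drift terms combine
% into a weighted divergence:
\[
  \operatorname{div}_\Sigma\!\big(\rho\, \nabla_\Sigma u\big)
  = \rho \Big( \Delta_\Sigma u - \tfrac12\, x \cdot \nabla_\Sigma u \Big),
\]
% so for compactly supported u, v, integrating by parts twice gives
\[
  \langle Lu, v\rangle_W
  = \int_\Sigma \Big( -\nabla_\Sigma u \cdot \nabla_\Sigma v
      + \big(\tfrac12 + |A_\Sigma|^2\big)\, u v \Big)\rho\, d\mathcal{H}^n
  = \langle u, Lv \rangle_W .
\]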
Since λ_j → ∞ as j → ∞, there exist I, K ∈ N such that
\[
(3.8) \qquad \lambda_1 \le \dots \le \lambda_I < 0 \le \lambda_{I+1} \le \dots ;
\]
in particular, I is the index of Σ as a critical point of the F-functional. For notational convenience, for any binary relation ∼ ∈ {=, ≠, <, >, ≤, ≥} we define the spectral projector
\[
(3.9) \qquad \Pi_{\sim \mu}\, u := \sum_{j \,:\, \lambda_j \sim \mu} \langle u, \varphi_j \rangle_W\, \varphi_j .
\]
We wish to study solutions of the inhomogeneous linear PDE (∂/∂τ − L)u = h on Σ × (−∞, 0] in all that follows.
3.2. Weighted Hölder space notation. Let Ω ⊂ Σ. For k ∈ N, α ∈ (0, 1), we write ∥u∥_{k;Ω}, [u]_{α;Ω}, and ∥u∥_{k,α;Ω} for the standard C^k norm, C^α seminorm, and C^{k,α} norm. We define weighted counterparts of the quantities above, denoted ∥u∥^{(d)}_{k;Ω}, [u]^{(d)}_{α;Ω}, and ∥u∥^{(d)}_{k,α;Ω}, by inserting suitable powers of the weight function r̃. Here, r̃ is a function defined on Σ as in [CS19, Section 2] (roughly speaking, a smoothed-out radial weight comparable to 1 + |x|).
In any of the above, if we don't indicate the domain Ω over which the norm is taken, then it must be understood to be Ω = Σ.
We revisit the inhomogeneous linear PDE
\[
(3.25) \qquad \Big( \frac{\partial}{\partial \tau} - L \Big) u = h \quad \text{on } \Sigma \times \mathbb{R}_- .
\]
We will treat classical solutions of the PDE, i.e., ones that satisfy it pointwise. We use implicitly throughout the fact that regularity of h yields improved regularity of u by standard (local) parabolic Schauder theory.
Proof. It suffices to prove the special case in which the terminal time is 0, since the general claim will follow by translation in time.
3.4. Nonlinear error term. In graphical coordinates over Σ, the rescaled mean curvature flow takes the form of a quasilinear parabolic equation, (3.42), for the graph function u. We can rewrite (3.42) as
\[
\Big( \frac{\partial}{\partial \tau} - L \Big) u = E(u),
\]
where we take L to be precisely the operator from (3.1) and E(u) denotes the collected nonlinear error terms, defined by (3.45). The nonlinear error term can be estimated as follows:

Lemma 3.6. There exists η = η(Σ) such that, for u : Σ → R with ∥u∥^{(1)}_2 ≤ η, the nonlinear error term E(u) from (3.45) decomposes as in (3.46), where E_1, E_2 are smooth functions of (x, u, ∇_Σ u, ∇²_Σ u) on suitable domains, satisfying scale-invariant quadratic estimates (3.47), (3.48). In the above, C = C(Σ), r̃ is as in Section 3.2, and i, j, k, ℓ ≥ 0 count derivatives in the respective arguments.
Proof. It will be convenient to rewrite (3.45) in a form that isolates the contribution of the mean curvature operator. Estimates (3.47), (3.48) for E^H_1, E^H_2 are a simple consequence of scaling; indeed, they are the scale-invariant manifestation of the quadratic error nature of the linearization of H on an asymptotically conical manifold where, crucially, |A_Σ| + r̃ |∇_Σ A_Σ| ≤ C r̃^{−1}.
Dynamics of smooth ancient rescaled flows
In what follows, we make extensive use of the L² projection notation from (3.9). Suppose that h satisfies, respectively for each binary relation >, =, <, a corresponding decay bound of the form (4.3). If δ_0 is sufficiently small depending on Σ, then the projections Π_{>µ}u, Π_{=µ}u, Π_{<µ}u obey matching estimates, and one of the two alternatives in (4.10) holds. Here, C = C(Σ, C_0).
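To fix ideas, the mode-by-mode picture behind these dynamical statements is the following (a heuristic sketch using the eigenbasis of Lemma 3.1, not a replacement for the estimates below):

% Expanding u(\cdot,\tau) = \sum_j a_j(\tau)\,\varphi_j and
% h(\cdot,\tau) = \sum_j h_j(\tau)\,\varphi_j, the equation
% (\partial_\tau - L)u = h decouples into the ODEs
\[
  a_j'(\tau) = -\lambda_j\, a_j(\tau) + h_j(\tau) .
\]
% When h \equiv 0, solutions with \lambda_j > 0 blow up as \tau \to -\infty
% unless a_j \equiv 0, so ancient solutions are dominated by the finitely many
% modes with \lambda_j \le 0; the projections \Pi_{>\mu}, \Pi_{=\mu},
% \Pi_{<\mu} quantify this competition when h = E(u) \ne 0.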
The following lemma verifies that assumptions (4.3), (4.10) are met for ancient rescaled mean curvature flows that stay sufficiently close to Σ in the suitable scale-invariant sense: such flows satisfy (4.3) (with h = E(u)) and (4.10).
Proof. First let's show that δ(τ) satisfies (4.3) with h = E(u). We use Lemma 3.6's decomposition, (3.46). By virtue of (3.47) and (4.16), we only need to check the key estimate (4.17). We deal with the cases <, = differently than >. We can deal with < and = at the same time, and we use the symbol ≦ to denote either of these binary relations. From the asymptotics of (3.8), one easily sees the pointwise bound (4.18), where C depends on Σ, µ. In particular, (4.18) implies (4.17) for ≦ after integrating by parts and using (3.48) with i + j + k + ℓ ≤ 1. We now deal with the binary relation >. We can rewrite the left hand side of (4.17) as a sum of three terms. The second and third terms we estimate via (4.18) and then ∥Π_{≦µ} u(·, τ)∥_W ≤ ∥u(·, τ)∥_W and (3.48) with i + j + k + ℓ = 0. The first term we estimate by integrating by parts and then using ∥Π_{>µ} u(·, τ)∥_W ≤ ∥u(·, τ)∥_W and (3.48) with i + j + k + ℓ = 1. This completes our proof of (4.17) and thus of (4.3) with h = E(u).
5. Uniqueness of smooth one-sided ancient rescaled flows
In this section, we characterize smooth ancient flows lying on one side of an asymptotically conical shrinker $\Sigma$, with Gaussian density no larger than twice the entropy of $\Sigma$.
Corollary 5.2 (One-sided uniqueness for graphical flows). Up to time translation, there is at most one non-steady ancient rescaled mean curvature flow (S(τ )) τ ≤0 on one side of Σ satisfying (5.1).
Proof. We assume that $u, \bar{u} \not\equiv 0$ are two such solutions. It follows from Lemma 5.1 that we can translate either $u$ or $\bar{u}$ in time so that both decay at the same rate. It will also be convenient to write $\delta(\tau)$, $\bar{\delta}(\tau)$ for the quantities corresponding to (4.16) for $u$, $\bar{u}$. By Lemmas 4.3 and 5.1, both decay exponentially, with the estimate (5.9) holding for a fixed $C_1$. Finally, we introduce the notation $w := \bar{u} - u$, so that
\[ \left( \tfrac{\partial}{\partial\tau} - L \right) w = E(\bar{u}) - E(u) =: E_w. \]
Using (3.46) and the fundamental theorem of calculus,
\[ (5.11) \qquad E_w = \int_0^1 \sum_{m=1,2} \Big[ D_u E_m(\cdots)\, w + D_{\nabla u} E_m(\cdots) \cdot \nabla_\Sigma w + D_{\nabla^2 u} E_m(\cdots) \cdot \nabla^2_\Sigma w \Big] \, dt, \]
where, in all six instances, $\cdots$ stands for $(\cdot, u + tw, \nabla_\Sigma u + t\nabla_\Sigma w, \nabla^2_\Sigma u + t\nabla^2_\Sigma w)$. We take the $L^2_W$ dot product of (5.11) with $w$ and integrate by parts so that, in every term, we have at least two instances of $w$ and $\nabla_\Sigma w$. In particular, we will pick up derivatives of $D_A E_1$ and $D_A E_2$. Using Lemma 3.6, (5.1), and (5.9), we find the estimate (5.12) on these pairings. Here, $\|\cdot\|_{W,1}$ is the norm induced from (3.6) with $k = 1$.
We use (5.12) to derive two estimates on the evolution of $\|w\|_W^2$. First, together with (5.10) and (3.8), it implies the first of the two evolution inequalities. Second, recalling the definition of $L$ in (3.1), integrating by parts, and using (5.12), it follows that there exists a sufficiently negative $\tau_0$ such that the second evolution inequality holds for $\tau \le \tau_0$, with a fixed $C_3$.
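Both estimates stem from the same basic pairing computation, which we record as a sketch (ours; the precise constants are those of (5.9)-(5.12)): taking the $L^2_W$ inner product of the equation for $w$ with $w$,
\[ \tfrac{1}{2} \tfrac{d}{d\tau} \|w\|_W^2 = \langle Lw, w \rangle_W + \langle E_w, w \rangle_W, \qquad \langle Lw, w \rangle_W = \sum_i (-\lambda_i) \langle w, \varphi_i \rangle_W^2 \le -\lambda_1 \|w\|_W^2. \]
Keeping the full spectral information gives the sharper first estimate, while the crude bound above, with $\langle E_w, w \rangle_W$ absorbed via (5.12), gives the second.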
We next compute the evolution of $\|\nabla_\Sigma w\|_W^2$. To that end, we need a couple of preliminary computations. By the Gauss equation, the intrinsic Ricci curvature of $\Sigma$ can be expressed in terms of $A_\Sigma$ and $H_\Sigma$. From the definition of the second fundamental form and the shrinker equation, one obtains a corresponding identity for the derivatives of $A_\Sigma$. In what follows, we recall the Gaussian density $\rho$, defined in (3.2), which satisfies $\nabla\rho = -\tfrac{1}{2}\rho x$. An integration by parts, followed by the Bochner formula
\[ \tfrac{1}{2} \Delta_\Sigma |\nabla_\Sigma w|^2 = |\nabla^2_\Sigma w|^2 + \langle \nabla_\Sigma w, \nabla_\Sigma \Delta_\Sigma w \rangle + \operatorname{Ric}_\Sigma(\nabla_\Sigma w, \nabla_\Sigma w), \]
converts the relevant pairing into second-order and curvature terms. We can now estimate the evolution of $\|\nabla_\Sigma w\|_W^2$, using (5.10) and the definition of $L$ in (3.1). We claim that this implies (5.18), the asserted evolution inequality for $\|\nabla_\Sigma w\|_W^2$, with fixed $C_4$, after possibly choosing a more negative $\tau_0$. Indeed, in the immediately preceding expression, we use Cauchy-Schwarz on the last term to absorb the $\Delta_\Sigma w - \tfrac{1}{2} x \cdot \nabla_\Sigma w$ into the first term. The remainder of the first term is used, via (5.17), to dominate all $\nabla^2_\Sigma w$ terms in $E_w$, which we computed in (5.11); note that these terms have small coefficients for sufficiently negative $\tau$ by virtue of (5.9). This yields (5.18).
6. A family of smooth ancient rescaled flows
In this section we construct an $I$-dimensional family (recall, $I$ is as in (3.8)) of smooth ancient rescaled mean curvature flows that flow out of the fixed asymptotically conical shrinker $\Sigma^n \subset \mathbb{R}^{n+1}$ as $\tau \to -\infty$. Using the tools at our disposal, this is a straightforward adaptation of [CM19, Section 3]. For the convenience of the reader, we emphasize that this section is not used elsewhere in the paper and may be skipped on a first read; it is included purely because it is of independent interest.
Remark. The characterization of one-sided flows of Section 5 will apply to the flows constructed in this section. However, it is not immediately clear at this point of the paper that we have existence of one-sided flows. If $\Sigma$ were compact, this would be a simple consequence of the methods in this section, but asymptotically conical shrinkers present some challenges to the techniques used here. Rather than address those head on, we defer the issue of constructing one-sided flows until Section 7, where it will be dealt with using geometric measure theory. (It will be important later in the paper that our one-sided flows be continued through singularities, so a smooth construction would not suffice for our applications in any case.)

6.1. The nonlinear contraction. We continue to fix $\delta_0 \in (0, -\lambda_I)$, $\alpha \in (0,1)$. It will be convenient to also consider an auxiliary operator, introduced below.

Theorem 6.1. There exists $\mu_0 = \mu_0(\Sigma, \alpha, \delta_0)$ such that, for every $\mu \ge \mu_0$, there exists a corresponding $\varepsilon = \varepsilon(\Sigma, \alpha, \delta_0, \mu)$ with the following property: for every parameter $a$ with $|a| \le \varepsilon$, there is an ancient solution $\mathcal{S}(a)$ satisfying the weighted bound with $\|\cdot\|_{2,\alpha}$-norm at most $\mu |a|^2$ and the terminal condition $\Pi_{<0}(\mathcal{S}(a))(\cdot, 0) = \iota_-(a)(\cdot, 0)$.
7. Existence of a smooth ancient shrinker mean convex flow
In this section, we construct a smooth ancient shrinker mean convex flow on one side of an asymptotically conical shrinker Σ n ⊂ R n+1 . It would be possible to prove this more in the spirit of the previous section, but thanks to the uniqueness statement from Corollary 5.2, we can construct such a flow by any method that is convenient. As such, we use methods that will also apply to construct a (generalized) eternal flow which is smooth for very negative times. We will do so by modifying techniques used in [BW17b] to the present setting.
We fix a component $\Omega$ of $\mathbb{R}^{n+1} \setminus \Sigma$ and assume that the unit normal to $\Sigma$ points into $\Omega$. Note that by Colding-Minicozzi's classification of entropy stable shrinkers, [CM16, Theorems 0.17 and 9.36], asymptotically conical shrinkers are entropy unstable, and thus the first eigenvalue of the operator $L$ (see Lemma 3.1) satisfies $\lambda_1 < -1$.
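For context, recall the standard eigenfunction identities on a shrinker (a sketch of well-known computations in the style of [CM12a], not quoted from the source): the mean curvature and the translation functions satisfy
\[ L H_\Sigma = H_\Sigma, \qquad L(v \cdot \nu_\Sigma) = \tfrac{1}{2} (v \cdot \nu_\Sigma) \quad \text{for any fixed } v \in \mathbb{R}^{n+1}, \]
corresponding to eigenvalues $-1$ and $-\tfrac{1}{2}$ in the convention $L\varphi_i = -\lambda_i \varphi_i$. Entropy instability produces a strictly better (more negative) direction still, which is the content of $\lambda_1 < -1$ here.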
The surface $\Sigma_\varepsilon$ is strictly shrinker mean convex to the interior of $\Omega_\varepsilon$, in the sense that its shrinker mean curvature has a strict sign pointing into $\Omega_\varepsilon$. The following lemma is essentially [BW17b, Proposition 4.4]. Note that because $\Sigma_\varepsilon$ has uniformly bounded curvature (along with derivatives), the time interval for which [EH91] guarantees short-time existence is independent of $\varepsilon \to 0$.
The flow remains strictly shrinker mean convex, with a quantitative lower bound. We now begin the construction of an eternal weak flow that we will later prove to have the desired properties. Fix $R > 0$ so that for all $\varepsilon \in (0, \varepsilon_0)$ and $\rho \ge 1$, $\Sigma$ and $\Sigma_\varepsilon$ intersect $\partial B_{\rho R}$ transversely.
Proof. Let $\{\Sigma^a_{\varepsilon,\rho}\}_{a \in (-1,1)}$ denote a foliation of smooth surfaces close to $(\Sigma_\varepsilon \cap B_{\rho R}) \cup (\partial B_{\rho R} \cap \Omega_\varepsilon)$, chosen so that as $\rho \to \infty$, each $\Sigma^a_{\varepsilon,\rho}$ converges smoothly on compact sets to $\Sigma_\varepsilon$. For all but countably many $a$, the level set flow of $\Sigma^a_{\varepsilon,\rho}$ does not develop a space-time interior (i.e., does not fatten); see [Ilm94, 11.3]. Write $\Gamma^a_{\varepsilon,\rho}(t) := \{x : u(x,t) = a\}$ for the corresponding level set flow. We can arrange (after re-labeling $a$ and changing $u$ if necessary) that the level set flow of the pre-compact open set bounded by $\Sigma^a_{\varepsilon,\rho}$ is $\{x : u(x,t) > a\}$. On the other hand, for a.e. $a \in (-1,1)$, [Ilm94, 12.11] guarantees that (7.1) holds. The second equality is proven as in [Ilm94, 11.6(iii)], the third is (7.1), and the final equality follows from non-fattening of $\Gamma^a_{\varepsilon,\rho}$. This completes the proof. Note that we could have used the work of Evans-Spruck [ES95] instead of Ilmanen's approach [Ilm94] in the previous proof.
Lemma 7.4. There is $r_0 = r_0(\Sigma) > 0$ so that for $r > \tfrac{r_0}{2}$, we can take $\rho$ sufficiently large depending on $r$ to conclude that, in the corresponding space-time region, $\partial K_{\varepsilon,\rho}$ and $M_{\varepsilon,\rho}$ agree with the weak set flow and Brakke flow associated to the same smooth mean curvature flow of hypersurfaces. Moreover, there is $C = C(\Sigma) > 0$ independent of $r$ so that this flow has second fundamental form bounds.

We can now pass to a subsequential limit $\rho_i \to \infty$ to find a Brakke flow $M_\varepsilon$ (resp. weak set flow $K_\varepsilon$) with initial conditions $\mathcal{H}^n \llcorner \Sigma_\varepsilon$ (resp. $K_\varepsilon$, the closed region above $\Sigma_\varepsilon$; in other words, $K_\varepsilon$ is the unique closed set with $K_\varepsilon \subset \Omega$ and $\partial K_\varepsilon = \Sigma_\varepsilon$).
The monotonicity formula thus guarantees that X ∈ supp M ε . The other claim follows directly from the fact that K ε is closed.
Note that the smooth flows from Lemmas 7.2 and 7.6 agree when they are both defined, so naming this flow Σ ε (t) is not a serious abuse of notation.
Proof. The smoothness and curvature estimates follow by passing the curvature estimates in Lemma 7.4 to the limit along a diagonal sequence r → ∞. Since ∂K ε ⊂ supp M ε ⊂ K ε , we see that the smooth flows must agree. Finally, transverse intersection follows from [EH91, Theorem 2.1] applied to balls far out along Σ ε = Σ ε (−1).
In the region under consideration, both $\partial K_\varepsilon$ and $M_\varepsilon$ agree with the smooth mean curvature flow $\Sigma_\varepsilon(t)$ from Lemma 7.2.
Proof. Because $\Sigma_{\varepsilon,\rho_i}$ converge smoothly to $\Sigma_\varepsilon$ on compact sets, pseudolocality and interior estimates guarantee that for any $r > 0$, there is a uniform $\delta > 0$ so that, taking $i$ sufficiently large, one component of the flow restricted to the relevant region is a smooth mean curvature flow with uniformly bounded curvature (and similarly for the limit). Small spherical barriers show that, for $i$ large, no other component enters this region. As such, sending $i \to \infty$, we can pass the curvature estimates to the limit to find that $\partial K_\varepsilon \cap t^{-1}([-1, -1+\delta])$ (and similarly $M_\varepsilon$) are both smooth mean curvature flows with uniformly bounded curvature that agree with $\Sigma_\varepsilon$ at $t = -1$. The assertion thus follows from $\partial K_\varepsilon \subset \operatorname{supp} M_\varepsilon \subset K_\varepsilon$ as before, or alternatively from the uniqueness of smooth solutions to mean curvature flow with bounded curvature, [CY07, Theorem 1.1].
We define the parabolic dilation map
\[ F_\lambda : \mathbb{R}^{n+1} \times \mathbb{R} \to \mathbb{R}^{n+1} \times \mathbb{R}, \qquad F_\lambda(x, t) := (\lambda x, \lambda^2 t). \]
The following result is a consequence of Lemma 7.2 and relates the analytic property of shrinker mean convexity to the behavior of the flow under parabolic dilation. It is convenient to define $K^\lambda_\varepsilon := F_\lambda(K_\varepsilon)$ for all $\lambda \in (1, \lambda_0)$.
Proof. By Lemma 7.2, the family of hypersurfaces defined by $\lambda \mapsto \lambda \Sigma_\varepsilon(-\lambda^{-2})$ has normal speed given by the shrinker mean curvature quantity. This is strictly positive, which proves the first statement. Moreover, the speed is strictly bounded below on $B_r$, which proves the second statement.
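The chain-rule computation behind this is short; we record it as a sketch (sign conventions for $\nu$ and $H$ vary, so take the formula up to an overall sign). If $t \mapsto x(t)$ moves by mean curvature flow, $\partial_t x = -H\nu$, then the variation of $X(\lambda) := \lambda\, x(-\lambda^{-2})$ is
\[ \frac{dX}{d\lambda} = x(-\lambda^{-2}) + \lambda \cdot \frac{2}{\lambda^3} \, \partial_t x(-\lambda^{-2}) = \left( x - \frac{2}{\lambda^2} H \nu \right)\Big|_{t = -\lambda^{-2}}, \]
whose normal component is
\[ \frac{dX}{d\lambda} \cdot \nu = \left( x \cdot \nu + 2tH \right)\Big|_{t = -\lambda^{-2}}, \]
i.e., exactly the shrinker mean convexity quantity. Strict shrinker mean convexity of $\Sigma_\varepsilon(t)$ therefore makes $\lambda \mapsto \lambda \Sigma_\varepsilon(-\lambda^{-2})$ move with a definite sign.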
Below, we will write $K^1_\varepsilon$ (and similarly $M^1_\varepsilon$), as opposed to $K_\varepsilon$ and $M_\varepsilon$, when the time parameter has been restricted to $-1 \le t \le 1$.
Lemma 7.9. There is $r_1 = r_1(\Sigma) > r_0$ so that for any $\lambda \in (1, \lambda_0)$, $\lambda \Sigma_\varepsilon(\lambda^{-2} t) \setminus B_{r_1}$ can be written as the normal graph of a function $f_t$ defined on the end of $\Sigma_\varepsilon(t)$ for all $t \in [-1, 1]$. The function $f_t$ satisfies the estimates and the evolution equation stated below.

Proof. This follows from the argument in [BW17b, Proposition 4.4]. Indeed, we first observe that by taking $r_1$ sufficiently large, $\lambda \Sigma_\varepsilon(\lambda^{-2} t)$ and $\Sigma_\varepsilon(t)$ are locally graphs of some functions $u$, $u_\lambda$ over $B_{\eta|z|}(z) \subset T_z C$ for $\eta = \eta(\Sigma) > 0$ and $|z| > r_1$ sufficiently large. Differentiating the mean curvature flow equation as in [Wan14, Lemma 2.2] yields curvature estimates that prove that $f_t$ exists and satisfies the asserted estimates. Finally, the fact that $f_t$ satisfies the given equation follows by considering the quadratic error terms when linearizing the mean curvature flow equation; a similar argument can be found in [Ses08, Lemma 2.5].
Proposition 7.10. The support $\operatorname{supp} M^1_\varepsilon$ of the Brakke flow is disjoint from the scaled weak set flow $K^\lambda_\varepsilon$, for all $\lambda \in (1, \lambda_0)$.
Using Theorem C.3 (recall that, by [Ilm94, 10.5], the support of a Brakke flow is a weak set flow), we find that the flows remain disjoint up to the first possible contact time $T$. Now, observe that $\Sigma_\varepsilon(t)$ and $\lambda\Sigma_\varepsilon(\lambda^{-2} t)$ are smooth flows with the curvature estimates from Lemma 7.6, and the second is graphical over the first by Lemma 7.9 (with appropriate curvature estimates). Moreover, at $t = -1$, the two surfaces are disjoint, so the graphical function is initially positive. Now, we apply the Ecker-Huisken maximum principle [EH89], specifically the version in Theorem D.1 (which applies because the graphical function satisfies the PDE given in Lemma 7.9), to conclude that the graphical function remains non-negative for $t \in [-1, T + \eta]$ (over the flow $\Sigma_\varepsilon(t) \cap (\mathbb{R}^{n+1} \setminus B_{3\lambda_0 r_1})$). Now, the strong maximum principle implies that the graphical function is strictly positive in $\Sigma_\varepsilon(t) \cap (\mathbb{R}^{n+1} \setminus B_{3\lambda_0 r_1})$ for $t \in [-1, T + \eta]$. Applying Theorem C.3 again, we conclude that the flows remain disjoint past time $T$. This contradicts the choice of $T$.
Finally, we can repeat the same argument to show that the flows cannot make contact at t = 1. This completes the proof.
Proof. This follows from combining Proposition 7.10 with Lemma 7.5.
Intuitively, this corollary proves that $K^\lambda_\varepsilon$ lies inside of $K^1_\varepsilon$ (since it has moved away from its boundary). We make this intuition precise below. Write $B^\circ$ for the interior of a set $B$ and $B^c$ for its complement.

Proof. We will prove that $K_\varepsilon \cap t^{-1}((-\lambda^{-2}, \lambda^{-2}))$ is connected for any $\lambda \in (1, \lambda_0)$. By Lemma 7.6, the remainder of $K_\varepsilon$ is the space-time track of the region above $\Sigma_\varepsilon(t)$. Hence, if $K_\varepsilon \cap t^{-1}((-\lambda^{-2}, \lambda^{-2}))$ is disconnected, then there is an extra connected component $R$. Note that $R \cap t^{-1}((-\lambda^{-2}, -\lambda_0^{-2})) = \emptyset$ by Lemma 7.7. Thus, the component $R$ "appears from nowhere." This easily leads to a contradiction. Indeed, we have shown that there is a point $(x, t) \in R$ with minimal $t$-coordinate, and because $R$ is a closed connected component of $K_\varepsilon$, there is $r > 0$ so that $B_{2r}(x) \times \{t - r^2\}$ is disjoint from $K_\varepsilon$. This contradicts the avoidance property of $K_\varepsilon$.
This follows by combining Corollary 7.11 with Lemmas 7.12 and 7.13.
Proof. This follows by adapting the argument of [BW17b, Proposition 4.4] to the present setting (using Theorem C.3); as we have already given similar arguments in the proof of Proposition 7.10, we omit the details.
We now rescale the flow as ε → 0 to obtain an ancient solution. We consider F λ (K ε ) for ε small and λ large (the precise relationship to be quantified in (7.3) below) and consider this a weak set flow with initial condition λΣ ε × {−λ 2 }.
This can be iterated to show that the claimed inclusion holds for all $t < 0$. This proves the claim.
The relevant quantity is continuous in $\lambda$. Thus, for $i$ sufficiently large, we can choose $\lambda_i$ so that (7.3) holds. Taking a subsequential limit as $i \to \infty$, we find a weak set flow $K$ and Brakke flow $M$.
We summarize the basic properties of $(M, K)$ in the following proposition (Theorem 7.17); among them: (8) $\frac{1}{\sqrt{-t}} \Sigma(t)$ converges smoothly on compact sets to $\Sigma$ as $t \to -\infty$, and (9) there is a continuous function $R(t)$ so that, for any $t \in \mathbb{R}$, $\partial K(t) \setminus B_{R(t)}$ and $\operatorname{supp} M(t) \setminus B_{R(t)}$ are the same smooth, multiplicity-one, strictly shrinker mean convex flow, which we will denote by $\Sigma(t)$; moreover, there is $C > 0$ so that the curvature of $\Sigma(t)$ satisfies $|x| |A_{\Sigma(t)}| \le C$.
We now turn to (5). Consider $M_{-\infty}$, any tangent flow to $M$ at $t = -\infty$. We know that $M_{-\infty}$ exists and is the shrinking Brakke flow associated to an $F$-stationary varifold $V_{-\infty}$, thanks to the monotonicity formula and the entropy bound $\lambda(M) \le F(\Sigma)$. Lemma 7.15 implies that $\operatorname{supp} V_{-\infty} \subset \Omega$. By the Frankel property for self-shrinkers (cf. Theorem C.4), it must hold that $\Sigma \cap \operatorname{supp} V_{-\infty} \neq \emptyset$. By the strong maximum principle for stationary varifolds [SW89, Ilm96] (either result applies here because $\Sigma$ is smooth), there must exist a component of $\operatorname{supp} V_{-\infty}$ which is equal to $\Sigma$. By the constancy theorem (and the Frankel property again), we find that $V_{-\infty} = k \mathcal{H}^n \llcorner \Sigma$ for some integer $k \ge 1$. By the entropy bound in (2), $k = 1$. Thus, by Brakke's theorem (cf. [Whi05]), there is $T > 0$ large so that $M(t)$ is the multiplicity one Brakke flow associated to a smooth flow $\Sigma(t)$ (and $\frac{1}{\sqrt{-t}} \Sigma(t)$ converges smoothly on compact sets to $\Sigma$ as $t \to -\infty$). Since $\partial K \subset \operatorname{supp} M$, we see that $\partial K(t) = \Sigma(t)$ as well. This completes the proof of (5); note that we have proven (8) as well.
By Lemma 7.15, $\Sigma(t) \subset \sqrt{-t}\,\Omega$. Since $\sqrt{-t}\,\Sigma$ and $\Sigma(t)$ are both smooth (for $t < -T$), they cannot touch unless $\Sigma(t) = \sqrt{-t}\,\Sigma$ for all $t < -T$. This cannot happen by an argument along the lines of Lemma 7.16. This proves (6). Now, we note that (4') implies that $\Sigma(t)$ is weakly shrinker mean convex. By the strong maximum principle (see [Smo98, Proposition 4] for the evolution equation for the shrinker mean curvature), $\Sigma(t)$ is either a shrinker for all $t < -T$ or strictly shrinker mean convex. The first case cannot occur (by the argument used for (6)), proving (7). By Lemma 7.18, proven below, we know that for $t$ sufficiently negative, $\frac{1}{\sqrt{-t}} \Sigma(t)$ is an entire graph over $\Sigma$ of a function with small $\|\cdot\|^{(1)}_3$ norm. From this, we can use pseudolocality to prove (9) exactly as in [BW17b, Proposition 4.4(1)] (the exterior flow $M(t) \llcorner (\mathbb{R}^{n+1} \setminus B_{R(t)}) = \Sigma(t)$ is weakly shrinker mean convex by (4') and thus strictly so by the strong maximum principle).
Finally, we prove (4). Strict shrinker mean convexity of the exterior flow guarantees that for λ > 1, supp M and F λ (K) are disjoint outside of a set D in space-time which has D ∩ t −1 ([a, b]) compact for any a < b. Thus, we may apply Ilmanen's localized avoidance principle, Theorem C.3, to show that supp M and F λ (K) are indeed disjoint. Using (3) and (4'), this completes the proof of (4).
The following lemma was used above, and we will also use it again when proving uniqueness of ancient one-sided flows.
8. Long-time regularity of the flow
In this section, we analyze further the flow $(M, K)$ from Theorem 7.17. We must separate our analysis into three time regimes: $t < 0$, $t = 0$, and $t > 0$.
8.1. Regularity for t < 0. Here, we show that White's regularity theory [Whi00,Whi03] for mean-convex flows applies to the flow (M, K) for t < 0.
We define the rescaled flow $\tilde{K}$ (and analogously $\tilde{M}$) by
\[ \tilde{K}(\tau) := e^{\tau/2} \, K(-e^{-\tau}) \]
for $\tau \in (-\infty, \infty)$. It is easy to see that $\tilde{K}$ is still a closed subset of space-time. Indeed, it is the image of a closed set under the space-time diffeomorphism $(x, t) \mapsto ((-t)^{-1/2} x, -\log(-t))$. Let
\[ (8.1) \qquad T_h : (x, t) \mapsto (x, t + h) \]
denote the time-translation map.
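As a consistency check on this change of variables (our computation, using the substitution $\tau = -\log(-t)$ above): if $K(t) = \sqrt{-t}\,\Sigma$ is the self-similarly shrinking flow of $\Sigma$ itself, then
\[ \tilde{K}(\tau) = e^{\tau/2} \sqrt{e^{-\tau}}\, \Sigma = \Sigma \quad \text{for all } \tau, \]
so self-similarly shrinking solutions become fixed points of the rescaled flow.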
Proof. By definition, there is $r > 0$ sufficiently small so that $\tilde{M}(\tau) \llcorner B_r(x) = \mathcal{H}^n \llcorner M'(\tau)$ for $M'(\tau)$ a smooth rescaled mean curvature flow in $B_r(x)$. Thus, the claimed identity holds near $x$. This proves the first statement. The second statement follows from Lemma 8.1 and Proposition 8.2.
As such, we can (and will) unambiguously write $\partial\tilde{K}(\tau_0)$ for either of these sets.
Proof. It is clear that one of the two inclusions holds. Considering a small shrinking ball from a slightly earlier time, as in the proof of Proposition 8.2, we see that $(x, \tau) \in \tilde{K}^\circ$, a contradiction.

Recall that the $F$-area of a measure $\mu$ (with $\mu(B_r) \lesssim r^k$ for some $k > 0$) is
\[ F(\mu) := (4\pi)^{-\frac{n}{2}} \int e^{-\frac{|x|^2}{4}} \, d\mu(x) \]
(cf. Section 2.7). Set also $F(A) := F(\mathcal{H}^n \llcorner A)$ when it is defined. We have the following proposition, which is a straightforward modification of the corresponding result in the mean-convex case.
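As a sanity check on the normalization (our computation, not from the source): for a hyperplane $P \cong \mathbb{R}^n$ through the origin,
\[ F(P) = (4\pi)^{-\frac{n}{2}} \int_{\mathbb{R}^n} e^{-\frac{|x|^2}{4}} \, dx = (4\pi)^{-\frac{n}{2}} \left( \int_{-\infty}^{\infty} e^{-\frac{s^2}{4}} \, ds \right)^{n} = (4\pi)^{-\frac{n}{2}} (4\pi)^{\frac{n}{2}} = 1, \]
using $\int_{-\infty}^\infty e^{-s^2/4}\,ds = 2\sqrt{\pi}$. This is the usual normalization under which planes have $F$-area (and entropy) $1$.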
Proposition 8.6 (cf. [Whi00, Theorems 3.5, 3.8, and 3.9]). Suppose that $V$ is a locally $F$-area minimizing hypersurface (integral current) contained in $\Omega$ with boundary in $\tilde{K}(\tau)$. Then $V \subset \tilde{K}(\tau)$. In particular, $\partial\tilde{K}(\tau)$ has locally finite $\mathcal{H}^n$-measure, with the corresponding area bounds holding for any comparison region.

At this point, we have no guarantee that the Brakke flow $\tilde{M}$ has $\tilde{M}(\tau) = \mathcal{H}^n \llcorner \partial\tilde{K}(\tau)$ as in [Whi00, §5]. As such, we cannot immediately deduce regularity following [Whi00, Whi03]. Instead, we must use a continuity argument: consider the set $D$, in space-time, of points of Gaussian density at least $2$. By upper semi-continuity of density, it is clear that $D$ is closed. Moreover, by (5) in Theorem 7.17, it is clear that the projection of $D$ onto the $\tau$-axis is bounded from below, and the projection onto the $\mathbb{R}^{n+1}$-factor is bounded. As such, if $D$ is non-empty, we can choose an element $\bar{X} = (\bar{x}, \bar{\tau}) \in D$ with smallest possible $\bar{\tau}$-coordinate.
As usual, we can show that $\partial K' \subset \operatorname{supp} M' \subset K'$. On the other hand, by [Whi97, §9], almost every $X \in \operatorname{supp} M'$ has a tangent flow that is a static or quasi-static plane. By definition of $\bar{\tau}$, these must be static and multiplicity-one (by unit regularity). Thus, Corollary 8.3 implies that there must be points in the complement of $K'$ that are arbitrarily close to $X$, since $(M_i, K_i)$ converges smoothly near $X$. This implies that a dense subset of $\operatorname{supp} M'$ is contained in $\partial K'$. This completes the proof.
Lemma 8.8 ([Whi00, Theorem 7.2]). If $(M', K')$ is a static or quasi-static limit flow at $(x, \tau)$ with $\tau < \bar{\tau}$, then $M'$ is a stable minimal hypersurface whose singular set has Hausdorff dimension at most $n - 7$. In particular, a non-flat static or quasi-static limit flow cannot exist when $n < 7$.
From now on, we assume that n < 7.
Proposition 8.11. The set $D$ is empty. Moreover, for any limit flow $(M', K')$, we have that $\operatorname{supp} M' = \partial K'$ and there is $T \le \infty$ so that:
(1) $K'(t)$ is weakly convex for all $t$,
(2) $K'(t)$ has interior points if and only if $t < T$,
(3) $\partial K'(t)$ is smooth for $t < T$,
(4) $M'(t)$ is smooth and multiplicity one for $t < T$,
(5) $K'(t)$ is empty for $t > T$.
If $(M', K')$ is a tangent flow, then it is a multiplicity one generalized cylinder $S^{n-k} \times \mathbb{R}^k$.
Proof. By Lemma 8.1, Proposition 8.6, and Corollary 8.10, White's regularity theory [Whi00, Whi03] applies to the flow $(\tilde{M}, \tilde{K})$ for $\tau \le \bar{\tau}$ (see the remark after the next proof concerning the dimension restriction). Indeed, [Whi00, Corollary 8.5, Theorem 9.2, Theorem 12.3] imply that any static or quasi-static planar tangent cone at time $\tau \le \bar{\tau}$ has multiplicity one, and any static or quasi-static planar limit flow at points with time $\le \bar{\tau}$ has multiplicity one (this point uses $n < 7$). Then, as in [Whi03, Theorem 10], any tangent flow at $\bar{X}$ (a point in $D$ with smallest possible $\tau$-coordinate) must be a multiplicity-one generalized cylinder. However, this implies that $\Theta_{\tilde{M}}(\bar{X}) < 2$. This is a contradiction, so we see that $D = \emptyset$. Given this, White's regularity theory applies to $(\tilde{M}, \tilde{K})$ for all time, completing the proof.
We now summarize the above conclusions for the non-rescaled flow.
Corollary 8.12. The non-rescaled flows $(M, K)$ have the following properties for $t < 0$:
(1) $M(t) = \mathcal{H}^n \llcorner \partial K(t)$,
(2) $\operatorname{sing} M \cap \{t < 0\}$ has parabolic Hausdorff dimension $\le n - 1$ and, for $t < 0$, $\operatorname{sing} M(t)$ has spatial Hausdorff dimension $\le n - 1$,
(3) any limit flow at $X = (x, t)$ with $t < 0$ is weakly convex on the regular part and all tangent flows are multiplicity one generalized cylinders, and
(4) any singular point has a (strict) mean-convex neighborhood.
Proof. Everything but the last claim is proven above (in the rescaled setting). The last claim follows from the fact that all limit flows are convex, so [HW17] applies.

Remark. Observe that when $n < 7$, [Whi03] does not require an a priori bound for the quantity $G$ considered in e.g. [Whi03, Theorem 4]. It seems likely that our arguments here could be extended to $n \ge 7$, but some care would need to be taken when constructing the flows in Proposition 7.3. See [Whi15, HH18, EHIZ19] for techniques related to this issue.

8.2. Regularity at $t = 0$. We now turn to regularity near time $t = 0$.
For $A, B \subset \mathbb{R}^{n+1} \times \mathbb{R}$, subsets of space-time, we write $d(A, B)$ for the Euclidean distance between the two sets, i.e., the distance measured in $\mathbb{R}^{n+2}$. We emphasize that this differs from the usual parabolic distance between the sets. Note that the parabolic dilation map $F_\lambda$ does not act by isometries for this distance. We now consider the geometry of hypersurfaces in space-time swept out by a mean curvature flow.
Lemma 8.13. Let $M(t)$ be a smooth mean curvature flow with space-time track $M := \bigcup_t M(t) \times \{t\}$. Then, $M$ is a smooth hypersurface in space-time $\mathbb{R}^{n+1} \times \mathbb{R}$ with unit normal at $(x, t)$ given by
\[ \nu_M = \frac{(\nu_{M(t)}, H_{M(t)})}{\sqrt{1 + H_{M(t)}^2}} \]
(we emphasize that the unit normal is taken with respect to the Euclidean inner product on $\mathbb{R}^{n+1} \times \mathbb{R} = \mathbb{R}^{n+2}$). Moreover, the normal speed of $\lambda \mapsto F_\lambda(M)$ at $\lambda = 1$ is
\[ (8.2) \qquad \frac{x \cdot \nu_{M(t)} + 2t H_{M(t)}}{\sqrt{1 + H_{M(t)}^2}}. \]

Proof. The given unit vector is orthogonal to $T_x M(t) \times \{0\}$ and to the space-time velocity $(-H_{M(t)} \nu_{M(t)}, 1)$. This implies the expression for $\nu_M$. To prove (8.2), we may compute $\frac{d}{d\lambda} F_\lambda(x, t) \big|_{\lambda = 1} = (x, 2t)$ and take the inner product with $\nu_M$. This completes the proof.

Now, recall that by Theorem 7.17, there is a smooth flow $\Sigma(t)$ so that $\partial K(t)$ and $M(t)$ agree with $\Sigma(t)$ outside of $B_{R(t)}$ and on $\mathbb{R}^{n+1} \times (-\infty, -T)$. Choose $R_0$ sufficiently large so that $R_0 \ge R(t)$ for $t \in [-4T, 0]$ (we will take $R_0$ larger in (8.3) in Proposition 8.15 below). Then, define $S$ to be the relevant compact piece of the space-time track, used in the next lemma.

Lemma 8.14. There are $c = c(R_0, \Sigma) > 0$, $C = C(R_0, \Sigma) > 0$, and $\lambda_1 = \lambda_1(R_0, \Sigma) > 1$ so that the stated displacement bounds hold for $\lambda \in (1, \lambda_1)$.
Proof. It suffices to show a uniform positive lower bound for the normal speed (8.2) on $S$. This follows from (8.2) (and the compactness of $S$), since positivity of the shrinker mean curvature of $\Sigma(t)$ was established as (7) and (9) in Theorem 7.17.
for $\lambda \in (1, \lambda_1')$.

Proof. Given $r > 0$ large, we fix $R_0$ by requiring that (8.3) hold ($R(t)$ is defined in Theorem 7.17). This choice of $R_0$ will allow us to use Theorem C.3 below. We fix $c = c(R_0, \Sigma)$ as in Lemma 8.14 and will choose $c' \ll c$ below.
For $\lambda - 1 > 0$ sufficiently small, assume that the distance in question is positive (otherwise the assertion follows) and that the distance is achieved at a pair of space-time points with time offset $s$. In particular, $|s| \le \tfrac{c}{2}(\lambda - 1)$. Recalling the translation map $T_s$ defined in (8.1), observe that Lemma 8.14 and (8.4) imply a displacement bound for $T_s(F_\lambda(\partial K))$. Now, we consider the weak set flows $T_s(F_\lambda(\partial K))$ and $\partial K$ and apply Theorem C.3 with $a = -3T$, $b = t$, $R = 2R_0$, and $\gamma$ small to conclude that the two flows stay quantitatively apart (here and below, the implied constants in $\lesssim$ depend on $r$, $\Sigma$ but not on $\lambda$), where the distance $d_t$ is defined in (C.2). However, by Lemma 8.14, the opposite one-sided bound holds as well. Putting these inequalities together, we find the asserted estimate. This completes the proof.
Corollary 8.16. Choose $(x, t) \in \operatorname{reg} M \cap (B_r \times [-1, 0])$ and fix a space-time neighborhood $U$ of $(x, t)$ so that, in $U$, $M$ agrees with $t \mapsto \mathcal{H}^n \llcorner M(t)$ for a smooth mean curvature flow $M(t)$. Then, the normal speed (8.2) of $\lambda \mapsto F_\lambda(M)$ is bounded below by a uniform positive constant there.

Proof. Proposition 8.15 implies that the speed of $\lambda \mapsto \lambda M(\lambda^{-2} t)$ at $\lambda = 1$ has a uniformly positive lower bound. Thus, the conclusion follows from (8.2).
Proof (of Corollary 8.17). By (9) in Theorem 7.17, it suffices to prove this for points in $B_r$ for some $r > 0$ sufficiently large. Fixing such an $r$, Corollary 8.16 implies that there is $s > 0$ so that the speed bound holds on $\operatorname{reg} M \cap (B_r \times [-1, 0])$. Solving for $H$, we find that $|H| \le C$ on $\operatorname{reg} M \cap (B_r \times (-2\delta, 0))$ for some $\delta \in (0, 1)$ sufficiently small. However, by (3) in Corollary 8.12, any $X \in \operatorname{sing} M \cap \{t < 0\}$ has a multiplicity-one generalized cylinder as a tangent flow. In particular, there are points $X_i \in \operatorname{reg} M \cap \{t < 0\}$ with $X_i \to X$ and $H(X_i) \to \infty$. This contradicts the mean curvature bound, completing the proof.

Proof. By Corollary 8.17, we know that for $t \in (-\delta, 0)$ and $x \in \partial K(t)$, $|H_{\partial K(t)}(x)| \le C$. Thus, by Corollary 8.16, we conclude that, for $r$ chosen as in the proof of Corollary 8.17 and taking $\delta$ smaller if necessary, for $t \in (-\delta, 0)$ the hypersurface $\partial K(t)$ is strictly star-shaped in $B_r$, i.e., there is $c > 0$ so that $x \cdot \nu_{\partial K(t)} \ge c$ for $x \in \partial K(t) \cap B_r$. In particular, this implies that $\partial K(t)$ is locally uniformly graphical. Interior estimates [EH91, Theorem 3.1] then imply that the flow $\partial K(t)$ remains smooth and strictly star-shaped up to $t = 0$ (outside of $B_r$, the flow is automatically smooth and strictly star-shaped by (7) and (9) in Theorem 7.17).
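The "solving for $H$" step is elementary; here is one way it can go (our sketch, under the reading that Corollary 8.16 provides the lower bound $x \cdot \nu + 2tH \ge s\sqrt{1 + H^2}$ on the normal speed (8.2)). For $|x| \le r$ and $|t| \le 2\delta$ with $\delta \le s/8$,
\[ s \sqrt{1 + H^2} \le x \cdot \nu + 2tH \le r + 4\delta |H| \le r + \tfrac{s}{2} |H|, \]
and since $s\sqrt{1 + H^2} \ge s|H|$, we conclude $\tfrac{s}{2}|H| \le r$, i.e., $|H| \le 2r/s$ on the region in question.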
8.3. Regularity for $t > 0$. Using $\operatorname{sing} M(0) = \emptyset$ and (9) from Theorem 7.17, there is some $\bar{\delta} > 0$ so that $M(t) = \mathcal{H}^n \llcorner \partial K(t)$ is smooth for $t \in [0, \bar{\delta})$. We can now consider the rescaled flows $\tilde{K}$ and $\tilde{M}$, similarly defined, exactly as in the $t < 0$ situation. The only difference is that the flow is moving outwards rather than inwards, so the corresponding monotonicity holds for $h < 0$ (cf. Lemma 8.1). This does not seriously affect the arguments used above, and we find that Corollary 8.12 holds for $t > 0$ as well.
8.4. Long time asymptotics. We continue to use our notation from the $t > 0$ regularity section. Moreover, we denote by $C_\Sigma$ the asymptotic cone of the asymptotically conical shrinker. We will also need to consider the integral unit-regular Brakke flows $t \in [0, \infty) \mapsto \mu^\pm(t)$, constructed in Theorem E.2, whose supports agree with the inner and outer flows $M^\pm(t)$ of $C_\Sigma$. They can be used to prove:

Lemma 8.19. For all $t \ge 0$, $\partial K(t)$ is disjoint from the level set flow of $C_\Sigma$.
Proof. Using that (µ ± (t)) t≥0 is smooth with unit multiplicity outside of B √ tR 0 (0) for some R 0 > 0, the Ecker-Huisken Maximum Principle (Theorem D.1), and Ilmanen's localized avoidance principle (Theorem C.3), together with a continuity argument like the one used in the proof of Theorem 7.17, we find that ∂K(t) is disjoint from M ± (t) for all t ≥ 0. This implies the claim.
This allows us to characterize the convergence of the rescaled flow as $\tau \to \infty$. We assume that $M(t)$ lies outside the outer flow $M^+(t)$ of the level set flow of $C_\Sigma$.
Theorem 8.20. The rescaled flow $\tilde{M}(\tau)$ converges smoothly as $\tau \to \infty$ to an expander $E$, which is smoothly asymptotic to $C_\Sigma$ and minimizes the expander functional (8.5).

Proof. Since $\tau \in (0, \infty) \mapsto \tilde{M}(\tau)$ is expander mean convex, and is smooth with uniform control on all derivatives outside of $B_{R_0}(0)$, it follows from the arguments in [Whi00, §11] that $\tilde{M}(\tau)$ converges smoothly to an outward minimizing minimal surface $E$ in the expander metric $g = e^{\frac{|x|^2}{2n}} g_{\mathbb{R}^{n+1}}$. This yields the claimed regularity and the smoothness of the convergence. Note that any blow down of the flow $t \in [0, \infty) \mapsto M(t)$ lies inside the level set flow of $C_\Sigma$, so $E$ has to be smoothly asymptotic to $C_\Sigma$. By Lemma 8.19, the flow $t \mapsto \sqrt{t} E$ has to agree with the outer flow of $C_\Sigma$.
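The correspondence between expanders and minimal surfaces in the metric $g = e^{|x|^2/2n} g_{\mathbb{R}^{n+1}}$ is a standard conformal computation; we record a sketch (ours; sign conventions for $H$ and $\nu$ vary). For a conformal change $\tilde{g} = e^{2\phi} g$, the mean curvature of a hypersurface transforms as
\[ \tilde{H} = e^{-\phi} \left( H + n \, \partial_\nu \phi \right). \]
With $\phi = \frac{|x|^2}{4n}$ (so that $e^{2\phi} = e^{|x|^2/2n}$) one has $\partial_\nu \phi = \frac{x \cdot \nu}{2n}$, whence
\[ \tilde{H} = e^{-\phi} \left( H + \tfrac{1}{2} x \cdot \nu \right), \]
so $\tilde{g}$-minimality is exactly the expander equation $H = -\tfrac{1}{2} x \cdot \nu$ (up to the sign convention), and the weighted area being minimized is $\int e^{n\phi} \, d\mathcal{H}^n = \int e^{|x|^2/4} \, d\mathcal{H}^n$, i.e., the expander functional (8.5).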
8.5. The outermost flows of general hypercones. We consider, for $n < 7$, a general embedded, smooth hypersurface $\Gamma \subset S^n$ and the regular hypercone $C(\Gamma) \subset \mathbb{R}^{n+1}$. We show in this subsection that the previous arguments can be generalized to characterize the outer and inner flows of the level set flow of $C(\Gamma)$, as in Theorem 8.20. Note that $\Gamma$ divides $S^n$ into two open sets $S^\pm$. We can construct smooth hypersurfaces $M^\pm$ which are smooth radial graphs over $S^\pm$, smoothly asymptotic to $C(\Gamma)$, with sufficiently fast decay so that $x \cdot \nu_{M^\pm}$ (with $\nu_{M^\pm}$ the upwards unit normal) decays to zero at infinity along $M^\pm$. Let $(M^\pm(t))_{t \in [0, T^\pm)}$ be the maximal smooth evolutions of $M^\pm$. Note that by the maximum principle of Ecker-Huisken [EH89], together with the strong maximum principle, we have that $2t H_{M^\pm(t)} + x \cdot \nu_{M^\pm(t)} > 0$ along $(M^\pm(t))_{t \in (0, T^\pm)}$. We can thus repeat the arguments in Section 8.3 to construct expander mean convex flows $(M^\pm(t))_{t > 0}$ such that the corresponding rescaled flows converge to expanders, smoothly asymptotic to $C(\Gamma)$. This implies:

Theorem 8.21. The outermost flows of $C(\Gamma)$ are given by expanding solutions $t \in (0, \infty) \mapsto \sqrt{t} E^\pm$ smoothly asymptotic to $C(\Gamma)$. The expanders $E^\pm$ minimize the expander energy (8.5) from the outside (relative to compact perturbations) and are smooth.
See also the notes of Ilmanen [Ilm95b] for the proof of smoothness in the case $n = 2$. Furthermore, by an argument of Ilmanen-White [Ilm95b], any such outermost expander has genus zero.
9. Uniqueness and regularity of one-sided ancient Brakke flows
We now combine the three regimes considered above with Theorem 7.17 to conclude the following existence and regularity for the flow (M, K).
Theorem 9.1 (One-sided existence). For $n \le 6$ and $\Sigma^n$ a smooth asymptotically conical self-shrinker, choose $\Omega$ a fixed component of $\mathbb{R}^{n+1} \setminus \Sigma$. Then, there exists an ancient unit-regular integral Brakke flow $M$ and weak set flow $K$ with the following properties:
(1) $M(t) = \mathcal{H}^n \llcorner \partial K(t)$,
(2) $\partial K(t) \subset \sqrt{-t}\,\Omega$ for all $t < 0$,
(3) there is $T > 0$ so that for $t < -T$, $M(t)$ is a smooth multiplicity one flow $\Sigma(t)$, with $\Sigma(t)$ strictly shrinker mean convex,
(4) $\frac{1}{\sqrt{-t}} \Sigma(t)$ converges smoothly on compact sets to $\Sigma$ as $t \to -\infty$,
(5) there is a continuous function $R(t)$ so that, for any $t \in \mathbb{R}$, $M(t) \llcorner (\mathbb{R}^{n+1} \setminus B_{R(t)})$ is a smooth strictly shrinker mean convex multiplicity one flow $\Sigma(t)$,
(6) the Brakke flow $M$ has entropy $\lambda(M) \le F(\Sigma)$,
(7) $\operatorname{sing} M$ has parabolic Hausdorff dimension $\le n - 1$ and, for any $t \in \mathbb{R}$, $\operatorname{sing} M(t)$ has spatial Hausdorff dimension $\le n - 1$,
(8) any limit flow is weakly convex on the regular part and all tangent flows are multiplicity one generalized cylinders,
(9) any singular point has a strictly mean-convex neighborhood,
(10) there is $\delta > 0$ so that $\partial K(t)$ is completely smooth for $t \in (-\delta, \delta)$ and $\partial K(0)$ is strictly star-shaped, and
(11) $\frac{1}{\sqrt{t}} \partial K(t)$ converges smoothly on compact sets to an outermost expander coming out of the cone at infinity of $\Sigma$, as $t \to \infty$.

Now, we will combine Theorem 9.1 with Corollary 5.2 to prove uniqueness of the flow constructed above.
Theorem 9.2 (One-sided uniqueness). For $n \le 6$, fix $\Sigma^n$ a smooth asymptotically conical self-shrinker as in Theorem 9.1. Let $(\mu_t)_{-\infty < t < \infty}$ be a unit-regular integral Brakke flow lying on one side of $\Sigma$ with Gaussian density no larger than twice the entropy of $\Sigma$. Then, after a time translation, $\mu_t$ coincides with the Brakke flow from Theorem 9.1.
Proof. As in the proof of (5) in Theorem 7.17, the Gaussian density bound guarantees that the tangent flow to $\mu_t$ at $-\infty$ is the multiplicity one shrinker associated to $\Sigma$. As such, Lemma 7.18 and Corollary 5.2 imply that, after a time-translation, there is $T > 0$ so that for $t \le -T$, $\mu_t = \mathcal{H}^n \llcorner \Sigma(t)$, where $\Sigma(t)$ is the smooth flow from Theorem 9.1 (4). As in Proposition 7.10, combining Ilmanen's localized avoidance principle (Theorem C.3) with Ecker-Huisken's maximum principle at infinity (Theorem D.1), we see that $\operatorname{supp} \mu_t$ is disjoint from $F_\lambda(\operatorname{supp} M)$ for $\lambda \neq 1$. This implies that $\operatorname{supp} \mu_t \subset \operatorname{supp} M$.
Finally, since reg M is connected by (9) in Theorem 9.1 and Corollary F.5, we see that µ t = M(t) in (sing M) c (using the unit-regularity of M and µ t ). This completes the proof.
Remark. Both Theorems 9.1 and 9.2 clearly hold (with simpler proofs) in the case that Σ is a smooth compact shrinker.
Remark. We expect that the dimensional restriction in Theorems 9.1 and 9.2 can be removed (cf. [Whi15, HH18, EHIZ19]). We note that when Σ has sufficiently small F -area, Theorems 9.1 and 9.2 hold in all dimensions. See §10 for a precise statement.
10. Generic mean curvature flow of low entropy hypersurfaces
We recall the following notions from [BW18d]. Denote by $S_n$ the set of smooth self-shrinkers in $\mathbb{R}^{n+1}$ and $S^*_n$ the non-flat elements. Let $S_n(\Lambda) := \{\Sigma \in S_n : \lambda(\Sigma) < \Lambda\}$ and similarly for $S^*_n(\Lambda)$. Let $RMC_n$ denote the set of regular minimal cones in $\mathbb{R}^{n+1}$ and define $RMC^*_n$, $RMC_n(\Lambda)$, $RMC^*_n(\Lambda)$ analogously. We now recall the two "low-entropy" conditions $(\star_{n,\Lambda})$ and $(\star\star_{n,\Lambda})$ from [BW18d]. Given these definitions, we can state the following result; its conclusion is that there is $X \in \mathbb{R}^{n+1} \times \mathbb{R}$ so that $\operatorname{sing} M' = \{X\}$ and so that any tangent flow at $X$ is a round shrinking $S^n$.
We will prove this below. Note that $(\star_{3,\lambda_2})$ holds by the known classification of low-entropy shrinkers in $\mathbb{R}^3$. We also note that Theorems 9.1 and 9.2 hold in all dimensions under the assumption that $(\star_{n,\Lambda})$ holds and $F(\Sigma) \le \Lambda$. Indeed, the dimension restriction in Theorems 9.1 and 9.2 arises due to the use of [Whi03], where it is used to rule out static cones as limit flows of a mean-convex flow (cf. [Whi03, Theorem 4]). However, in the low-entropy setting, static cones cannot occur as limit flows by assumption $(\star\star_{n,\Lambda})$ (cf. [BW18d, Lemma 3.1]), even without assuming mean-convexity.

Lemma 10.2 (cf. [BW18d, Proposition 3.2]). For $n \ge 3$ and $\Lambda \in (\lambda_n, \lambda_{n-1}]$, assume that $(\star_{n,\Lambda})$ and $(\star\star_{n,\Lambda})$ hold. If $M$ is a unit-regular integral Brakke flow with $\lambda(M) \le \Lambda$, then any tangent flow to $M$ is the multiplicity one shrinker associated to a smooth shrinker that is either (i) compact and diffeomorphic to $S^n$ or (ii) smoothly asymptotically conical.
Proof. Assume there are $M_j$ (and associated smooth asymptotically conical shrinkers $\Sigma_j$) and $X_j$ with $|X_j| = 1$ witnessing the failure of the claim. Up to a subsequence, we can use Lemma 10.3 to find a Brakke flow $M_\infty$ so that $M_\infty = \mathcal{H}^n \llcorner \sqrt{-t}\,\Sigma_\infty$ for $t < 0$, where $\Sigma_\infty$ is a non-flat smooth asymptotically conical shrinker, and $X$ with $|X| = 1$ and $\Theta_{M_\infty}(X) \ge F(\Sigma_\infty)$. Parabolic cone-splitting (cf. [Whi97] and [CHN13, p. 840-1]) implies that either $\Sigma_\infty$ splits off a line or it is static or quasi-static. This is a contradiction, completing the proof.
Lemma 10.5. For integral unit-regular Brakke flows $M_i$, $M$, suppose that $X_i \in \operatorname{sing} M_i$ has $M_i \rightharpoonup M$ and $X_i \to X \in \operatorname{sing} M$. Suppose that some tangent flow to $M$ at $X$ is a round shrinking sphere with multiplicity one, $t \mapsto \mathcal{H}^n \llcorner S^n_{\sqrt{-2nt}}$. Then, for $i$ sufficiently large, any tangent flow to $M_i$ at $X_i$ is a round shrinking sphere.
Proof. Assume that $X = (0, 0)$. For any $r > 0$, there is $\eta > 0$ so that the flow, restricted to the corresponding parabolic region, is a smooth, strictly convex mean curvature flow (without spatial boundary). Thus, for $i$ sufficiently large, the same restriction of $M_i$ is a smooth, strictly convex mean curvature flow. Taking $r$ sufficiently large, this completes the proof (using, e.g., [Hui84]).

Passing to a subsequence, we can assume that $M_i \rightharpoonup M \in F(s)$, with the singular points $X_i$ converging to some $X$. The fact that $X \in \operatorname{sing}_{gen} M$ follows from Lemma 10.5. We now rescale around $X$ so that we can apply Theorem 9.2. Note that $\operatorname{supp} M_i$, $\operatorname{supp} M$ are all pairwise disjoint, since their initial conditions are compact pairwise disjoint hypersurfaces. We will repeatedly pass to subsequences without relabeling in the following. Rescale $M_i$ around $X$ by $|X_i - X| \neq 0$ to $\tilde{M}_i$ and assume that $\tilde{M}_i \rightharpoonup \tilde{M}$. Similarly, rescale $M$ around $X$ by $|X_i - X|$ to $\bar{M}_i$ and assume that $\bar{M}_i \rightharpoonup \bar{M}$. Since $\bar{M}$ is a tangent flow to $M$ at $X \in \operatorname{sing}_{gen} M$, by Lemma 10.2 there is a smooth shrinker $\Sigma^n \subset \mathbb{R}^{n+1}$ that is either compact or asymptotically conical so that $\bar{M}(t) = \sqrt{-t}\,\Sigma$ for $t < 0$. Finally, assume that, after rescaling, the corresponding densities converge. Sending $r \to 0$ shows that $\lambda(\tilde{M}) \le \Theta_M(X)$.
Consider a tangent flow to $\tilde{M}$ at $-\infty$. Since $\lambda(\tilde{M}) \le \Lambda$, Lemma 10.2 implies that any such tangent flow is the shrinking flow associated to a smooth shrinker $\tilde{\Sigma}$. We claim that $\tilde{\Sigma} = \Sigma$ and that $\tilde{M}$ lies (weakly) on one side of the shrinking flow associated to $\Sigma$. Indeed, by the Frankel property for self-shrinkers (Corollary C.4), there is $x \in \tilde{\Sigma} \cap \Sigma$. Because $\Sigma$, $\tilde{\Sigma}$ have multiplicity one, we can find regions in $\tilde{M}_i$, $\bar{M}_i$ that are (after a common rescaling) smooth graphs over connected regions in $\tilde{\Sigma}$ and $\Sigma$ containing $x$. Because $\operatorname{supp} M_i$ and $\operatorname{supp} M$ are disjoint, it must hold that $\tilde{\Sigma} = \Sigma$. Applying Lemma 7.18 (and the maximum principle), we can find a sequence of times $t_i \to -\infty$ so that either $\tilde{M}(t_i) = \mathcal{H}^n \llcorner \sqrt{-t_i}\,\Sigma$, or $\tilde{M}(t_i)$ is a smooth graph over $\sqrt{-t_i}\,\Sigma$ of a nowhere vanishing function. In the first case, we see that $\tilde{M}(t) = \mathcal{H}^n \llcorner \sqrt{-t}\,\Sigma$ for all $t < 0$ (cf. the proof of Lemma 7.16), while in the second case, we see that $\operatorname{supp} \tilde{M}(t)$ is disjoint from $\sqrt{-t}\,\Sigma$ for all $t < 0$ (by Ilmanen's localized avoidance and the Ecker-Huisken maximum principle, as in Lemma 7.15). We claim that the second case cannot occur. Indeed, Theorems 9.1 and 9.2 (and $\Lambda \le \lambda_{n-1}$) imply (since $\lambda(\tilde{M}) \le F(\Sigma)$) that $\operatorname{sing} \tilde{M} = \operatorname{sing}_{gen} \tilde{M}$, so Lemma 10.5 implies that $\tilde{X}_i \in \operatorname{sing}_{gen} \tilde{M}_i$ for $i$ sufficiently large. This is a contradiction, so the first case (i.e., $\tilde{M}(t) = \mathcal{H}^n \llcorner \sqrt{-t}\,\Sigma$ for $t < 0$) must hold. Now, we can apply Lemma 10.4 to $\tilde{M}$ and $\tilde{X}$ (the limit of the rescaled points $\tilde{X}_i$; note that $|\tilde{X}| = 1$) to conclude the desired statement. This completes the proof.
Using this, we can prove the existence of generic flows.

Proof. We first claim that $G$ is open. Let $M_j$ denote integral unit-regular Brakke flows starting at $M_j$ with non-round tangent flows at $X_j$. Passing to a subsequence, $M_j \rightharpoonup M$, an integral unit-regular Brakke flow starting from $M$. By Lemma 10.5, a further subsequence has $X_j \to X \in \operatorname{sing} M$ with $M$ having a non-round tangent flow at $X$. This shows $G$ is open. Finally, the density of $G$ follows from Theorem 10.7.
11. The first non-generic time for flows in $\mathbb{R}^3$

In this section, we will study the mean curvature flow of a generic initial surface in $\mathbb{R}^3$. We will remove the low-entropy assumption considered in the previous section and study the possible singularities that generically arise.
For M an integral unit-regular Brakke flow, define T gen to be the supremum of times T so that at any point X ∈ supp M with t(X) < T , all tangent flows at X are multiplicity-one spheres, cylinders, or planes.
Theorem 11.1. Suppose that $M \subset \mathbb{R}^3$ is a closed embedded surface of genus $g$. Then, there exist arbitrarily small $C^\infty$ graphs $M'$ over $M$ and corresponding cyclic integral unit-regular Brakke flows $M'$ with $M'(0) = \mathcal{H}^2 \llcorner M'$, so that either:
(1) $T_{gen}(M') = \infty$, or
(2) there is $x \in \mathbb{R}^3$ so that some tangent flow to $M'$ at $(x, T_{gen}(M'))$ is $k \mathcal{H}^2 \llcorner \sqrt{-t}\,\Sigma$ for $\Sigma$ a smooth shrinker of genus at most $g$, and either $k \ge 2$ or $\Sigma$ has a cylindrical end but $\Sigma$ is not a cylinder.
We will prove this below. Note that Theorem 11.1 yields the following conditional result. Recall that the list of lowest entropy shrinkers is known to begin with the plane, the sphere, and then the cylinder, by [Whi05, CIMW13, BW17b]. Suppose that there is $\Lambda_g \in (\lambda_1, 2]$ so that any smooth shrinker $\Sigma \subset \mathbb{R}^3$ with $\operatorname{genus}(\Sigma) \le g$ and $F(\Sigma) < \Lambda_g$ is either a plane, a sphere, a cylinder, or has no cylindrical ends. Then:

Corollary 11.2. If $M \subset \mathbb{R}^3$ is a closed embedded surface with $\operatorname{genus}(M) \le g$ and $\lambda(M) \le \Lambda_g$, then there are arbitrarily small $C^\infty$ graphs $M'$ over $M$ and cyclic integral unit-regular Brakke flows $M'$ with $M'(0) = \mathcal{H}^2 \llcorner M'$ so that $M'$ has only multiplicity-one spherical or cylindrical tangent flows, i.e., $T_{gen}(M') = \infty$.
We now establish certain preliminary results used in the proof of Theorem 11.1. The proof of Theorem 11.1 can be found after the statement of Proposition 11.4. We define sing gen (M) ⊂ sing(M) as the set of singular points so that one tangent flow (and thus all of them by [CIM15]; alternatively, this follows from [Sch14,CM15] or [BW18d]) is a multiplicity-one shrinking sphere or cylinder.
First, we note the following result establishing regularity of tangent flows at T gen (see also the proof of [CHH18, Theorem 1.2]).
Proposition 11.3. Let $\Sigma$ be the shrinker associated to a tangent flow of $M$ at $(x, T_{gen}(M))$. Then $\Sigma$ has finitely many ends, each of which is either asymptotically conical or cylindrical (with multiplicity one), and if $(x, T_{gen}(M)) \in \operatorname{sing}(M) \setminus \operatorname{sing}_{gen}(M)$, then $\operatorname{genus}(\Sigma) \ge 1$.

We also define
\[ \operatorname{genus}_{T_{gen}}(M) := \lim_{t \nearrow T_{gen}(M)} \operatorname{genus}(M(t)), \]
the genus of $M$ right before the first non-generic singular time. This notion will be useful in the following proposition, which will be the key mechanism used to perturb away asymptotically conical (and compact, non-spherical) singularities.
Proposition 11.4. Suppose that M ⊂ R 3 is a closed embedded surface of genus g and M is a cyclic integral unit-regular Brakke flow with M(0) = H 2 ⌊M . Assume that T gen (M) < ∞ and that any tangent flow at time T gen (M) has multiplicity one and that there is no non-cylindrical tangent flow at time T gen (M) with a cylindrical end.
Before proving this, we will show that it implies the full genericity result. Either $M_1$ satisfies the desired conditions, or Proposition 11.4 applies to $M_1$; at this point, we can iterate. In the former case, we can conclude the proof, and in the latter case we find a small $C^\infty$ perturbation $M_2$ of $M$ with a Brakke flow as above. Repeating this process $k$ times, we find that $\operatorname{genus}_{T_{gen}}$ of the $k$-th flow is at most $\operatorname{genus}(M) - k$. By Proposition 11.3, it must eventually hold that the $k$-th surface and flow satisfy one of the two desired conclusions (1) or (2) for some $k \le \operatorname{genus}(M)$. Thus, after at most $\operatorname{genus}(M)$ perturbations, we find the desired $M'$ and flow. This completes the proof.
The proof of Proposition 11.4 will depend on the following lemmata.

Lemma 11.6. With $U_1$, $U_2$ the space-time regions fixed below, the restrictions $M_1$, $M_2$ of the flow are smooth mean curvature flows. Moreover, any $(x, t) \in \operatorname{supp} M \cap (U_1 \cup U_2)$ satisfies (11.1).

Proof. Observe that by Proposition 11.3 and the given hypothesis, any tangent flow at $(x_0, T_{gen}(M))$ is associated to a smooth multiplicity-one shrinker that is either compact or asymptotically conical. (While we do not need to refer to uniqueness of the tangent flow in this proof, it does indeed hold in this setting, by [Sch14] for compact tangent flows and [CS19] for asymptotically conical ones.)
We begin by proving that the smoothness assertion holds for $M_1$ for any $r, \tau > 0$ small. Indeed, suppose there are singular points $X_i := (x_i, T_{gen}(M) - t_i) \to (x_0, T_{gen}(M))$ with $t_i \ge 0$. Rescaling around $(x_0, T_{gen}(M))$ to ensure that the $X_i$ are at unit distance from the space-time origin, we would find a singular point in a tangent flow to $M$ at $(x_0, T_{gen}(M))$ lying in the parabolic hemisphere of unit-distance points with $t \le 0$. However, no such point in the tangent flow can be singular (since such a flow would not be asymptotically conical).
We now consider (11.1) for points in $\operatorname{supp} M \cap U_1$. Note that by the smoothness of $M_1$, all such points are smooth points of $M$. We claim that there is $\rho > 0$ sufficiently large so that (11.1) holds in $\operatorname{supp} M \cap U_1$, after shrinking $r, \tau > 0$ if necessary. Choose $(x_0 + x, T_{gen}(M) - t) \in \operatorname{supp} M$ with $(x, t) \to (0, 0)$ and $0 < \rho^2 t < |x|^2$, but so that $|x^\perp| \ge \tfrac{1}{10}|x|$. Rescaling around $(x_0, T_{gen}(M))$ and passing to the limit, we find a tangent flow to $M$ at $(x_0, T_{gen}(M))$ with associated shrinker $\Sigma_\rho$ so that the bound is violated at some $x_\rho \in \Sigma_\rho$ with $|x_\rho| \ge \rho$ and $|x_\rho^\perp| \ge \tfrac{1}{10}|x_\rho|$. However, this will be in contradiction to [CM12b], Proposition 11.3, and the fact that the set of tangent flows is compact. (Alternatively, one may argue as follows: by [CS19], $\Sigma_\rho$ is independent of $\rho$, which immediately yields a contradiction since for any fixed asymptotically conical shrinker, $|x^\perp| \le o(1)|x|$ as $x \to \infty$.) Indeed, consider Brakke flows $M_\rho$ associated to $\Sigma_\rho$. We consider the point $(\rho^{-1} x_\rho, -\rho^{-2})$ and take a subsequential limit of $M_\rho$ to find $\bar{M}$, a shrinking flow associated to an asymptotically conical shrinker $\bar{\Sigma}$; however, the subsequential limit $(\bar{x}, 0)$ of the space-time points $(\rho^{-1} x_\rho, -\rho^{-2})$ lies on the asymptotic cone of $\bar{\Sigma}$ (and is not at the origin) and thus has $\bar{x}^\perp = 0$. This is a contradiction, completing the proof.
Finally, we prove both the smoothness of $M_2$ and (11.1) for points in $\operatorname{supp} M \cap U_2$. If some tangent flow to $M$ at $(x_0, T_{gen}(M))$ is compact, then by considering shrinking spherical barriers, we can choose $r, \tau > 0$ so that $M_2$ is empty. As such, we can assume that there is an asymptotically conical shrinker $\Sigma'$ associated to some tangent flow $M'$ of $M$ at $(x_0, T_{gen}(M))$. Because $\Sigma'$ is asymptotically conical, $|x||A_{\Sigma'}(x)| = O(1)$ and $|x^\perp| \le o(1)|x|$ as $x \to \infty$. Arguing as in [CS19, Lemma 9.1], we can use pseudolocality (e.g., [INS19, Theorem 1.5]) on large balls along the end of $\Sigma'$ to find $R > 0$ sufficiently large so that the required smoothness and (11.1) hold outside $B_R$.

We claim that $T_{gen}(M_i) > T_{gen}(M_\infty)$ for sufficiently large $i$. If not, we can pass to a subsequence so that
\[ (11.2) \qquad T_{gen}(M_i) \le T_{gen}(M_\infty). \]
We claim that this leads to a contradiction using the strategy of proof from Proposition 10.6. Choose X i ∈ (sing(M i ) \ sing gen (M i )) ∩ {t = T gen (M i )}, and let X i → X ∞ . By (11.2) and Proposition 11.3 any tangent flow to M ∞ at X ∞ is associated to a multiplicity one smooth shrinker with all ends (if any) asymptotically conical (note that X ∞ cannot have a multiplicity-one cylindrical or spherical tangent flow by Lemma 11.5). In particular, Theorems 9.1 and 9.2 apply to the shrinkers associated to any tangent flow to M ∞ at X ∞ .
After rescaling by $|X_i - X_\infty| \neq 0$, the flows $M_i$ converge either to a flow on one side of the tangent flow to $M_\infty$ at $X_\infty$, or to a flow which agrees with a tangent flow to $M_\infty$ for $t < 0$, by (11.2). In the first case, the limit has only multiplicity one cylindrical and spherical singularities, by Theorems 9.1 and 9.2. This contradicts the choice of $X_i$ by Lemma 11.5. On the other hand, the second case cannot occur, because (11.2) would imply that some tangent flow to $M_\infty$ has a singularity at $(x, t)$ with $|(x, t)| = 1$ and $t \le 0$, contradicting Proposition 11.3 and the assumption that no non-cylindrical tangent flow to $M_\infty$ at $T_{gen}(M_\infty)$ has cylindrical ends. Thus, for $i$ sufficiently large, $T_{gen}(M_i) > T_{gen}(M_\infty)$.

It remains to prove the strict genus reduction. Let us briefly sketch the idea for the reader's convenience. By the work of Brendle [Bre16], every non-generic singularity that occurs at time $T_{gen}(M)$ has to have positive genus. Lemma 11.6 will be used to show that this positive genus is captured at the tangent flow scale of our non-generic singularities. Our understanding of the long-time behavior of flows on one side of a non-generic shrinker (Theorems 9.1, 9.2) and Lemma 11.6 again will then imply that, near the non-generic singularities of $M_\infty$, the one-sided flows $M_i$ will experience strict genus reduction. The result will follow by a localization of the well-known genus monotonicity property of mean curvature flow, given in Appendix G.
We assume the setup above. Fix the corresponding parameters $r$, $\rho$, $\tau$ as in Lemma 11.6. Define the scales $d_i$ (measured in spatial, i.e., Euclidean, distance); we can pass to a subsequence so that, as $i \to \infty$, $M_{\infty,i}$ converges to a tangent flow to $M_\infty$ at $(0, T_{gen}(M_\infty))$ and $\tilde{M}_i$ converges to the ancient one-sided flow described in Theorems 9.1 and 9.2 associated to this tangent flow.
We begin by proving the following two claims, which imply that the perturbed flows $M_i$ lose genus locally around the relevant points.
Proof of Claim (A). By Proposition 11.3, any tangent flow to $M_\infty$ at $(0, T_{gen}(M_\infty))$ has multiplicity one and positive genus. Thus, by Lemma 11.6, we can take $\bar{\tau}$ sufficiently small so that $M_\infty(T_{gen}(M_\infty) - t) \llcorner B_{4r}(0)$ is smooth and has positive genus for $t \in [\bar{\tau}/2, 3\bar{\tau}]$. (Here, the restriction to a ball $B$ of a time-$t$ slice of a Brakke flow $M$, i.e., $M(t) \llcorner B$, is said to be smooth if it agrees with $\mathcal{H}^2 \llcorner S$ for a smooth surface $S \subset B$.) Combined with Brakke's theorem [Whi05] and another application of Lemma 11.6 (specifically, (11.1) on $\operatorname{supp} M \cap U_1$), the remaining assertions follow.
Proof of Claim (B). We have fixed a tangent flow to $M_\infty$ and the associated one-sided flow from Theorem 9.1. Let $\delta > 0$ denote the interval of regularity around $t = 0$ for the one-sided flow, as described in property (10) of Theorem 9.1. We thus define $\bar{\varepsilon}(i) := \delta \bar{d}_i^{\,2}$; this will ensure that, when rescaling by $\bar{d}_i$, we are considering a short enough time interval to apply (10) in Theorem 9.1.
We first show that, for $i$ sufficiently large, $M_i(T_{gen}(M_\infty) - t) \llcorner B_{3r}(0)$ is smooth for all $t \in [0, \bar{\varepsilon}(i))$. Suppose, instead, that there were some $y_i$, $t_i$ violating this, as in (11.4). Since the flows converge as Brakke flows (for $i \to \infty$), it follows by Lemma 11.6 (specifically, the smoothness of $M_1$) that $y_i \to 0$ as $i \to \infty$.
On the other hand, by the definition of $\bar{\varepsilon}(i)$ and (10) in Theorem 9.1, the rescaled one-sided flows are smooth on the corresponding region. In particular, rescaling $M_i$ by $\bar{d}_i$ around $(0, T_{gen}(M_\infty))$, the flow converges to some flow $\hat{M}_\infty$. By (11.5), we have that $(0, 0) \in \operatorname{supp} \hat{M}_\infty$. Thus, we have that for $t < 0$, $\hat{M}_\infty$ agrees with a tangent flow to $M_\infty$ at $(0, T_{gen}(M_\infty))$. This is a contradiction, since Proposition 11.3 implies that $\hat{M}_\infty \llcorner ((\mathbb{R}^{n+1} \times (-\infty, 0]) \setminus \{(0, 0)\})$ is smooth. Thus, no points $y_i$ as in (11.4) will exist. This completes the proof of the regularity assertion.

We finally prove that for $t_i \in [0, \bar{\varepsilon}(i))$, the surface $M_i(T_{gen}(M_\infty) - t_i) \llcorner B_{3r}(0)$ has genus zero for $i$ large. We show below that, for some $R > 0$ sufficiently large (independent of $i$), for any $i$ large and any point as in (11.6), we have $|x^\perp| \le \tfrac{1}{5}|x|$. This follows from essentially the same scaling argument as above. Indeed, consider a sequence of points $y_i$ and times $t_i$ violating this bound while still satisfying (11.6) (we will choose $R > 0$ large below). Rescaling $M_i$ around $(0, T_{gen}(M_\infty))$ by $\bar{d}_i$, we claim that it now must hold that (11.7) is satisfied. Indeed, if this fails, we can argue precisely as in the previous paragraph, rescaling by $\tilde{d}_i$, to find a tangent flow to $M_\infty$ at $(0, T_{gen}(M_\infty))$; the points $(y_i, t_i)$ converge, after rescaling, to a point on the tangent flow at $t = 0$ (at a unit distance from $0$). Clearly the cone satisfies the asserted bound, so this is a contradiction. Thus, (11.7) holds. In particular, the points $(y_i, -t_i)$ remain a bounded distance from $(0, 0)$ when rescaling by $\bar{d}_i$ (but lie outside of $B_R(0) \times \mathbb{R}$). It is easy to see that we can take $R > 0$ large so that the one-sided flow from Theorem 9.1 (scaled to have unit distance from $(0, 0)$) satisfies $|x^\perp| \le \tfrac{1}{10}|x|$ for $(x, t)$ with $|x| \ge R$ and $|t| < \delta$. Putting these facts together, we have proven the claim, since after rescaling by $\bar{d}_i$, the flow $M_i(T_{gen}(M_\infty) - t) \llcorner (B_{3r}(0) \setminus B_r(0))$ is a smooth flow of annuli (possibly having several connected components), intersecting $\partial B_{r'}(0)$ transversely for $r \le r' \le 3r$; this follows from Lemma 11.6 and the fact that $M_i \rightharpoonup M_\infty$ as Brakke flows. Choose times $\bar{t}_1 < \bar{t}_2$ accordingly. It thus remains to establish (11.8). We will show this by combining the properties above with a localization of White's [Whi95] topological monotonicity, which we have included in Appendix G. Define $B := B_{2r}(0)$.
The key observation, which makes Appendix G applicable, is that, by property (4) above, the level set flow for times in $[\bar{t}_1, \bar{t}_2]$ of $M'(\bar{t}_1) \times \{\bar{t}_1\}$ (which must agree with the restriction of $M'$) is a simple flow (defined in Appendix G) in the tubular neighborhood $U$ of $\partial B$. We can thus apply the results of that appendix with $[\bar{t}_1, \bar{t}_2]$ in place of $[0, T]$, and $\mathbb{R}^3 \setminus \bar{B}$ in place of $\Omega$. (Certainly, we can and will also apply White's global topological monotonicity results.) We invite the reader to recall the notation $W[\bar{t}_1, \bar{t}_2]$, $W[\bar{t}]$ from Appendix G, which we will make use of here. Choose loops $\gamma^{\bar{t}_2}_1, \dots, \gamma^{\bar{t}_2}_{2g}$ in $W[\bar{t}_2]$ as in Lemma 11.7 below. That is, $\{[\gamma^{\bar{t}_2}_1], \dots, [\gamma^{\bar{t}_2}_{2g}]\}$ is linearly independent and each $\gamma^{\bar{t}_2}_i$ satisfies either:
• $\gamma^{\bar{t}_2}_i$ is contained in $B$ or in $\bar{B}^c$, or
• there is a component of $W[\bar{t}_2] \cap \partial B$ that has non-zero mod-2 intersection with $\gamma^{\bar{t}_2}_i$, and zero mod-2 intersection with each previous $\gamma^{\bar{t}_2}_j$, $j < i$.
Moreover, the classes of those loops contained in $B$ are linearly independent in $H_1(W[\bar{t}_2] \cap \bar{B})$ too. We now construct loops $\gamma^{\bar{t}_1}_1, \dots, \gamma^{\bar{t}_1}_{2g}$ so that:
• Each $\gamma^{\bar{t}_2}_i$ is homotopic to $\gamma^{\bar{t}_1}_i$ in $W[\bar{t}_1, \bar{t}_2]$; see [Whi95, Theorem 5.4].
• If $\gamma^{\bar{t}_2}_i$ is entirely contained in $\bar{B}^c$, then so is $\gamma^{\bar{t}_1}_i$ and the entire homotopy between them; see Theorem G.3.
• If $\gamma^{\bar{t}_2}_i$ is not contained in $B$ or in $\bar{B}^c$, then there is a component of $W[\bar{t}_1] \cap \partial B$ that has non-zero mod-2 intersection with $\gamma^{\bar{t}_1}_i$, and zero mod-2 intersection with each previous $\gamma^{\bar{t}_1}_j$, $j < i$; this follows from the simplicity of the flow in $U \times [\bar{t}_1, \bar{t}_2]$ and the fact that mod-2 intersection is preserved under homotopy.

We can now easily complete the proof. If (11.8) were false, then $\operatorname{genus}(M'(\bar{t}_1)) = g$ by White's global topological monotonicity [Whi95]. Applying Lemma 11.8 to $\gamma^{\bar{t}_1}_1, \dots, \gamma^{\bar{t}_1}_{2g}$ now shows that, because $\operatorname{genus}(M'(\bar{t}_1) \cap B) > 0$ by property (2) above, at least one of the $\gamma^{\bar{t}_1}_i$ must be contained in $B$, a contradiction.

Lemma 11.7. Suppose that $S \subset \mathbb{R}^3$ is a closed and embedded genus-$g$ surface which is transverse to a sphere $\partial B \subset \mathbb{R}^3$. Denote $W := \mathbb{R}^3 \setminus S$.
We can find loops $\gamma_1, \dots, \gamma_{2g}$ inside $W$ so that $\{[\gamma_1], \dots, [\gamma_{2g}]\} \subset H_1(W) \approx \mathbb{Z}^{2g}$ is linearly independent and so that, for every $i = 1, \dots, 2g$, either:
• $\gamma_i$ is contained in $B$ or in $\bar{B}^c$, or
• there is a component $U_i$ of $\partial B \setminus S$ that has non-zero mod-2 intersection with $\gamma_i$, and zero mod-2 intersection with each previous $\gamma_j$, $j < i$.
Moreover, we can arrange that exactly $2\operatorname{genus}(S \cap B)$ of the $\gamma_i$ are contained entirely in $B$ and that if, in $H_1(W \cap \bar{B})$,
\[ \sum_{i : \gamma_i \subset B} n_i [\gamma_i] = [\beta] \]
for some cycle $\beta \subset W \cap \partial B$, then all of the coefficients $n_i$ vanish.
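The identification $H_1(W) \approx \mathbb{Z}^{2g}$ used here is a standard consequence of Alexander duality; we sketch it (our remark, not part of the source's proof):
\[ \tilde{H}_1(S^3 \setminus S) \cong \tilde{H}^1(S) \cong \mathbb{Z}^{2g}, \]
since $S$ is a closed orientable genus-$g$ surface, and removing the point at infinity does not change $H_1$ (a point has codimension $3$), so $H_1(\mathbb{R}^3 \setminus S) \cong H_1(S^3 \setminus S) \cong \mathbb{Z}^{2g}$.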
Proof. We induct on the number of components b of S ∩ ∂B.
First, consider $b = 0$. In this case, $S$ decomposes into the disjoint union of two closed surfaces, $S_B := S \cap B$ and $S_{\bar{B}^c} := S \setminus \bar{B}$, which do not meet $\partial B$. We have $\operatorname{genus}(S_B) + \operatorname{genus}(S_{\bar{B}^c}) = g$, so by applying Alexander duality we find a linearly independent set of $2g$ classes, with $2\operatorname{genus}(S_B)$ of the $\gamma_i$ contained in $B$ and the remaining $2\operatorname{genus}(S_{\bar{B}^c})$ contained in $\bar{B}^c$. Moreover, $W \cap \partial B = \partial B$ when $b = 0$. Therefore, if a linear combination of the $\gamma_i \subset B$ is homologous to a cycle in $W \cap \partial B$, then the combination must equal $0 \in H_1(W)$ (since $H_1(\partial B) = 0$). This completes the base case.

Now, we consider the inductive step. Consider the $b$ components of $S \cap \partial B$. By the Jordan curve theorem, each component of $S \cap \partial B$ divides $\partial B$ into two regions. As such, we can find a component $\alpha$ of $S \cap \partial B$ so that there is a disk $D \subset \partial B$ with $\partial D = \alpha$ and $S \cap D^\circ = \emptyset$. Form the surface $S'$ by removing an annulus $A = U_{\varepsilon/10}(\alpha) \subset S$ and then gluing in two disks, which are small deformations of $D$ pushed into and out of $B$ respectively, to cap off the boundary of $S \setminus A$. We can arrange that this all occurs in $U_\varepsilon(D) \subset \mathbb{R}^3$ (with $\varepsilon > 0$ small enough so that $U_\varepsilon(D)$ is contractible).
The surface $S'$ now satisfies the inductive hypothesis, since $S' \cap \partial B$ has $b - 1$ components. Note that, by definition,
\[ (11.9) \qquad \operatorname{genus}(S' \cap B) = \operatorname{genus}(S \cap B), \]
although the genus of $S'$ might differ from that of $S$, as we will see below.
There are two cases to consider: either α separates the component of S that contains it, or it doesn't separate it.
Separating case. Suppose that $\alpha$ separates the component of $S$ that contains it. It will be convenient to give a name to this component, so let us denote it $S_\alpha$. In this case, $S_\alpha \setminus A$ is a disconnected surface with boundary. Hence, $\operatorname{genus}(S_\alpha) = \operatorname{genus}(S_\alpha \setminus A)$, so $\operatorname{genus}(S') = g$. Applying the inductive step to $S'$ (which has $b - 1 < b$ boundary circles), we find a linearly independent set of loops $\gamma'_1, \dots, \gamma'_{2g}$ in $\mathbb{R}^3 \setminus S'$ satisfying the conditions of the lemma with $S'$ in place of $S$. Those $\gamma'_i$ not contained in $B$ or in $\bar{B}^c$ have associated components $U'_i \subset \partial B \setminus S'$ with the required mod-2 intersection properties, per the inductive step.
Note that we can assume that the loops $\gamma'_1, \dots, \gamma'_{2g}$ are disjoint from $U_\varepsilon(D)$. As such, they lie in $W$, so to prove the inductive step we can simply take $\gamma_i := \gamma'_i$ for $i = 1, \dots, 2g$. For any $\gamma_i$ that is not contained in $B$ or in $\bar{B}^c$, we set $U_i := U'_i \setminus D$ or $U_i := U'_i$, depending on whether $D \subset U'_i$ or not (respectively). We claim this configuration of $\gamma_1, \dots, \gamma_{2g}$ satisfies the properties we want. Note that the two bullet points are just a consequence of how our curves are disjoint from $U_\varepsilon(D)$, and that $2\operatorname{genus}(S \cap B)$ of the $\gamma_i$ are contained in $B$ in view of (11.9) and the inductive step. It remains to check the two required homological properties.
By construction and the inductive step, for any $\gamma_i$ not contained entirely in $B$ or $\bar{B}^c$, there is a component $U'_i \subset \partial B \setminus S'$ that has non-zero mod-2 intersection with $\gamma_i$ and zero mod-2 intersection with each previous $\gamma_j$, $j < i$. Proceeding from large indices to small, this implies that any $\gamma_i$ not contained entirely in $B$ or in $\bar{B}^c$ has $n_i = 0$ in (11.10). The Mayer-Vietoris sequence for $(W \cap \bar{B}, W \cap B^c)$ yields an exact sequence through $H_1(W \cap \partial B)$. Let $I_B$ denote the indices $i$ so that $\gamma_i \subset B$, and similarly for $I_{\bar{B}^c}$. Consider the combination $\sum_{i \in I_B} n_i [\gamma_i] + \sum_{i \in I_{\bar{B}^c}} n_i [\gamma_i]$. Seeing as we are assuming this is sent to $0 \in H_1(W)$, exactness yields a $[\beta] \in H_1(W \cap \partial B)$ mapping onto it. We have already seen above, though, that $n_i = 0$ for all $i \in I_B$, since $\beta$ is a cycle in $\partial B \setminus S$. Thus $[\beta] = 0$ in $H_1(W \cap \bar{B})$. Arguing as above, we can replace $\beta$ by $\beta'$ (which has no component in $D$) representing the same class. Using Mayer-Vietoris as above with $S$ replaced by $S'$, we find that the analogous combination vanishes. The inductive step implies that the $n_i$ all vanish. This completes the proof in the separating case.
Nonseparating case. We turn to the case where $\alpha$ does not separate the component of $S$ that contains it. We continue to denote that component of $S$ by $S_\alpha$. Observe that $\operatorname{genus}(S_\alpha \setminus A) + 1 = \operatorname{genus}(S_\alpha)$, so $\operatorname{genus}(S') = g - 1$. We apply the inductive step to $S'$ (which has $b - 1 < b$ boundary circles) to find a linearly independent set $\{[\gamma'_1], \dots, [\gamma'_{2g-2}]\} \subset H_1(\mathbb{R}^3 \setminus S')$ satisfying the conditions of the lemma with $S'$ in place of $S$. For every $\gamma'_i$ that is not contained in $B$ or in $\bar{B}^c$, there exists a component $U'_i \subset \partial B \setminus S'$ with the mod-2 intersection properties postulated by the inductive step.
As in the previous case, we can assume that the cycles are disjoint from $U_\varepsilon(D)$, and thus lie in $W$. So, we may take $\gamma_1 := \gamma'_1, \dots, \gamma_{2g-2} := \gamma'_{2g-2}$, and, as before, set $U_i := U'_i \setminus D$ or $U_i := U'_i$, depending on whether $D \subset U'_i$ or not (respectively). We further define $\gamma_{2g-1} \subset \bar{B}^c$ to be $\alpha$ shifted slightly into the non-compact component of $\mathbb{R}^3 \setminus (S_\alpha \cup \bar{B})$. Finally, we define $\gamma_{2g}$ to be a loop in the compact component enclosed by $S_\alpha$ with the property that $\gamma_{2g}$ intersects the disk $D$ transversely and in precisely one point (it is easy to find such a curve thanks to the non-separating hypothesis); we take $U_{2g} := D^\circ$.
We claim that the loops γ_1, . . . , γ_{2g} satisfy the assertions of the lemma. The two bullet points are easily checked from the construction of γ_{2g−1}, γ_{2g} and the assumption that the curves obtained via the inductive step avoid U_ε(D). The other two claims in the assertion follow by essentially the same argument as in the separating case.
This completes the proof.
Lemma 11.8. Suppose that S ⊂ R^3 is a closed and embedded genus-g surface which is transverse to a sphere ∂B ⊂ R^3. Denote W := R^3 \ S.
Assume that we are given {[γ_1], . . . , [γ_{2g}]} ⊂ H_1(W) ≅ Z^{2g} which is linearly independent and where each γ_i satisfies one of the following conditions:
• γ_i is contained in B or in B̄^c, or
• there is a component U_i of ∂B \ S that has non-zero mod-2 intersection with γ_i and zero mod-2 intersection with each previous γ_j, j < i.
Then, at least one of the γ_i is contained in B, provided genus(S ∩ B) > 0. (We do not need this here, but with minor modifications one can show that at least 2 genus(S ∩ B) of the curves γ_i are contained entirely in B.)
Proof. Note that, since genus(S ∩ B) > 0, Lemma 11.7 implies (among other things) that there is η ⊂ B \ S so that [η] ≠ 0 in H_1(W) and so that for any m ∈ Z \ {0}, mη is not homologous in B̄ \ S to a cycle in ∂B \ S. Now, assume that none of the γ_i described above are contained in B. We claim that {[γ_1], . . . , [γ_{2g}], [η]} is a linearly independent set. This is impossible, so we will have proven the lemma. To this end, assume that there are coefficients n_1, . . . , n_{2g}, m ∈ Z so that n_1[γ_1] + · · · + n_{2g}[γ_{2g}] + m[η] = 0 in H_1(W). As in Lemma 11.7, by working downwards from i = 2g and considering the intersection of each γ_i with appropriate components of W ∩ ∂B, using the U_i's, we can show that n_i = 0 unless γ_i is contained entirely in B̄^c. As in the proof of Lemma 11.7, applying Mayer-Vietoris to the pair (W ∩ B̄, W ∩ B̄^c), we find that mη must be homologous in B̄ \ S to a cycle in ∂B \ S. This contradicts the above choice of η unless m = 0; but in this case the remaining relation contradicts the linear independence of the [γ_i]. This completes the proof.
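To spell out the dimension count behind this step: H_1(W) ≅ Z^{2g} is free abelian of rank 2g, so any linearly independent subset has at most 2g elements, whereas the claimed set {[γ_1], . . . , [γ_{2g}], [η]} has 2g + 1 elements. Hence its linear independence is indeed impossible, which is exactly the contradiction the proof exploits.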
Appendix A. Geometry of asymptotically conical shrinkers
Consider a shrinker Σ^n ⊂ R^{n+1} that is asymptotic to a smooth cone C. In [CS19, Lemma 2.3], it was shown that the end of Σ can be written as a normal graph over the cone: there is a function w : C \ B_R(0) → R, decaying as r → ∞, such that x ↦ x + w(x)ν_C(x) parametrizes the end of Σ. Here, r = |x| is the radial coordinate on the cone. The sharp asymptotics of w (which we need in this paper) are, in fact:
Lemma A.1. The function w above satisfies ∇^{(k)}_C w = O(r^{−1−k}) as r → ∞.
Proof. We prove this for k = 1; higher derivatives follow by induction. The shrinker equation along Σ yields a system of relations between w and the geometry of C. By combining these equations we arrive at (A.1). We have used the fact that A_C(∂_r, ·) ≡ 0, as well as that Id − wA_C is an endomorphism of TC and ν_C ⊥ TC. Observe that the leading term dominates, while the other terms decay at a faster rate. For x = rp with p ∈ Γ, the link of C, choose a vector ϑ ∈ T_pΓ. Extend ϑ to be parallel along γ : r ↦ rp. Note that [rϑ, ∂_r] = 0, so by (A.1) we find a differential relation for ∇_C w along γ. Integrating this from infinity gives the claimed decay as r → ∞. Here, F : x ↦ x + w(x)ν_C(x) parametrizes the end of Σ over C.
Corollary A.3. The second fundamental form of Σ satisfies, for k ≥ 0, decay estimates of the corresponding order, where x^T is the projection of the ambient position vector x ∈ Σ to T_xΣ.
We also have the following variant. Thus, (B.1) implies an estimate with constant C = C(n, α, ε, λ, Λ). By scaling down to parabolic balls B_r × [−r^2, 0] and also recentering in space and time, we obtain a rescaled estimate whose error term is just the second term of the right-hand side of (B.3). We now apply the absorption lemma due to L. Simon [Sim97, Lemma, p. 398] to the monotone subadditive function defined by these seminorms, with scaling exponent 2 + α. (Note that this monotone subadditive function extends trivially to convex sets.) By L. Simon's absorption lemma, we can choose ε small enough, depending on n, α, such that the absorbed estimate holds with C′ = C′(n, α, λ, Λ). This yields (B.2): the first summand of the left-hand side is obtained by interpolation, and the second by reusing the parabolic PDE.
Appendix C. Ilmanen's localized avoidance principle
In this section we will give a proof of Ilmanen's localized avoidance principle for mean curvature flow. The proof is a parabolic version of the barrier principle and of moving around barriers in [Ilm96].
Let Ω be an open subset of R^{n+1} × R, and let Γ ⊂ R^{n+1} × R be relatively closed in Ω. We call Γ a barrier (resp. strict barrier) for mean curvature flow in Ω provided that, for every smooth open set E ⊂ Ω \ Γ and for every (x, t) ∈ ∂E ∩ Γ, we have
(C.1) f(x, t) ≤ H_{∂E(t)}(x, t) (resp. strict inequality),
where H_{∂E(t)} is the mean curvature vector of ∂E(t), ν(x, t) is the inward normal of ∂E(t) at x, H_{∂E(t)}(x, t) = H_{∂E(t)} · ν(x, t), and fν is the normal speed of the evolution t ↦ ∂E(t) in a neighborhood of (x, t).
Let W ⊂ R^{n+1} × R be open and let u : W → R be smooth, positive, bounded and such that u vanishes on ∂W(t) for all t ∈ t(W). For p, q ∈ W(t), define the distance
(C.2) d_t(p, q) := inf { ∫_γ u(γ(s), t)^{−1} ds : γ is a curve joining p, q in W(t) }.
We assume that, for each t ∈ t(W), the distance d_t is complete. We use the standard convention that inf ∅ = ∞. Note that d_t is just the distance in the (complete) conformally Euclidean metric g_t := u(·, t)^{−2} g_{R^{n+1}}. More generally, we can consider the distance between two closed sets in W_t defined in the usual way. For U ⊂ W, define U_r := {(p, t) ∈ U : d_t(p, W(t) \ U(t)) > r}. Define the degenerate second order elliptic operator K, where S ranges over all n-dimensional subspaces of R^{n+1}.
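The display defining this operator is not reproduced above; as a working assumption (consistent with the surviving description, though not a quotation of the original), one may keep in mind the standard choice
(Ku)(x, t) := inf_S tr_S (D^2 u(x, t)),
the infimum over all n-dimensional subspaces S ⊂ R^{n+1} of the trace of the spatial Hessian of u restricted to S. Being an infimum of linear second-order operators, it is degenerate elliptic, as stated.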
Lemma C.1. Suppose that W \ U is a barrier in W and u : W → R is as above, with u_t − Ku ≤ 0 (resp. < 0).
Then W \ U r is a barrier (resp. a strict barrier) in W .
Proof. Let E ⋐ W be a smooth open set with E ⊂ U_r and (x, t) ∈ ∂E ∩ (W \ U_r).
We have to show that (C.1) holds. Define F. Then F is compact, F ⋐ U, and ∂F meets ∂U. For τ close to t, let x(τ) be the normal evolution of x along τ ↦ ∂E(τ) such that x(t) = x. Let γ(τ) be the shortest g_τ-geodesic from ∂E(τ) to ∂U(τ), with endpoints x(τ) ∈ ∂E(τ) and y(τ) ∈ ∂F(τ) ∩ ∂U(τ). The normal exponential map of ∂E(t) with respect to g_t has no focal points along γ(t) \ {y(t)}. Note that this also holds for the exponential map of ∂E(τ) with respect to g_τ in a spacetime neighborhood of γ \ {y}. Therefore, in a spacetime neighborhood of γ \ {y}, the equidistant evolution τ ↦ ∂E^s(τ) is smooth and smoothly varying. We denote x(τ, s) = γ(τ, s) and write f_τ ν_{∂E^s(τ)} for the normal velocity of the evolution τ ↦ ∂E^s(τ) in R^{n+1}. Furthermore, note that the g_τ-length of γ(τ, ·) satisfies ℓ_{g_τ}(γ(τ, [0, s])) = s; differentiating the last equation in s yields (C.4). Combining (C.3) and (C.4), we see that ψ := f − H satisfies a differential inequality (C.5) along γ(t). We first assume that y(t) is not a focal point of the exponential map of ∂E(t). This implies that F is locally smooth around y and ∇_{∂F} t(y, t) = 0. If ψ(0) > 0, then (C.5) implies that ψ(r) > 0, which gives a contradiction to the assumption that W \ U is a barrier. If ψ(0) ≥ 0 and u_t − Ku < 0, then likewise ψ(r) > 0, which again yields a contradiction, proving that W \ U_r is a strict barrier.
If the normal exponential map of ∂E(t) focuses at y(t), then we may approximate E by E′ ⊂ E such that E′ ∩ ∂U_r = {x}, y is not a focal point, and such that in the above argument we can replace E by E′.
Here H_{∂E(t)}(x, t) = H_{∂E(t)} · ν_{∂E(t)}, where ν_{∂E(t)} is the inward pointing unit normal of ∂E(t). We can furthermore assume that ∂E ∩ M = {(x_0, t_0)}. For small r > 0, (C.6) implies that ∂E(t) is C^2-close to an n-dimensional plane for all t ∈ [t_0 − r^2, t_0]. We can thus solve mean curvature flow S = (S(t))_{t ∈ [t_0 − r^2, t_0]} with the induced parabolic boundary data; M is a barrier for S from one side, in view of (C.7). Thus S has to run into M, contradicting that M is a weak set flow. Thus, (C.6) fails, and the result follows.
Theorem C.3 (Ilmanen). Consider two closed weak set flows M, M′ in R^{n+1} and constants satisfying R > 0, γ > 0, a < b < a + (R^2 − γ^2)/(2n). Assume that M(t) and M′(t) are disjoint within the comparison region for t ∈ [a, b). Then, using this choice of R and x_0 along with t_0 = a and α = 0 in (C.8), we have that t ↦ d_t(M(t), M′(t)) is non-decreasing for t ∈ [a, b). Before proving Theorem C.3, let us indicate how we plan to apply it. If M(a), M′(a) are disjoint and one knows a priori that the flows stay away from the boundary of the comparison region for t ∈ [a, b], then Theorem C.3 and a straightforward continuity argument imply that they remain disjoint on [a, b]. In other words, if the two weak set flows are disjoint near the boundary of the comparison region, then they remain disjoint.
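Assuming the constant in the theorem is (R^2 − γ^2)/(2n), it has a natural reading as the lifetime bound of a shrinking-sphere barrier: a round sphere of radius R(t) in R^{n+1} flows by
(d/dt) R(t) = −n/R(t), i.e., R(t) = sqrt(R^2 − 2n(t − a)) for R(a) = R,
so R(t) ≥ γ holds exactly while t ≤ a + (R^2 − γ^2)/(2n); this is the window on which such a sphere can be used for comparison.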
Appendix E. Weak set flows of cones
For this appendix, the reader might find it useful to recall the notions set forth in Section 2. We collect results of [HW17] on weak set flows and outermost flows and show that they are also applicable (with minor modifications) to the flow of hypercones.
Proposition E.1 ([HW17, Proposition A.3]). Suppose that F is any closed subset of R^{n+1}, and let M ⊂ R^{n+1} × R_+ be its level set flow. In what follows, we consider F to be the closure of its interior in R^{n+1} and to satisfy the stated conditions; we call such a set F admissible. Let F′ := F^c, denote the level set flows of F, F′ by M, M′, and set F(t) := M(t), F′(t) := M′(t), in line with Proposition E.1. Let Γ ⊂ S^n denote a fixed smooth, embedded, closed hypersurface. Consider the equidistant deformations (Γ_s)_{−ε<s<ε} of Γ ⊂ S^n for some consistent choice of normal orientation. We further consider the regular hypercone C = C(Γ) and the smooth perturbations C_s = C(Γ_s). Note that C_s divides R^{n+1} into two open sets Ω^±_s such that C_s = ∂Ω^±_s, as well as C(Γ) ∩ Ω^+_s = ∅ for s > 0 and C(Γ) ∩ Ω^−_s = ∅ for s < 0. We now consider Σ_{s,r} := ∂(Ω^+_s \ B_r(0)) for 0 < r < 1 and s > 0, and we denote by Σ̃_{s,r} a smoothing of Σ_{s,r} that rounds off the corners near ∂B_r(0). Similarly we set Σ′_{s,r}, for 0 < r < 1 and s > 0, and take Σ̃′_{s,r} to be a smoothing of Σ′_{s,r} that rounds off its corners. Note that by using the smoothings Σ̃′_{s,r} we can construct compact regions F_i ⊂ Ω^+ with smooth boundaries such that (1) for each i, F′_i is contained in the interior of F′_{i+1};
(3) H^n⌊∂F′_i → H^n⌊M. By perturbing F_i slightly, we can also assume that (4) the level set flow of ∂F_i never fattens. We then directly generalize [HW17, Theorems B.6, B.8]; the proof extends verbatim.
In this section we show that if a Brakke flow has a small singular set, then the regular set is connected, provided it is connected in a neighborhood of the initial time. To prove this, we show that for a closed set S ⊂ R^{n+k} × R, a Brakke flow (with bounded area ratios) on (R^{n+k} × R) \ S extends across S provided S has vanishing n-dimensional parabolic Hausdorff measure (see also [CESY16, Appendix D], where it is shown that an integral 2-dimensional Brakke flow in R^3 \ {0} with bounded area ratios extends across the origin).
Remark. In [CHH18, Claim 8.4] it was observed that the classification of low entropy ancient flows implies connectivity of the regular part of a flow in R^3 with only (multiplicity-one) spherical and cylindrical singularities, by an argument similar to Kleiner-Lott's proof [KL17, Theorem 7.1] that a singular Ricci flow of 3-manifolds has only finitely many bad world lines. We show here that one can prove connectivity under considerably weaker hypotheses. We note that our approach has no hope of estimating the number of bad world lines. It would be interesting to study the Hausdorff dimension of bad world lines in a k-convex mean curvature flow in R^{n+1}.
We first recall a well-known extension theorem for varifolds, originally considered by de Giorgi-Stampacchia [DGS65].
Lemma F.1. Let V be a rectifiable n-varifold in R^{n+k} with bounded area ratios, i.e., V(B_r(x)) ≤ C r^n. If S ⊂ R^{n+k} is closed, H^{n−1}(S) = 0, and the restricted varifold V′ := V⌊(R^{n+k} \ S) has absolutely continuous first variation H′ ∈ L^1_loc(R^{n+k}; µ_{V′}), then V has absolutely continuous first variation equal to H′, too.
Proof. Without loss of generality, we assume that S is compact. For δ > 0, we can find balls B_{r_i}(x_i), i = 1, . . . , N, covering S with Σ_i r_i^{n−1} < δ. Choose cut-off functions 0 ≤ ξ_i ≤ 1 with ξ_i ≡ 1 outside of B_{2r_i}(x_i), ξ_i ≡ 0 on B_{r_i}(x_i), and |Dξ_i| ≤ 2/r_i. Then, set ξ_δ := Π^N_{i=1} ξ_i and note that ξ_δ vanishes near S. For a vector field Ξ ∈ C^1_c(R^{n+k}), we test the first variation of V′ with ξ_δ Ξ. Sending δ → 0, the dominated convergence theorem implies that the cut-off terms vanish in the limit. Thus, δV is absolutely continuous with respect to dµ_V and, since µ_V(S) = 0, the generalized mean curvature of V also equals H′. This completes the proof.
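For the reader's convenience, here is a sketch of the estimate behind the limit δ → 0, using only the bounded area ratios and a covering with Σ_i r_i^{n−1} < δ (possible since H^{n−1}(S) = 0): as 0 ≤ ξ_j ≤ 1,
∫ |Dξ_δ| dµ_V ≤ Σ^N_{i=1} ∫_{B_{2r_i}(x_i)} |Dξ_i| dµ_V ≤ Σ^N_{i=1} (2/r_i) · C(2r_i)^n = 2^{n+1} C Σ^N_{i=1} r_i^{n−1} < 2^{n+1} C δ,
so the cut-off contribution to the first variation vanishes in the limit.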
We now extend this to Brakke flows (recall our conventions in Section 2.4).
Theorem F.2. Let (µ(t))_{t∈I} be a 1-parameter family of Radon measures on R^{n+k} and S ⊂ R^{n+k} × R a closed set with H^n_P(S) = 0. Assume that: (1) the measures µ(t) have uniformly bounded area ratios, i.e., µ(t)(B_r(x)) ≤ C r^n;
(2) for almost every t ∈ I, there exists an integral n-dimensional varifold V(t) with µ(t) = µ_{V(t)} so that V′(t) = V(t)⌊(R^{n+k} \ S(t)) has absolutely continuous first variation in L^1_loc(R^{n+k}; dµ_{V′(t)}) and has mean curvature H orthogonal to Tan(V′(t), ·) almost everywhere.
Then (µ(t))_{t∈I} is a Brakke flow on R^{n+k}.
Proof. It suffices to prove this for S compact. We begin by defining the relevant cutoff function. Choose a family of parabolic balls P_{r_i}(x_i, t_i), i = 1, . . . , N, covering S with Σ_i r_i^n < δ. For each parabolic ball, choose a cutoff function 0 ≤ ζ_i ≤ 1 so that ζ_i ≡ 1 outside of P_{2r_i}(x_i, t_i) and ζ_i ≡ 0 on P_{r_i}(x_i, t_i). We can assume that |Dζ_i| ≤ C/r_i and |∂ζ_i/∂t| ≤ C/r_i^2. Set ζ_δ := min_i ζ_i and define a mollified function ζ_{δ,ε} as follows. Choose standard mollifiers 0 ≤ φ_1, φ_{n+k} ≤ 1 on R and R^{n+k}, respectively, and set
ζ_{δ,ε}(x, t) := ∫_{R^{n+k}×R} ε^{−n−k−2} φ_{n+k}(ε^{−1}(x − y)) φ_1(ε^{−2}(t − s)) ζ_δ(y, s) dy ds.
We now estimate the derivatives of ζ_{δ,ε}. Claim.
Proof. As usual, we can assume that S is compact. Choose parabolic balls P_{r_i}(x_i, t_i) covering S with r_i < δ and Σ_i r_i^n < δ. Set I(t) := {i : t ∈ (t_i − r_i^2, t_i + r_i^2)} and note that S(t) ⊂ ∪_{i∈I(t)} B_{r_i}(x_i). This proves that |{t ∈ [t_1, t_2] : H^{n−2}_δ(S(t)) > ε}| < Cδ/ε.
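The omitted computation is a Chebyshev-type estimate; a sketch, given the covering with Σ_i r_i^n < δ: each index i belongs to I(t) only for t in an interval of length 2r_i^2, and S(t) ⊂ ∪_{i∈I(t)} B_{r_i}(x_i) gives H^{n−2}_δ(S(t)) ≤ c Σ_{i∈I(t)} r_i^{n−2}, so
∫^{t_2}_{t_1} H^{n−2}_δ(S(t)) dt ≤ c Σ_i 2r_i^2 · r_i^{n−2} = 2c Σ_i r_i^n < 2cδ,
and the set of times with H^{n−2}_δ(S(t)) > ε has measure at most 2cδ/ε.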
Sending ε → 0 completes the proof. Combining White's parabolic stratification [Whi97, Theorem 9] with the previous corollary, this implies:
Corollary F.5. Suppose that M is a unit-regular integral n-dimensional Brakke flow in R^{n+k} with µ(t) = H^n⌊M(t) for t ∈ [0, δ), where M(t) is a mean curvature flow of connected, properly embedded submanifolds of R^{n+k} and δ > 0. Assume that M has the following properties: (1) if there is a static or quasi-static planar tangent flow at X, then X ∈ reg M;
(2) there are no static or quasi-static tangent flows supported on a union of half-planes or polyhedral cones. Then reg M is connected.
Appendix G. Localized topological monotonicity
In this appendix we localize some of the results from [Whi95]. We say a closed subset M of spacetime R^{n+1} × R is a simple flow in an open set U ⊂ R^{n+1} with smooth boundary and over a time interval I ⊂ R, or a simple flow in U × I for short, if there is a compact n-manifold M, with or without boundary, and a continuous map f such that (3) f(·, t), t ∈ I, is an embedding of M° into U, (4) t ↦ f(M°, t), t ∈ I, is a smooth mean curvature flow: (∂f/∂t(·, t))^⊥ = H(·, t), and (5) f|_{∂M×I} is a smooth family of embeddings of ∂M into ∂U. The following lemma is easily proven, but we will use it repeatedly in the sequel. The results of [Whi95] apply precisely to these W[t], W[0, T]. Since we wish to localize some of these results to open subsets Ω ⊂ R^{n+1} with smooth boundary, we introduce the following localized objects. Thus, both X and Y must be connected in W_Ω[0] to U. As such we can assume below, without loss of generality, that X, Y ∈ U.
Let us set up some notation. For each connected component V of W_Ω[0], we write V_U := V ∩ U (note that V_U may be disconnected). Choose a curve γ between X and Y so that γ is transverse to ∂U ∪ ∂Ω. For * ∈ {X, Y}, we can assume that γ does not intersect ∂⁺V(*)_U (we might have to exchange the points * ∈ {X, Y} for some other point in V(*)_U). Indeed, we can simply consider the last time that γ intersects ∂⁺V(X)_U and the earliest time that γ intersects ∂⁺V(Y)_U and truncate γ near these times (to still have endpoints in U).
Choose a curve η ⊂ W_Ω[0, T] from Y to X so that η ∩ (U × [0, T]) ⊂ U × {0} and consists of two arcs exiting U through ∂⁺V(Y)_U ∪ ∂⁺V(X)_U (with a single transverse intersection with each). Concatenating γ with η, we can find a loop σ_1 in W[0, T]. By [Whi95, Theorem 5.4], there is a homotopy of loops in W[0, T] between σ_1 and a loop σ_0 in W[0]. Perturb σ_0 slightly so it is transverse to ∂U. By construction and the simplicity of M in U × [0, T], the loop σ_0 has the property that for * ∈ {X, Y}, the mod 2 intersection number of σ_0 with ∂⁺V(*)_U is 1. This is a contradiction.
Proof. For 0 < T ≤ T_0 fixed, suppose that [C] ∈ H_{n−1}(W_Ω[T]) is a polyhedral (n−1)-chain so that there is a polyhedral n-chain P in W_Ω[0, T] with ∂P = C. We can assume that the support Γ of P is disjoint from Ũ ∪ {t = 0}. Consider the projection π(x, t) = (x, T). Set π_#P = P′ and note that ∂P′ = C. We aim to show that P′ is homologous (relative to its boundary) to a chain disjoint from M(T). Let M′ be the level set flow generated by Γ. By the avoidance principle for weak set flows (cf. [Whi95, Theorem 4.1]), M′(t) remains a positive distance from M(t) as well as a positive distance from ∂Ω(t). In particular, we can enlarge Ω slightly to Ω′ to ensure that M′ avoids some tubular neighborhood U′ of ∂Ω′ (so in particular, it is a simple flow in U′ × [0, T]).
Fatten M′(T) slightly to get a closed set K in R^{n+1} × {T} that is disjoint from U ∪ M(T) and has smooth boundary. If γ is a loop in (Ω × {T}) \ K, then by Theorem G.3 applied to M′, γ is homologous in (Ω′ × [0, T]) \ M′ to a loop at t = 0. In particular, this means that the oriented intersection number of γ with P (and thus P′) is zero. Now, assign each component of (Ω′ × {T}) \ (K ∪ P′) a multiplicity so that the multiplicity changes by n when crossing a face of P′ with multiplicity n; we can do this consistently, since the intersection of any loop avoiding K with P′ is zero (this is only well defined up to a global additive constant, but this will not matter). This yields an (n + 1)-chain Q in Ω′ × {T} whose boundary is a chain in K together with the part of P′ that is disjoint from K. Now P′ − ∂Q has ∂(P′ − ∂Q) = C and is supported in K. As such, P′ − ∂Q is disjoint from M(T). The result follows.
Simulation of deformation in thin polymer films
A model of a polymer system for computer simulation of deformation processes is presented. The model is a synthesis of analytical calculations (a polymer chain in a tube) and a computer experiment (simulation of the orientational ordering of polar groups by the Monte Carlo method). Thanks to the analytical component, this model makes it possible to analyse relatively large systems compared with other models, while the required CPU time remains essentially constant. The results of a simulation of stretching (stress-strain curves) of a thin polymer film for various values of chain stiffness are given. The obtained curves have a characteristic "yield drop".
Introduction
Polymeric coatings on the surface of solids are currently subject to special requirements. Particular attention is paid to the temperature patterns of the development and decline of highly elastic deformation [1]. Unfortunately, the well-developed concepts of crystal plasticity are inapplicable to disordered solids, including polymers. The most adequate model predicting the viscoelastic properties of polymer melts and concentrated solutions is the reptation, or tube, model [2]. There are numerous modifications of the basic reptation model, which use additional factors and corrections (for example, [3] and [4]). Currently, innovative approaches are being actively developed to describe the deformation behavior of solids with spatial disorder [5]. One of the most common methods of computer simulation is the method of molecular dynamics (MMD). MMD simulations have significant limitations associated with the small volume of the system under consideration and slow relaxation when an external stress is applied. In practice, this makes it nearly impossible to use MMD for the study of weakly crosslinked networks and for obtaining quasi-equilibrium stress-strain dependences. Therefore, the aim of this work is to develop a model that is a synthesis of analytical calculations and computer simulation methods. This model adequately describes deformation processes and uses less CPU time when performing calculations. Figure 1a shows the chemical structure of polyvinylidene fluoride (PVDF). The polymer chains are located perpendicular to the plane of the figure and have polar groups (marked with p in Fig. 1b) that are perpendicular to the backbone of the polymer chain. Based on the chemical structure, a model of a multi-chain polymer system has been developed. It is considered that the macromolecule is surrounded by other chains. As in the reptation model, it is assumed that, due to topological limitations, the chain does not extend beyond the limits of a tubular cylindrical region (Fig. 2). The diameter D of the tube is equal to the interchain distance and depends on the chemical structure of the polymer. The most universal measure of the flexibility of a polymer molecule is the Kuhn segment length l. Its value is determined experimentally by light scattering. For most known synthetic polymers, Kuhn segment values are in the range of 15-30 Å. That is, the Kuhn segment length and the interchain distance are comparable with each other. But the length of the Kuhn segment depends on the temperature. Therefore, in this paper we characterize the flexibility of a polymer molecule by its bending stiffness E, which we consider a constant.
Model
The stiffness is related to the length of the Kuhn segment by formula (1). We consider the region of the polymer molecule that lies in the plane of Figure 2. The polymer molecule touches the tube wall at the point O and intersects the tube radius at the point A. Let u⃗(s) be the tangential unit vector along the chain as a function of the contour distance s from the point O. The correlation function (2) then gives the average cosine of the angle between the tangent vectors at the two points, and this average cosine characterizes the chain elongation. Fig. 3 shows its dependence on the ratio of the Kuhn segment length to the tube radius. When this ratio is large, the macromolecule is completely extended and the dipole can rotate only in a plane perpendicular to the tube axis. The opposite limit corresponds to the disordered state of the polymer; in this case, the macromolecule can fold arbitrarily inside the tube. It is assumed that the main contribution to the potential energy of the system is made by the interaction energy of the polar groups. On the other hand, the chain topology has the main influence on the entropy of the system. The free energy of the system is calculated using formula (3), whose terms are, in order, the Lennard-Jones potential and the dipole-dipole interaction potential summed over pairs n, m of nearest-neighbor dipoles, and the entropy S of the system.
Each polar group interacts with eight dipoles located on adjacent chains and with two dipoles of the same chain. The orientation of each dipole is characterized by two angles, which we denote φ and θ (Fig. 1b). The polar angle φ characterizes the dipole orientation in the plane shown in Fig. 1b and can take any value in the interval [0; 2π].
The distribution of the angle θ (the orientation of the dipole plane relative to the plane XOY) depends on the elongation degree of the chain, and its value lies in the interval [−θ_1; θ_1]. For one boundary value of θ_1 the lattice model corresponds to the XY model, and for the other to the Heisenberg model. To calculate the entropy of the system under consideration (the third term in formula (3)), the entropy formula for a polymer molecule in a tube is used [5].
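To make the sampling procedure concrete, the following minimal sketch implements a Metropolis Monte Carlo update for a lattice of dipoles with the two orientation angles described above. The lattice size, temperature, coupling constant and the simplified nearest-neighbour energy −J(d_a · d_b) standing in for the full Lennard-Jones plus dipole-dipole potential of formula (3) are all illustrative assumptions, not the parameters used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters -- NOT the values used in the paper.
N = 16                  # chains on an N x N square lattice (cross-section)
kT = 1.0                # temperature, reduced units
J = 1.0                 # nearest-neighbour dipole coupling (assumed form)
theta1 = np.pi / 3      # half-width of the allowed out-of-plane angle

phi = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))    # polar angle in [0, 2*pi)
theta = rng.uniform(-theta1, theta1, size=(N, N))   # angle in [-theta1, theta1]

def unit_dipole(phi, theta):
    """Unit dipole vectors from the two orientation angles (assumed convention)."""
    return np.stack([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)], axis=-1)

def site_energy(i, j, phi, theta):
    """Energy of dipole (i, j) with its 4 lattice neighbours (periodic).
    The simplified coupling -J * (d_a . d_b) stands in for the full
    Lennard-Jones + dipole-dipole potential of formula (3)."""
    d = unit_dipole(phi, theta)
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e += -J * np.dot(d[i, j], d[(i + di) % N, (j + dj) % N])
    return e

def metropolis_sweep(phi, theta):
    """One Metropolis sweep: propose a fresh orientation at N*N random sites."""
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        old = (phi[i, j], theta[i, j])
        e_old = site_energy(i, j, phi, theta)
        phi[i, j] = rng.uniform(0.0, 2.0 * np.pi)
        theta[i, j] = rng.uniform(-theta1, theta1)
        d_e = site_energy(i, j, phi, theta) - e_old
        if rng.random() >= np.exp(min(0.0, -d_e / kT)):
            phi[i, j], theta[i, j] = old            # reject the move

for _ in range(200):
    metropolis_sweep(phi, theta)

# Magnitude of the mean dipole: an order parameter for the ordering transition.
print(np.linalg.norm(unit_dipole(phi, theta).mean(axis=(0, 1))))
```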
Simulation of deformation
The calculations were carried out by the Monte Carlo method. Figures 4 and 5 show the dependences of the free energy of the polymer system on the interchain distance for different values of the chain stiffness. The force-strain diagram for a fixed value of the chain stiffness is presented in Fig. 6. Initially (point A1), the system was in a disordered state. Section A1-A2 corresponds to the elastic stretching of the polymer chain.
For further stretching, less force is required, so a characteristic "yield drop" appears on the diagram. The simulation data showed that in the section A4-A5 a negative force would be needed for the deformation, which contradicts any experiment. To avoid this contradiction, we drew a horizontal straight line A3-A6, taking into account that the net deformation work on this section is equal to zero. On the section A3-A6, a phase transition from a disordered state to an ordered one occurs. At the point A3, the angle θ_1 is still appreciable, and at the point A6 it tends to zero with increasing deformation. Figure 7. The dependence of force on the relative stretching (system elongation).
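Reading the zero-work condition on A3-A6 as the usual equal-area (Maxwell) rule, the tie line can be located numerically as in the sketch below; the bisection strategy and the toy force curve are illustrative choices of ours, not the procedure used to produce Fig. 6.

```python
import numpy as np

def maxwell_tie_line(eps, force):
    """Replace the descending branch of a non-monotone force-strain curve by
    a horizontal tie line (A3-A6), chosen by bisection so that the net work
    difference between curve and line vanishes (equal-area rule).
    Assumes one local max / local min pair and uniform strain spacing."""
    deps = eps[1] - eps[0]
    desc = np.where(np.diff(force) < 0)[0]
    imax, imin = desc[0], desc[-1] + 1              # local max and local min
    lo, hi = force[imin], force[imax]
    for _ in range(60):                             # bisection on the level f*
        f_star = 0.5 * (lo + hi)
        i3 = np.where(force[:imax + 1] <= f_star)[0][-1]    # entry point (A3)
        i6 = imin + int(np.argmax(force[imin:] >= f_star))  # exit point (A6)
        area = np.sum(force[i3:i6 + 1] - f_star) * deps     # signed work gap
        if area > 0.0:
            lo = f_star     # curve does net positive extra work: raise line
        else:
            hi = f_star
    flat = force.copy()
    flat[i3:i6 + 1] = f_star
    return flat, f_star

# Toy curve with a "yield drop", standing in for the simulated data of Fig. 6.
eps = np.linspace(0.0, 1.0, 400)
force = eps - 2.4 * eps**2 + 1.6 * eps**3
flat, f_star = maxwell_tie_line(eps, force)
print(round(float(f_star), 4))
```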
Conclusion
The presented model allows us to calculate the free energy for various parameters of the polymer system. This is the advantage of this modeling method compared with MMD. In addition, because the orientational ordering is calculated analytically, it becomes possible to use the Monte Carlo method to study the deformation of the system.
Understanding Land Subsidence Along the Coastal Areas of Guangdong, China, by Analyzing Multi-Track MTInSAR Data
Coastal areas are usually densely populated, economically developed, and ecologically dense, and they are subject to a phenomenon that is becoming increasingly serious: land subsidence. Land subsidence can accelerate the increase in relative sea level, lead to a series of potential hazards, and threaten the stability of the ecological environment and human lives. In this paper, we adopted two commonly used multi-temporal interferometric synthetic aperture radar (MTInSAR) techniques, small baseline subset (SBAS) and temporarily coherent point (TCP) InSAR, to monitor the land subsidence along the entire coastline of Guangdong Province. The long-wavelength L-band ALOS/PALSAR-1 dataset collected from 2007 to 2011 is used to generate the average deformation velocity and deformation time series. Linear subsidence rates over 150 mm/yr are observed in the Chaoshan Plain. The spatiotemporal characteristics are analyzed and then compared with land use and geology to infer potential causes of the land subsidence. The results show that (1) subsidence with notable rates (>20 mm/yr) mainly occurs in areas of aquaculture, followed by urban, agricultural, and forest areas, with percentages of 40.8%, 37.1%, 21.5%, and 0.6%, respectively; (2) subsidence is mainly concentrated in the compressible Holocene deposits and is clearly associated with the thickness of the deposits; and (3) groundwater exploitation for aquaculture and agricultural use outside city areas is probably the main cause of subsidence along these coastal areas.
Introduction
Guangdong Province is located on the South China Sea coast and has the longest coastline of any province in China, approximately 4300 km. Along this coastline, several large agglomerations (e.g., the Pearl River Delta (PRD) and the Chaoshan Plain (CSP)), harbors (Zhujiang, Zhanjiang, and Shantou), and a wide range of aquaculture and agricultural areas are clustered, controlling the primary economic activity and the ecological environment. Over the past few decades, this coastline has experienced rapid growth in both population and economy. The PRD has undergone a shift from an agriculture-based economy to an industry- and technology-based economy, making the PRD Asia's fourth-largest economy [1,2]. To satisfy the needs of urban residents and the export trade, an increasing amount of land has been exploited through the expansion of aquaculture land use and the reclamation of seashore. Among these exploitations, large-scale freshwater aquaculture has made Guangdong Province one of the leading aquaculture production regions in China.
SAR Datasets and Landsat Datasets
To generate the surface motions of the whole Guangdong coastal area, we collected 253 archived SAR images acquired from 2007 to 2011 by the L-band ALOS-1/PALSAR satellite, covering 16 adjacent orbits and an area of approximately 49,000 km² (represented by blue rectangles in Figure 1). Table 1 gives the imaging parameters of each orbit, including the number, time span, and systematic parameters. An average of 12 acquisitions per frame was used during the 3.5-year period. In addition to the active remote sensing datasets, we also collected 8 optical Landsat images (represented by red dotted rectangles in Figure 1; see Table 2), aiming to generate land classification maps. Although there is a 2-year time span between the SAR images and the Landsat images, the changing rates of agricultural land, forest land, and aquaculture land are only about 8.29%, 2.09%, and 2.99%, respectively, according to the Statistics Bureau of Guangdong Province (see Table S1) [40]. Moreover, Landsat 8, launched in 2013, has a superior spectral resolution compared with Landsat 5 and 7 [41,42].
In Situ Datasets
In addition to the remote sensing datasets, in situ datasets were collected in this study, including time-series data from 7 GPS stations and a dataset of Quaternary deposit thicknesses derived from boreholes (represented by black stars in Figure 1 and black circles in Figure 2, respectively). The GPS data were used to validate the InSAR-derived results, and the borehole data were adopted to generate the 2D deposit maps (Figure 2).
Methodology
MTInSAR was developed based on DInSAR, aiming to extract the deformation from the mixed phases of a series of differential interferograms. That is, the residual topographic phase (ϕ_topo), deformation phase (ϕ_def), orbit phase (ϕ_orb), atmospheric phase (ϕ_aps) and noise (ϕ_noi) constitute the differential interferometric phase (ϕ_Dint):
ϕ_Dint = W{ϕ_topo + ϕ_def + ϕ_orb + ϕ_aps + ϕ_noi},
where W{·} is the phase wrap operation. To generate high-precision deformation rates and time series, MTInSAR mainly focuses on the elimination of the atmospheric phase and decorrelation noise based on wrapped or unwrapped phases. In this case, i.e., coastal areas with large coverage and serious temporal decorrelation, multi-master MTInSAR with a large multilooking number (3 and 8 looks in the range and azimuth directions, respectively) is adopted. Specifically, considering the complicated data conditions, such as frames with a limited number of SAR images and serious atmospheric effects in the coastal area, two kinds of MTInSAR are adopted. The first is SBAS-InSAR, based on unwrapped phases, which is appropriate for most situations. The second is an arc-based method, TCPInSAR, which is based on wrapped phases and aims to eliminate the influence of serious atmospheric effects. Considering the data processing efficiency, the latter is adopted in the cases where the number of SAR images is limited and the atmospheric effects are serious (tracks 453, 457, 462 and 463); see the comparison between pixel-based SBAS-InSAR and arc-based TCPInSAR in Figure S1. After generating the deformation over the 16 orbits, the average velocity maps in the line of sight (LOS) direction are mosaicked after removing the relative offsets in the overlapped areas by a linear polynomial fitting [43], considering the approximately equal incidence angles (38.7°; see Supplementary Table S1). Moreover, the precision of the InSAR results is evaluated, including the relative precision calculated from the velocity differences in the overlapped areas of adjacent orbits and an analysis of the absolute precision with respect to GPS measurements. Next, the land-use classification maps along the coastal areas are obtained based on the selected optical Landsat images. Finally, the deformation features are observed, and the causes of the deformation are analyzed qualitatively and quantitatively. The flowchart is shown in Figure 3.
SBAS-InSAR, developed by Berardino et al. [16] and improved by researchers in recent years, is a pixel-based approach that has been widely used for wide-ranging deformation monitoring [16,26]. In this paper, GAMMA software is used to generate the unwrapped differential interferometric phases. The SAR images of each track are first coregistered with a master image, as shown in Table 1. Interferograms are selected with given spatial and temporal baseline thresholds (850 m and 800 days, respectively), aiming to reduce the influence of decorrelation and digital elevation model (DEM) error on the parameter estimation. After interferogram selection, traditional DInSAR with a multilook operation (three looks and eight looks in the range and azimuth directions, respectively) is applied to generate the unwrapped phases, including removal of the flat-earth and topographic phases, Goldstein filtering, and minimum cost flow (MCF) phase unwrapping. The orbit error is removed with a polynomial fitting. Then, pixels with a mean coherence above 0.6 are selected for the following processing procedures, including residual topography estimation, atmosphere elimination, deformation time-series calculation, and average velocity calculation. Here, we used a modified method to reduce the influence of the residual topography caused by the correlation of the perpendicular baseline of ALOS-1 in the time dimension [44]. A spatial filter and a temporal filter are used to eliminate the atmosphere, with a window size of 500 m and two years, respectively.
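As an illustration of the small-baseline pair selection step, the following sketch keeps every image pair within the stated thresholds (850 m perpendicular baseline, 800 days); the dates and baselines in the example are invented for demonstration only.

```python
import itertools
import numpy as np

def select_pairs(dates, perp_baselines, max_bperp=850.0, max_days=800.0):
    """Small-baseline interferogram selection: keep every image pair whose
    perpendicular and temporal baselines are below the thresholds used in
    this study (850 m and 800 days)."""
    pairs = []
    for i, j in itertools.combinations(range(len(dates)), 2):
        dt = abs(float((dates[j] - dates[i]) / np.timedelta64(1, "D")))
        db = abs(perp_baselines[j] - perp_baselines[i])
        if dt <= max_days and db <= max_bperp:
            pairs.append((i, j))
    return pairs

# Invented acquisition dates and baselines, for demonstration only.
dates = np.array(["2007-01-12", "2007-08-15", "2009-03-20", "2010-11-26"],
                 dtype="datetime64[D]")
bperp = np.array([0.0, 420.0, -310.0, 650.0])
print(select_pairs(dates, bperp))   # prints the pairs meeting both thresholds
```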
A bootstrap with 500 resamples is adopted to estimate the uncertainty of the average velocity fitting, and pixels whose uncertainty exceeds three times the standard deviation (std) are deleted. We also deleted those pixels whose number of coherent observations is below two thirds of the total number.
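A minimal sketch of the bootstrap rate-uncertainty estimate (500 resamples, as stated above) might look as follows; the synthetic displacement series and noise level are illustrative assumptions.

```python
import numpy as np

def bootstrap_velocity(t_years, disp_mm, n_boot=500, seed=0):
    """Bootstrap (500 resamples, as in the text) of a linear rate fit:
    returns the mean rate (mm/yr) and its bootstrap standard deviation."""
    rng = np.random.default_rng(seed)
    n = len(t_years)
    rates = np.full(n_boot, np.nan)
    for k in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample epochs with replacement
        if np.ptp(t_years[idx]) == 0.0:      # degenerate resample: skip it
            continue
        rates[k] = np.polyfit(t_years[idx], disp_mm[idx], 1)[0]
    return np.nanmean(rates), np.nanstd(rates)

# Synthetic example: ~12 acquisitions over 3.5 years, -15 mm/yr plus noise.
t = np.linspace(0.0, 3.5, 12)
d = -15.0 * t + np.random.default_rng(1).normal(0.0, 5.0, t.size)
print(bootstrap_velocity(t, d))
```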
Unlike SBAS-InSAR, TCPInSAR is a method based on the wrapped phases of arcs [21,45,46]. It selects temporarily coherent points (TCPs), and the unknown parameters, i.e., the deformation rate, orbit phase, and residual topographic phase, can be calculated without a phase unwrapping operation. TCPInSAR uses local Delaunay networks with uniform sampling in the range and azimuth directions, aiming to improve the arc density over the whole image. The unknown parameters are solved with a joint orbit-error model [46] and a phase-ambiguity detection model [21] based on the phase differences of arcs. Here, we used a sampling spacing of 700 m in both directions. The average velocity and residual topography of the arcs are solved with an ambiguity detection method; that is, arcs with residuals over a given threshold are deleted (1.5 rad is used in this case). After generating the parameters of each arc, the parameters at each pixel are calculated based on an L1-norm least-squares estimation. The nonlinear deformation is obtained after spatial-temporal filtering of the residual phase. The final deformation time series is generated by combining the linear and nonlinear deformation.
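The per-pixel L1-norm estimation from arc observations can be sketched with a simple iteratively re-weighted least-squares (IRLS) solver, as below; the tiny arc network and the IRLS implementation are an illustrative stand-in of ours, not the TCPInSAR code.

```python
import numpy as np

def l1_solve(A, b, n_iter=30, eps=1e-6):
    """L1-norm least squares via iteratively re-weighted least squares.
    A: arc incidence matrix (one row per arc, +1/-1 at its two pixels, with a
    reference pixel's column removed); b: per-arc rate differences."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary L2 start
    for _ in range(n_iter):
        sw = np.sqrt(1.0 / np.maximum(np.abs(b - A @ x), eps))
        x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
    return x

# Tiny invented network: 4 pixels (pixel 0 is the reference), 5 arcs,
# with a gross outlier on the last arc that L1 estimation should reject.
A_full = np.array([[-1, 1, 0, 0],
                   [0, -1, 1, 0],
                   [0, 0, -1, 1],
                   [-1, 0, 1, 0],
                   [0, -1, 0, 1]], dtype=float)
truth = np.array([0.0, -3.0, -8.0, -12.0])          # mm/yr
b = A_full @ truth
b[-1] += 40.0                                       # outlier arc
print(l1_solve(A_full[:, 1:], b))                   # approx. [-3, -8, -12]
```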
Usually, 3D deformation, that is, deformation in both the vertical and horizontal directions, is calculated by the combination of ascending and descending images, which can help us infer hazard mechanisms. However, in this case, only descending SAR images were collected. Furthermore, according to existing research, the horizontal deformation in the coastal areas is not obvious [47]. Hence, we assume that the horizontal displacement is negligible; the deformation in the vertical direction can then be generated as d_v = d_los/cos θ, where d_v and d_los are the displacements in the vertical and line-of-sight (LOS) directions, and θ is the incidence angle. The velocity values referred to in the following sections are in the LOS direction unless otherwise specified. In addition to the deformation monitoring techniques, an object-oriented nearest-neighbor classification method based on multiresolution segmentation and supervised classification is also adopted for generating the land-use classification map. The details of the object classification method, which are beyond the scope of this study, can be found in previously published research [48]. As described in Section 2.2.1, because of the low rate of land-use change and the superiority of Landsat 8, the comparison between subsidence and the land-use classification map is appropriate, which can help us understand the mechanism of land subsidence. The accuracy of the classification map is assessed against a 30-m resolution product (with an overall accuracy of 97.29% within the range of [20°-30°N, 60°-120°E]) obtained in 2015 from the National Science & Technology Infrastructure of China [49,50]. The overall accuracy of the classification map in this study is about 89.5%.
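The LOS-to-vertical projection is a one-line computation; a sketch using the incidence angle quoted above:

```python
import numpy as np

def los_to_vertical(d_los, incidence_deg=38.7):
    """Project LOS displacement to vertical under the negligible-horizontal-
    motion assumption: d_v = d_los / cos(theta). 38.7 deg is the PALSAR
    incidence angle quoted in the text."""
    return d_los / np.cos(np.radians(incidence_deg))

print(los_to_vertical(-20.0))  # -20 mm/yr in LOS -> about -25.6 mm/yr vertical
```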
Validation of InSAR Results
To quantitatively evaluate the accuracy of the InSAR-derived results, relative and absolute accuracy measures are adopted [51,52,53]. The former is also called inner precision, and is defined via the velocity difference in the overlapping areas of adjacent tracks, while the absolute accuracy is defined as the consistency with in situ measurements. Here, we used both methods to verify the deformation accuracy. Because the occurrence frequency of large-scale terrain deformation in the coastal areas is very low, the relative accuracy is calculated from the velocity consistency in the overlapping areas between adjacent orbits after a 1st-order polynomial correction [43]. The histograms of the average velocity differences between adjacent orbits are shown in Figure 4, and the mean and std values are shown in Table 3. It is obvious that the relative precision of the velocity (the std of the velocity difference) is generally less than 3.1 mm/yr, with the exception of several values of approximately 4 mm/yr. The cause of these high values may be the limited number of SAR images [54].
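A sketch of the relative-accuracy computation (ramp removal by a 1st-order polynomial over the overlap, then the mean and std of the residual differences); the coordinates and noise levels below are synthetic:

```python
import numpy as np

def overlap_stats(xy, v_a, v_b):
    """Relative accuracy in a track overlap: remove a 1st-order polynomial
    (planar ramp) from the velocity difference, then report mean and std."""
    diff = v_a - v_b
    G = np.column_stack([np.ones(len(xy)), xy])     # columns: 1, x, y
    ramp = G @ np.linalg.lstsq(G, diff, rcond=None)[0]
    resid = diff - ramp
    return resid.mean(), resid.std()

# Synthetic overlap: a small ramp plus ~2.5 mm/yr of independent noise.
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 50_000.0, size=(1000, 2))     # metres
v_a = rng.normal(0.0, 2.0, 1000)
v_b = v_a + 1e-5 * xy[:, 0] + rng.normal(0.0, 2.5, 1000)
print(overlap_stats(xy, v_a, v_b))                  # mean ~0, std ~2.5
```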
In addition to the statistical offsets in the overlapped areas, we also compared the velocities and deformation time series with the data collected from GPS stations, as shown in Figures 4b and 5. Note that the velocity comparison between InSAR and GPS is in the vertical direction. The correlation between the InSAR and GPS velocities is 0.71, confirming the high precision of the InSAR results. Moreover, the std of the deformation time-series differences between the two measurements is also calculated, as shown in Figure 5. The GPS stations FOMO and HKSL are selected as reference stations in tracks 460 and 459 for the deformation time-series comparison, respectively, due to the different acquisition dates of the SAR image series (see Table 1) and the different coverage. The results show that the stds of the deformation time series for stations DSMG, HKFN, HKKT, HKLT, and HKST are 5.5 mm, 8.2 mm, 6.8 mm, 8.7 mm and 8.7 mm, respectively. The millimeter-level accuracy of the deformation series also proves the reliability of the InSAR-derived results.
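The absolute-accuracy comparison reduces to a correlation coefficient and the std of the InSAR-minus-GPS differences; a sketch with synthetic station values:

```python
import numpy as np

def compare_insar_gps(insar, gps):
    """Absolute accuracy: Pearson correlation of the two velocity (or
    displacement) samples and the std of the InSAR-minus-GPS differences."""
    r = np.corrcoef(insar, gps)[0, 1]
    return r, float(np.std(insar - gps))

# Synthetic vertical velocities at 7 stations (mm/yr), for demonstration.
rng = np.random.default_rng(3)
gps = rng.normal(-10.0, 8.0, 7)
insar = gps + rng.normal(0.0, 4.0, 7)
print(compare_insar_gps(insar, gps))
```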
Table 3. Mean and standard deviation (std) of the average deformation rate difference in the overlapping areas of adjacent tracks.
Results
The surface displacement map of the Guangdong coastal areas derived from MTInSAR is shown in Figure 6. Positive values (yellow colors) represent movement toward the satellite (uplift). Negative values (purple colors) indicate movement away from the satellite (subsidence). Selected areas with concentrated land subsidence rates in excess of 20 mm/yr and discrete areas with gentle deformation are analyzed to grasp the features and characteristics of the surface deformation. Three obvious subsidence areas, in the LZP, PRD, and CSP, are identified. In addition, a land-use classification map of the coastal areas of Guangdong Province is also generated and shown in Figure 7. Aquaculture, agriculture, urban, forest, and water classes are extracted based on the collected Landsat images.
Subsidence in the Leizhou Peninsula
The LZP is located in southwest Guangdong Province. Crops are grown on more than 90% of the peninsula (see the yellow areas in Figure 7), making the LZP the granary of Guangdong Province. Following the SBAS-InSAR procedure, the result for the LZP is shown in Figure 8. The high subsidence (>20 mm/yr) in the LZP is concentrated in the two bays (Zhanjiang Bay and Leizhou Bay) and in the southwest of Donghai Island, as outlined by the black dotted circles in Figure 8. The subsidence area is approximately 251.5 km², with an average subsidence rate of 15.1 mm/yr, and the maximum subsidence rate of 58.0 mm/yr occurs on Donghai Island. The location of these subsidence bowls is consistent with the trend of middle-layer groundwater decline [4]. In addition to the areas with high subsidence rates, some other subsidence bowls with areas over 4 km², located along the western coastline, are also observed. The corresponding optical images are also given, showing that the obvious subsidence bowls are located in areas with aquaculture land use. Moreover, the deformation time series of two selected points in aquaculture areas are shown in Figure 8b,c, with an approximately linear trend. Discrete subsidence with an average rate of 7.0 mm/yr is present in inland LZP, such as the subsidence areas of Suixi and Leizhou, with a nonlinear deformation time-series trend, as shown in Figure 8e. In addition to the subsidence in agricultural and aquaculture areas, small-scale displacement (with an average subsidence rate of 9.8 mm/yr) is also observed in the urban areas of Zhanjiang city, especially in its industrial areas (see inset F in Figure 8a,d). Besides the subsidence areas, a small-scale uplift (~3 mm/yr) is observed in Dengjiaolou, which is consistent with the results of traditional measurements of tectonic activity [47].
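Statistics such as the bowl area and average rate quoted above can be derived from the velocity grid with a simple threshold; a sketch on a synthetic rate map (the threshold and pixel size are illustrative):

```python
import numpy as np

def bowl_statistics(rate_mm_yr, pixel_area_km2, threshold=-20.0):
    """Area (km^2) and mean rate (mm/yr) of a subsidence bowl, defined as the
    pixels subsiding faster than the threshold (negative rates = subsidence)."""
    bowl = rate_mm_yr <= threshold
    area = float(bowl.sum()) * pixel_area_km2
    mean_rate = float(rate_mm_yr[bowl].mean()) if bowl.any() else float("nan")
    return area, mean_rate

# Synthetic velocity grid with ~100 m pixels (0.01 km^2 each).
rng = np.random.default_rng(4)
grid = rng.normal(-5.0, 12.0, size=(500, 500))
print(bowl_statistics(grid, pixel_area_km2=0.01))
```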
Subsidence in the Pearl River Delta
The spatial-temporal characteristics of the subsidence in the PRD differ greatly in magnitude and extent from those of the LZP, as shown in Figure 9. The results indicate that the large-scale subsidence areas with an average rate in excess of 35 mm/yr reach a total area of 263.37 km², in comparison to 15.16 km² in the LZP during approximately the same period. The subsidence in the PRD is mainly distributed along the Pearl River tributaries (see Figure 9). Two large-scale subsidence bowls are observed along the Modaomen channel (outlined by B1 and B2 in Figure 9), with average subsidence rates of 32.0 mm/yr and 25.1 mm/yr and areas of 265.43 km² and 387.32 km², respectively. The statistical areas are calculated after kriging interpolation, due to the temporal decorrelation of the surrounding water. Subsidence bowls with low subsidence rates are also observed along the other three channels: the Hengmen channel, the Bingqili channel, and the Jiaomen channel. In addition, local subsidence areas with notable rates are distributed along the coastline, especially in reclamation areas (outlined between the blue and red lines, covering the period from 1984 to 2011, in Figure 9). For example, an artificial island located in Nansha district and a reclamation area located in Jinwan district (see the dotted purple circles in Figure 9) have maximum subsidence rates of 71.9 mm/yr and 150.9 mm/yr, respectively.
Notably, some small-scale uplift is found in mountainous areas (see the black dotted circles in Figure 9). This uplift may be caused by elevation-related atmospheric effects or residual topography. Besides the average deformation map, deformation time-series of four selected points are also generated (see Figure 9b-e). The linear trend is found in both agricultural and aquaculture areas (see Figure 9f-i).
Subsidence in the Chaoshan Plain
The CSP consists of three river alluvial plains, the Lianjiang River Plain (LJP), Rongjiang River Plain (RJP), and Hanjiang River Plain (HJP), represented by purple, black, and red dotted circles in Figure 10, respectively. The total area of these plains reaches 2600 km². The displacement results in the CSP are presented in Figure 10 and exhibit subsidence that is much larger and more intense than the results for the LZP and PRD. As shown in Figure 10, a large-scale subsidence bowl with an obvious boundary is observed in the LJP, outlined by B1. Its area and average subsidence rate are 342.21 km² and 41 mm/yr, respectively, and the maximum subsidence rate reaches 157.2 mm/yr. In addition to the subsidence in the LJP, some discrete small-scale subsidence areas are observed in the HJP and are marked by black circles in Figure 10, i.e., B2. This subsidence is mainly concentrated along the western side of the HJP, and the maximum subsidence rate reaches 32.9 mm/yr. In the RJP, a small subsidence area also occurs in Jieyang city, with a maximum subsidence rate of 24.2 mm/yr. Moreover, four points located in mixed agricultural and urban areas (see Figure 10f-i) are selected to generate deformation time-series, as shown in Figure 10b-e. A linear trend of the deformation time-series is also found.
Subsidence in Other Coastal Areas
In addition to the three selected subsidence areas, other subsidence areas are distributed discretely along the coastline, especially in river estuaries: for example, Hailing Bay and the estuary of the Moyang River in track 463 (insets A and B in Figure 6, respectively), the estuary of Guanghai Bay in track 461 (inset D in Figure 6), and the estuary of the Huangjiang River in track 456 (inset G in Figure 6). In addition to the subsidence occurring along the river system, some local subsidence is also observed in forest areas, such as the discrete subsidence bowls in tracks 463 and 457 (see insets C and F in Figure 6). Moreover, elongated subsidence located in an industrial petrochemical region is also observed in track 458 (see inset E in Figure 6).
A Positive Correlation between Sedimentary Thickness and Subsidence
We focused on a geological formation that is usually prone to natural hazards, namely, the Holocene series (Q4), to investigate the correlation between these two factors. As shown in Figure 1, apart from some discrete distribution along the coastline, Q4 is mainly distributed in the LZP, PRD, and CSP. The areas of subsidence have a distribution similar to that of the Q4 deposits, especially in areas with notable subsidence rates. To analyze this similarity clearly, we calculated histograms of the subsidence rate (>15 mm/yr) within and outside areas with Q4 deposits; the results are shown in Figure 11a. The percentage along the Y axis is calculated by the following equations:

$$\mathrm{Rer}_{\mathrm{Q4}} = \frac{\text{number of pixels with subsidence rate} > 15\,\text{mm/yr within the Q4 area}}{\text{total number of pixels within the Q4 area}}$$

$$\mathrm{Rer}_{\mathrm{no\text{-}Q4}} = \frac{\text{number of pixels with subsidence rate} > 15\,\text{mm/yr outside the Q4 area}}{\text{total number of pixels outside the Q4 area}}$$

Subsidence is mainly located within the Q4 area, especially in areas with subsidence rates higher than 40 mm/yr (see the blue columns in Figure 11a).
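As a minimal sketch (not the authors' actual processing chain), the following Python/NumPy snippet shows how Rer_Q4 and Rer_no-Q4 could be computed from a subsidence-rate raster and a Q4 mask; the file names, and the convention that subsidence rates are stored as positive magnitudes in mm/yr, are assumptions.

import numpy as np

# Hypothetical inputs: subsidence-rate raster (positive mm/yr, NaN over water)
# and a boolean Holocene (Q4) mask on the same grid.
rate = np.load("subsidence_rate.npy")
q4 = np.load("q4_mask.npy").astype(bool)

valid = ~np.isnan(rate)
fast = valid & (rate > 15.0)  # pixels subsiding faster than 15 mm/yr

rer_q4 = fast[q4].sum() / valid[q4].sum()        # share inside the Q4 area
rer_no_q4 = fast[~q4].sum() / valid[~q4].sum()   # share outside the Q4 area
print(f"Rer_Q4 = {rer_q4:.1%}, Rer_no-Q4 = {rer_no_q4:.1%}")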
To quantitatively analyze the correlation between these two factors, we also considered the thickness of the Q4 deposits in the LZP, PRD, and CSP (see the Q4 depth map in Figure 2). The Q4 thickness maps of the LZP and CSP were generated by kriging interpolation based on depths collected from borehole measurements, which are represented by black circles [55]. We divided the Q4 thickness into several sections: 0 m to 10 m with an interval of 5 m, 10 m to 50 m with an interval of 10 m, and 50 m to 170 m with an interval of 20 m. We then calculated the average subsidence rate in these sections across the LZP, CSP, and PRD; the results are represented by red diamonds in Figure 11b-d, respectively. An obvious positive correlation between Q4 thickness and subsidence rate is observed, that is, the greater the thickness, the higher the subsidence rate, especially in the LZP and PRD. In the CSP, however, a positive correlation occurs only when the thickness exceeds 50 m, and a negative correlation appears when the thickness is small. This phenomenon may be due to the combined impact of soft-soil compaction and groundwater recession.
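The thickness sectioning described above can be reproduced with a simple digitize-and-average step. The sketch below is an illustrative reconstruction with hypothetical input arrays, not the authors' code.

import numpy as np

thickness = np.load("q4_thickness.npy")  # kriged Q4 thickness (m), assumed file
rate = np.load("subsidence_rate.npy")    # subsidence rate (mm/yr)

# Section edges following the text: 5 m steps to 10 m, 10 m steps to 50 m,
# then 20 m steps up to 170 m.
edges = np.concatenate([np.arange(0, 11, 5),
                        np.arange(20, 51, 10),
                        np.arange(70, 171, 20)])
idx = np.digitize(thickness.ravel(), edges)  # section index per pixel
r = rate.ravel()
for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
    print(f"{lo:3.0f}-{hi:3.0f} m: mean rate {np.nanmean(r[idx == i]):5.1f} mm/yr")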
Quantitative Correlation between Subsidence and Land-Use Class
To quantitatively analyze the correlation between subsidence and land-use class, we generated a land-use classification map based on object-oriented segmentation, as shown in Figure 7. Four frequently used classes (agriculture, forest, urban, and water), marked by yellow, green, red, and blue shading, respectively, are isolated. Notably, the urban land-use class includes manmade constructions, such as buildings and bridges, as well as bare soil, owing to their similar spectral information. In contrast to previous classification results [28,33], the specific class of aquaculture is identified because of its wide extent and its important role in the economy of Guangdong Province, as mentioned above. The aquaculture class is indicated by purple shading in Figure 7. The overall accuracy (OA) of the classification is about 89.5% against a land-use reference map derived from [49,50].
Comparing the subsidence map (Figure 3) with the classification map (Figure 7), we noticed that areas with notable subsidence rates (>20 mm/yr) are mainly aquaculture and agricultural areas, indicated by purple and yellow shading in Figure 7, respectively. The representative subsidence areas in aquaculture areas are in the LZP (Figure 8) and PRD (Figure 9). For example, the relevant optical images of the subsidence bowls in Leizhou Bay and Zhanjiang Bay and of the other subsidence bowls along the coastline of the LZP are shown in insets A-E of Figure 8. In the PRD, the majority of the subsidence (>20 mm/yr) is concentrated in aquaculture areas, the total area of which reaches 772.2 km². In the CSP, however, the subsidence is mainly located in mixed agricultural and urban areas; see the yellow and red shading in Figure 7 for the two corresponding subsidence bowls, i.e., B1 and B2 in Figure 10. In addition to these representative areas, aquaculture areas along the coastline are also undergoing subsidence, albeit of different magnitudes (see insets A, B, and D in Figure 6). Beyond aquaculture and agricultural areas, notable subsidence is also observed in forest and urban areas, for example, discrete subsidence bowls in forest areas of track 463 (inset C in Figure 6) and elongated subsidence in the Nansha district of Guangzhou (inset A in Figure 9a), the petrochemical area of Daya Bay (inset E in Figure 6), and Macau airport (inset B in Figure 9a).
Although the correlation between subsidence and land use is obvious after qualitative comparison, to clarify this correlation we also give the histograms of subsidence rate for four land-use classes, aquaculture, agriculture, forest, and urban, as shown in Figure 12a. Notably, pixels in water areas are ignored because of low coherence. The highest subsidence (>20 mm/yr) is mainly concentrated in aquaculture areas, followed by urban areas, agricultural areas, and finally forest (see the red, black, blue, and green lines in the insets of Figure 12b, respectively). The percentages of this notable subsidence in the four adopted classes are 40.8%, 37.1%, 21.5%, and 0.6%, respectively. Hence, aquaculture-induced groundwater exploitation is the major cause of the notable subsidence in Guangdong Province. Moreover, we also calculated the percentage of different subsidence-rate sections in each land-use class (see Table 4). Slow subsidence mainly occurs in agricultural areas: for example, 44.3% and 34.5% of pixels fall within the subsidence-rate ranges of (0, 5] and (5, 10] mm/yr, respectively.
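A hedged sketch of how the per-class shares and the rate-section percentages (cf. Table 4) could be tabulated is given below; the file and column names are hypothetical, and the rate bins are illustrative.

import pandas as pd

df = pd.read_csv("pixels.csv")        # assumed columns: rate (mm/yr), landuse
df = df[df["landuse"] != "water"]     # water pixels ignored (low coherence)

fast = df[df["rate"] > 20.0]          # notable subsidence only
# Share of notably subsiding pixels falling in each land-use class
print((fast["landuse"].value_counts(normalize=True) * 100).round(1))

# Percentage of each rate section within every land-use class
df["section"] = pd.cut(df["rate"], [0, 5, 10, 20, 50, 200])
print((pd.crosstab(df["landuse"], df["section"], normalize="index") * 100).round(1))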
Causes of Subsidence in Guangdong Province
The causes of surface subsidence are generally summarized as natural compression and artificial activities [26]. The latter is the main cause of surface subsidence in many coastal areas and deltas and includes fluid and solid resource exploitation, underground infrastructure construction, and compaction caused by constructions overlying the Holocene sediment. In Guangdong Province, both causes are involved and are qualitatively analyzed in the following sections.
Subsidence Probably Caused by Groundwater Exploitation for Freshwater Aquaculture Use
Areas with subsidence rates > 20 mm/yr along the coast occur mainly in aquaculture areas, as mentioned in Section 6.2. The aquaculture areas in the LZP and PRD account for 53.3% of the entire coastal area (see the purple shading in Figure 7). The corresponding subsidence areas in the LZP and PRD are 112.3 km² and 772.2 km², with maximum subsidence rates of 58.0 mm/yr and 150.9 mm/yr, respectively. In addition to the subsidence bowls in the LZP and PRD, local subsidence bowls are also observed in aquaculture areas (see insets B, D, and G in Figure 6). The water sources of these aquaculture areas are mainly freshwater, to satisfy the water quality required by aquatic products such as eel and tilapia [56,57]. Moreover, the freshwater is exchanged frequently to improve the production and quality of the products. Hence, the very high demand for freshwater and the obvious correlation between subsidence and aquaculture land use suggest that the exploitation of groundwater for aquaculture may be the primary cause of subsidence within these areas. However, we cannot analyze the explicit relationship between subsidence and groundwater exploitation volumes because of widespread illegal exploitation. Fortunately, deformation time-series analysis is commonly adopted to analyze subsidence caused by groundwater exploitation [26,28,29,58,59]. From the results in the aquaculture areas (see Figure 8b,c and Figure 9b,c), a linear trend of the deformation time-series is identified. This lack of seasonal variability is likely caused by the exploitation of groundwater from the confined aquifer rather than the shallow aquifer, resulting in continual subsidence.
Subsidence Probably Caused by Groundwater Exploitation for Agricultural and Residential Use
Subsidence caused by groundwater exploitation for agricultural and residential use is mainly located in the CSP and the inland LZP [7,60]. Unlike the subsidence in the PRD, more concentrated subsidence at very high rates (maximum of 157.2 mm/yr) and over a large area (342.2 km²) is observed in the mixed agricultural and residential land of the CSP (see B1 in Figure 10). Within this subsidence bowl, manmade structures and agricultural land account for 58.6% and 30.3%, respectively, in addition to a small percentage (11.1%) of aquaculture land. Thus, groundwater exploitation for mixed residential and agricultural use may be the major cause of subsidence. Similarly, a linear rather than seasonally variable time-series is observed (see Figure 10b-e), indicating continued groundwater exploitation for residential use. In addition to the groundwater exploitation, the thick Quaternary deposits in the CSP (maximum thickness of 167 m) may accelerate the subsidence during groundwater exploitation. Furthermore, a wide range of slow subsidence is also observed in the agricultural land of the LZP; however, its deformation time-series presents an obvious nonlinear characteristic (see Figure 8e), reflecting the seasonal irrigation in the LZP.
Subsidence Caused by Land Reclamation
In addition to the exploitation of groundwater, another dominant cause of surface subsidence is the compaction of the Q4 deposits under overlying buildings and other infrastructure. These deposits possess a high pore ratio and low strength [61,62], resulting in subsidence through the compaction of soft soil and aquifers. Moreover, sea reclamation in the PRD is widespread (see the areas between the blue and red lines in Figure 9), resulting in the accumulation and compaction of soft soil and finally leading to subsidence. For example, the deposit thickness map of the PRD in Figure 2 shows that the subsidence is mainly concentrated in areas of soft soil, and the thicker the soft soil, the higher the subsidence rate, as shown in Figure 11d.
Subsidence Caused by Other Reasons
In Zhanjiang and Shenzhen cities, elongated subsidence bowls with maximum subsidence rates of approximately 26.6 mm/yr and 51.4 mm/yr, respectively, are observed. The bowls are located in urban areas with dense industry (see inset E in Figure 6 and the optical image in inset F of Figure 8), indicating groundwater exploitation for industrial use. Because industrial groundwater needs are high and persistent, a decline in groundwater level and continued subsidence are possible (see the corresponding deformation time-series of industrial areas in Figure 8d). In addition, metro tunnel excavation is a cause of subsidence in urban areas, for example, the excavation of subway line 4 in the Nansha district of Guangzhou (see inset A in Figure 9a).
In the LZP, discrete subsidence in large agricultural areas with subsidence rates < 20 mm/yr is caused not only by groundwater exploitation but also by natural compression, such as that due to the chemical erosion of peduncle [63].
In tracks 463 and 458, local subsidence bowls are found in forest areas and may be caused by slope instability (see insets C and F in Figure 6), potentially resulting in slow-moving landslides.
Conclusions
Through the use of two MTInSAR techniques, SBAS-InSAR and TCPInSAR, we obtained a large-scale deformation map for the coastal areas of Guangdong. We identified three notable subsidence areas, in the LZP, PRD, and CSP, with maximum subsidence rates of approximately 58 mm/yr, 150.9 mm/yr, and 157.2 mm/yr, respectively. The relative precision of the average subsidence velocity in the overlapping areas was generally below 3.2 mm/yr, except for several values ranging from 3.4 mm/yr to 4.3 mm/yr. The root-mean-square errors (RMSE) of the velocity and deformation time-series calculated between the InSAR and GPS results are 1.8 mm/yr and 7.6 mm, respectively.
In addition to the subsidence map, a land-use map identifying five classes (aquaculture, agriculture, urban, forest, and water) along the coastline was generated. The relationship between these two maps showed that high subsidence rates (>20 mm/yr) occur mainly in aquaculture areas, followed by urban, agricultural, and forest areas, accounting for 40.8%, 37.1%, 21.5%, and 0.6% of the total area with this notable subsidence rate, respectively. Subsidence due to land reclamation and manmade constructions was also monitored. Additionally, we found that subsidence mainly occurs in the Holocene deposits, and a positive correlation between subsidence and Holocene deposit thickness was obtained after interpolation of thickness from the selected borehole data. Beyond the average subsidence velocity, the linear deformation time-series in the aquaculture and agricultural areas indicate that groundwater is probably being exploited continuously and that subsidence will continue along the coastal areas if no measures are adopted. The results highlight aquaculture-induced subsidence, in addition to the more common agriculture-induced subsidence, along these coastal areas. | 2020-01-23T09:20:21.489Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "bb4781865ee8770159c7f7c69a20b9bd5ff26f9d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/12/2/299/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "76f009c26b28f810e483465f632732e5f6f2557d",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
32490186 | pes2o/s2orc | v3-fos-license | Thrombotic complications and tip position of transjugular chronic dialysis catheter scheduled into superior vena cava
Abstract Background: Catheter-related thrombotic complications (TCs) can occur during the long-term use of a chronic dialysis catheter (CDC), including fibrin sheath (FS), mural thrombosis (MT), venous thrombosis (VT), and intraluminal clots (IC), and these have not previously been reported with MRI. The aim of our study was to evaluate the determination of catheter tip position (TP) and the depiction of TCs in patients with a transjugular CDC scheduled into the superior vena cava using high-resolution magnetic resonance cholangiopancreatography (HR-MRCP) and T2-weighted imaging (HR-T2WI). Methods: The study protocol was approved by the local Research Ethics Committee. Informed consent was obtained from all patients. In total, 41 consecutively enrolled transjugular CDC patients with suspected catheter dysfunction were scanned with HR-MRCP and HR-T2WI. The distance from the top to the tip of the catheter and the presence and nature of catheter TCs were assessed by 2 experienced radiologists. Chest x-ray was taken within 1 to 2 days, and the CDC was withdrawn within 3 to 10 days, from those patients with TCs identified by HR-MRI. Results: A total of 38 subjects successfully underwent HR-MRI, including 13 normal and 25 with TCs (FS: n = 21, MT: n = 7, VT: n = 3, IC: n = 4). There was no significant difference between HR-MRCP and chest x-ray in catheter TP determination (P = .124). A normal catheter appeared as "double eyes" on HR-T2WI and "double tracks" on HR-MRCP. TCs appeared as follows: FS displayed as a "thin ring" (<1 mm) around the catheter, MT as patchy hyperintensity, and VT as a "thick ring" (>5 mm) on HR-T2WI. Unilateral IC appeared as a "single eye" on HR-T2WI and a "single track" on HR-MRCP (n = 3). Bilateral IC appeared as neither "eye" nor "track" (n = 1). Catheter withdrawal confirmed FS (n = 16), MT (n = 6), VT (n = 1), and IC (n = 4). Conclusion: HR-MRCP and HR-T2WI are promising methods for visualizing the TP and TCs in CDC patients, and are helpful in adjusting the treatment plan and avoiding the risk of pulmonary embolism.
Introduction
For a dialysis patient with end-stage renal disease under routine dialysis, a well-functioning vascular access is essential for an efficient hemodialysis procedure. An arteriovenous fistula is known to be the best blood access because of the possibility of long-term use and the low level of complications. [1] However, for those patients who are not good candidates for an arteriovenous fistula, or who require dialysis during maturation of the fistula, establishing an effective vascular access through the internal jugular vein is the best choice, given the easy visualization of the jugular vein by ultrasound and the direct connection to the superior vena cava and right atrium, [2] which is adequate to meet the dialysis requirement of at least 300 cc per minute through the catheter. [1,3] In the United States, over 60% of patients begin hemodialysis with placement of a transjugular chronic dialysis catheter (CDC). [4] In China, the number of patients receiving CDC hemodialysis is also increasing.
However, several complications can occur during the long-term use of a CDC, including sepsis, extravasation of infusions, pneumothorax, kinking, and thrombotic occlusion of the catheter, which can increase associated morbidity and mortality. [5] It is, therefore, crucial to diagnose these thrombotic occlusions effectively and to determine the cause and type of thrombotic complications (TCs). Types of thrombotic complications, which can occur separately or in combination, include fibrin sheath (FS) around the catheter, intraluminal clots (IC) inside the catheter, mural thrombosis (MT) adhered to the venous wall, and venous thrombosis (VT) completely blocking the vein. [5,6] For these catheter-related thrombotic occlusions, multislice spiral computed tomography venography (MSCTV) can provide excellent depiction of MT and VT, [7] but it cannot differentiate FS or IC, or distinctly display the double lumen of the catheter, because of artifacts caused by barium elements in the catheter. [8] In addition, MSCTV can cause contrast-induced nephropathy due to the use of contrast medium, especially in end-stage renal disease patients. To the best of our knowledge, visualization of these TCs by MRI has not been reported. Therefore, our purpose is to demonstrate the performance of high-resolution MRI (HR-MRI) without contrast medium in displaying the CDC's tip position (TP) and in the detection and differentiation of correlative TCs.
Currently, the HR-MRI technique is an increasingly essential method in the precision medicine era. In our study, to avoid blood interference and clearly present TCs, we used high-resolution T2WI with a turbo spin echo sequence (HR-T2WI), giving an empty appearance to the lumen of the catheter due to the relatively long echo time. High-resolution T2-weighted MRCP (HR-MRCP) with 3D-SPACE (3-dimensional sampling perfection with application optimized contrast using different flip angle evolutions) was used to display the catheter double lumen, TP, and any possible IC, which depend on the slow attenuation characteristic of water.
Chest x-ray cannot display any TC type; however, it can clearly visualize the catheter because the latter contains barium elements. [8] In addition, previous studies have maintained that chest x-ray is recommended as the first-line method for locating the catheter's TP. [9,10] Venugopal et al [11] also considered chest x-ray to be the gold standard for identifying catheter tip malposition. Therefore, in our study, chest x-ray was used as the reference standard for assessment of the tip location on HR-MRCP by measuring the distance from the top to the tip of the catheter.
Patients
This prospective study was approved by the institutional review board. Written informed consent was obtained from all participants prior to examination. From April 2014 to August 2015, all patients with transjugular CDC access and suspicion of catheter dysfunction (failure to attain a sufficient extracorporeal blood flow of ≥300 mL/min with a prepump arterial pressure more negative than −250 mm Hg for 2 weeks) [3,12] were consecutively recruited and underwent MRI and chest x-ray in our study.
A total of 41 subjects including 17 males and 24 females (mean age, 62.4 ± 14.2 years) underwent 1.5T MRI to identify the catheter TP and possible presence of TCs.
MRI
Before MRI, for all transjugular tunneled dual-lumen CDC (size: 14.5F; Covidien LLC, 15 Hampshire Street, Mansfield, MA) subjects, the previously installed heparin was withdrawn and 5 mL of physiological saline was then injected into each lumen. The injection was stopped the moment appreciable resistance was felt during the procedure. The patient was then examined in a standard clinical radiology suite with a 1.5T Magnetom Aera MRI scanner (Siemens Healthcare, Erlangen, Germany) equipped with the manufacturer's 20-channel head coil combined with a dedicated 18-channel abdominal body phased-array coil. First, a 3-plane localizer was performed for the neck and chest, followed by coronal and sagittal T2-weighted True FISP (TR/TE, 39.2/1.2; slice thickness, 4 mm; slice gap, 0.8 mm) and an axial T2-weighted HASTE sequence (TR/TE, 700/87; slice thickness, 6 mm; slice gap, 0.6 mm). HR-MRCP imaging with 3D SPACE was then obtained using a 3-dimensional navigator-triggered technique, and HR-T2WI with 2D turbo spin echo was acquired with a peripheral pulse wave gated technique under multiple breath-holds to display the catheter tip and any associated TCs. The scanning parameters are presented in Table 1. Finally, an axial T1-weighted VIBE sequence (TR/TE, 3.47/1.27; slice thickness, 3 mm; slice gap, 1 mm) was performed. The total MRI data acquisition time was approximately 20 to 25 minutes for each patient.
Chest x-ray
Standard posteroanterior chest x-rays were obtained within 1 to 2 days after the MRI scan using standard digital radiographic equipment (Axiom Aristos MX, Siemens Medical Systems, Forchheim, Germany) and storage phosphor plates (Kodak PQ Elite CR direct view, Carestream Health Inc., Rochester, NY) with the following parameters: tube voltage, 80 kV; tube-film distance, 1.2 m; and tube current-exposure time product, 3-5 mAs.
Image analysis
All chest x-ray and MRI images were transmitted to an imaging workstation (Advantage Workstation 4.4, GE Healthcare, Buc, France) for each patient. For the HR-MRCP images, maximum intensity projection was performed to show the TP and the double lumen of the CDC.
The magnification error on the chest x-ray and HR-MRCP images was estimated by measuring the distance between the tip of one lumen and the tip of the other lumen in 1 subject and comparing it to the actual distance, as illustrated in Figure 1. The tunneled dual-lumen catheter was passed through the skin at the outlet point and used to evaluate the reliability of HR-MRCP for showing the catheter tip location relative to chest x-ray. The lengths between the tip of one lumen and the tip of the other were measured on both HR-MRCP and x-ray (circles, Fig. 1B-D, F) and used to determine the relative magnification error between x-ray and HR-MRCP by comparison with the actual length (Fig. 1D). The magnification error (ME) was calculated as ME = (distance on chest x-ray − distance on HR-MRCP) / distance on HR-MRCP. [13] In our study, the length was 28 mm on x-ray, whereas the true length and the length on HR-MRCP were 25 mm, yielding a magnification error ME = (28 − 25)/25 = 0.12. The corrected length on x-ray (= measured distance on x-ray ÷ [1 + ME]) was then used as a reference standard for assessing the accuracy of the apparent tip location on HR-MRCP. The distance from the top point of the catheter's inferior edge to the tip of the catheter for each subject was measured on chest x-ray and HR-MRCP by the consensus of 2 experienced radiologists, and the accuracy of the tip location shown on HR-MRCP was then assessed by comparing the measured distance on HR-MRCP with the corrected distance on chest x-ray. Tip locations in the superior vena cava and right atrium were regarded as normal positions, meeting the dialysis requirement of 300 cc per minute. Patients with an unclear catheter TP on chest x-ray or motion artifacts on HR-MRCP and HR-T2WI were excluded. Finally, the presence of IC and gas was predicted based on hypointensity in the catheter lumen on HR-MRCP. IC was assessed by HR-T2WI combined with HR-MRCP, and the other TCs (FS, MT, and VT) were evaluated on HR-T2WI by the consensus of 2 experienced radiologists for each subject. Schematic pictures are shown in Figure 2. For those patients with TCs identified by HR-MRCP and HR-T2WI, the CDC was removed within 3 to 10 days after MRI. Patients without TCs on HR-MRCP combined with HR-T2WI continued dialysis after adjustment of the catheter tip location and direction. The evaluation of TCs on HR-MRI and the measurement of distances on HR-MRCP and chest x-ray were performed by 2 radiologists in a blinded and randomized reading.
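The magnification correction amounts to a single rescaling; the short sketch below reproduces the paper's worked numbers (28 mm on x-ray, 25 mm true/HR-MRCP length). The function name is ours, for illustration only.

def corrected_length(measured_xray_mm: float, me: float) -> float:
    # Scale an x-ray measurement back to true size.
    return measured_xray_mm / (1.0 + me)

me = (28.0 - 25.0) / 25.0          # magnification error = 0.12
print(corrected_length(28.0, me))  # 25.0 mm, matching HR-MRCP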
Statistical analysis
A paired-samples t-test was used to compare the mean differences between measurement data groups; P > .05 was considered to indicate no significant difference. In addition, mean ± standard deviation (SD) was used for measurement data and constituent ratio for count data. All statistical analyses were performed using commercially available software (SPSS for Windows, version 13.0; SPSS, Chicago, IL).
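For illustration, the paired-samples t-test could be run as follows; the distances are invented placeholders, not the study data.

from scipy import stats

# Hypothetical paired top-to-tip distances (mm) for the same subjects,
# on corrected chest x-ray and on HR-MRCP.
xray = [182.0, 175.5, 190.2, 168.4, 177.9]
mrcp = [181.2, 176.0, 189.5, 167.8, 178.3]
t, p = stats.ttest_rel(xray, mrcp)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 => no significant difference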
Results
A total of 38 CDC patients' images were evaluated after successful x-ray, HR-MRCP, and HR-T2WI examinations. Three patients' images were excluded: 1 because of motion artifacts from irregular respiratory motion, and 2 because of intravenous artifacts from an incomplete flow-void effect. In the 38 subjects, the reasons for renal function failure were as follows: chronic … (see Table 2 and Figure 3). There was no significant difference in the mean catheter length derived from HR-MRCP and x-ray (P = .124) (Table 2, Fig. 4), although the lengths were anomalously short on HR-MRCP compared to x-ray in 3 subjects (Fig. 4A). In addition, 7 patients had abnormal catheter tip locations as shown by HR-MRCP (Table 3). The accuracy of these determinations was shown to be 100% in all cases based on x-ray. On HR-MRCP, the double-lumen catheter structure was clearly displayed in all 38 subjects (Fig. 4B and C). In addition, intraluminal gas was found at the top of the catheter in 2 subjects.
In our study, TCs were not found in 13 subjects, whose catheters displayed normal hyperintensity with the "double eyes" sign on HR-T2WI and the "double track" sign on HR-MRCP (Fig. 4B and C). For thrombotic catheter complications, the "single track" sign on HR-MRCP and the "single eye" sign on HR-T2WI were found when IC occurred in 1 lumen (Fig. 5), and neither the "track" sign on HR-MRCP nor the "eye" sign on HR-T2WI could be seen bilaterally at the level of the clot when both lumens were occluded (Fig. 6). In addition, FS appeared as a "thin ring" sign (<1 mm) surrounding the catheter (Fig. 6), MT showed patchy hyperintensity, and VT presented as a "thick ring" (>5 mm) on HR-T2WI.
Among the 25 subjects with TCs identified by HR-MRCP combined with HR-T2WI, there were 21 patients with FS (55.3%), 7 with MT (15.8%), 3 with VT (7.9%), and 4 with IC (10.5%) (Table 4). TCs were confirmed after removal of the catheter in 21 patients, yielding findings of FS (n = 16), MT (n = 6), VT (n = 1), and IC (n = 4) (Table 4). As shown in Table 4, the negative findings for the different types of TCs on HR-MRCP and HR-T2WI were confirmed after removal of the catheter, with the exception of 1 false negative case in which removal of the catheter revealed FS. In these 38 subjects, only 6 patients had dialysis stopped because of resistance in 1 lumen when withdrawing heparin or injecting saline (confirmed as 3 FS and 3 unilateral IC), and 1 subject was occluded in both lumens (confirmed as bilateral IC) before the MRI scan. Unfortunately, 2 patients died of pulmonary embolism after withdrawal of the catheter.
Discussion
Recently, many types of catheter have been inserted into the superior vena cava or the right atrium, including dialysis catheters, peripherally inserted central catheters, and central venous catheters, whose purposes are to establish vascular access, administer therapy, and improve the quality of life. [5] However, TCs can arise from catheter placement, especially for a CDC, because of long-term emplacement and the relatively large size of the catheter. These complications include FS, IC, MT, and VT, [5,6] and might lead to catheter dysfunction and even pulmonary embolism. Therefore, it is crucial to have precise imaging of these complications with MRI, as it is beneficial in decreasing the occurrence of pulmonary embolism and adjusting the treatment plan. Accurate identification of IC requires accurate display of the catheter's tip location on HR-MRCP. Therefore, in our study, HR-MRCP was used to determine the tip position. HR-T2WI was used to display FS, MT, and VT, whereas HR-MRCP combined with HR-T2WI was used to identify IC.
[Figure caption: Fibrin sheath appeared as a ring of hyperintensity (long arrow, B) surrounding the catheter (hypointensity) on HR-T2WI. The saline in the lumens showed hyperintensity like "2 eyes" on HR-T2WI. Intraluminal clots showed low signal with a "single eye" when blood clots filled 1 lumen (long arrow, C) and no "eye" sign when clots were in both lumens. Mural thrombosis appeared as patchy hyperintensity adhered to the vessel wall without completely occluding the vein (long arrow, D). Venous thrombosis appeared as a thick ring of high signal on HR-T2WI (long arrow, E) and occluded the whole vein. HR-T2WI = high-resolution T2-weighted imaging.]
[Table 2: Distance from catheter's top to tip on HR-MRCP and x-ray and CDC's tip location on HR-MRCP compared to x-ray by the 2-sample paired test.]
Comparison of chest x-ray and HR-MRCP
Venous blood contains paramagnetic deoxyhemoglobin and has a short T2 relaxation time in comparison with water. [14] This can cause T2-weighted signals to decrease because of the relatively short T2 and the local magnetic field asymmetry [15] in a small lumen. Thus, motionless blood in catheters presents as hypointensity on HR-MRCP and HR-T2WI. Prior to MRI, 5 mL of 0.9% saline was infused into each catheter lumen to prolong the T2 relaxation time and aid in locating the catheter tip, detecting IC, and identifying the double-lumen structure on HR-MRCP and HR-T2WI. In our study, we were able to infuse saline into 1 or both catheter lumens in 37 patients with recognizable tip positions on HR-MRCP. However, there were 3 patients for whom the distance from the top to the tip on HR-MRCP was markedly less than on chest x-ray. In 1 subject with bilateral IC, the discrepancy was due to the lower water content of the clot relative to venous blood. For the other 2 patients, the shorter apparent distance may have resulted from backflow of venous blood into the catheter after saline injection, leading to signal loss on HR-MRCP due to the shorter T2 of venous blood. Thus, it is difficult to differentiate IC from intraluminal blood, especially at the tip of the catheter. However, IC should be suspected if there is resistance when injecting saline, with intraluminal venous blood being more likely when there is no resistance.
Nevertheless, comparing the average distance from the top to the tip of the catheter in all 38 patients, there was no significant difference between HR-MRCP and chest x-ray (P = .124), indicating that HR-MRCP is a reliable method for visualizing the catheter tip location. Hence, HR-MRCP might be an alternative to chest x-ray for delineating the TP without any radiation. Moreover, it is essential to determine the presence of clots in 1 or both catheter lumens, and HR-MRCP has a distinct advantage over x-ray in its ability to detect IC. In our study, 7 patients presented with abnormal positions on HR-MRCP, and the accuracy of these determinations was confirmed in all cases by chest x-ray.
In our research, a normal, non-occluded catheter appeared as hyperintensity in the form of the "double tracks" sign on HR-MRCP and "double eyes" on HR-T2WI, due to saline filling the 2 catheter lumens. An IC presents with more hypointensity than venous blood, because it contains less water and has a shorter T2 relaxation time. Therefore, a blood clot in 1 lumen appeared as a "single track" on HR-MRCP and a "single eye" on HR-T2WI, and the absence of any "track" sign on HR-MRCP and "eye" sign on HR-T2WI indicated clots in both lumens. In our results, IC was found on HR-MRCP and HR-T2WI in 4 subjects, located near the tip and completely obstructing 1 or both lumens of the catheter. Such clots result from the coagulation cascade; they consist of abundant red blood cells and a few platelets [16] and account for 5% to 25% of all catheter occlusions. [5] Intraluminal gas also presents with no signal, due to its short T2 relaxation time, and will likewise appear as a "single track" on HR-MRCP and a "single eye" on HR-T2WI. However, intraluminal gas is only located at the top of the catheter, because this is the highest position when the patient is lying on the MRI table; the position of the hypointensity in the catheter is therefore a vital point of differentiation between gas and IC.
[Table 3: Abnormal tip location findings on HR-MRCP confirmed by chest x-ray.]
Catheter-associated FS is the commonest reason for CDC failure and can be composed of thrombus, fibroblasts, endothelial cells, and collagen, forming a layer about 1 mm thick around the outside of the catheter. [17] In our study, the thickness of the FS was less than 1 mm on HR-T2WI in all 21 subjects identified with this TC. The sheath covers the inlet and outlet holes of a catheter, acting as a one-way valve, interfering with catheter function and preventing effective hemodialysis. [16] Oguzkurt et al [18] found FS formation in up to 76% of short- or long-term central venous catheters by pull-back venography, and Shanaah et al [19] reported an FS incidence of about 47%. In our data, the FS incidence was 55.3%, presenting as high signal like a "thin ring" (<1 mm) surrounding the catheter; the hyperintensity may be due to extensive edema in patients with FS. [20] Catheter-related thrombosis is a relatively common complication in end-stage renal disease patients with a CDC and includes VT and MT, resulting from coagulation cascade activation and platelet aggregation on the side of a vessel. [21] Most of these thromboses occur within the first 100 days after catheter insertion. [22] VT refers to a thrombus that develops near the catheter and occludes the vein, whereas MT is a blood clot that adheres to the vessel wall and can occlude the catheter tip but does not completely occlude the vein. [5] Most patients with thrombosis are asymptomatic: Niers et al [23] found that approximately 14% to 18% of patients have evidence of thrombosis without clinical symptoms, and symptomatic thrombosis occurs much less frequently, in approximately 5% of cases or less. [24] In our 41 patients with suspected catheter-related complications, 10 cases of thrombus were found on HR-T2WI, appearing as patchy hyperintensity in the case of MT in 7 patients and as a "thick ring" (>5 mm) for VT in 3 subjects.
Except for 1 false negative FS patient, the negative findings on HR-MRCP and HR-T2WI in our study were confirmed when the catheter was withdrawn. The single false negative was a case in which the FS was completely obscured on the scan by a mural thrombosis filling the whole vein; the fibrin sheath was revealed because it adhered to the catheter, whereas the mural thrombosis did not. Our experience thus shows that HR-MRI is reliable for assessing patients without catheter-related thromboses and can safely be used to guide adjustment of the catheter TP and direction. In our study, 13 catheters were adjusted after HR-MRI revealed no thrombotic complications, and dialysis through them was continued. Diagnosis of TCs in CDC patients by HR-MRCP combined with HR-T2WI was also superior to diagnosis at surgical withdrawal, because some thromboses are not retrieved when the catheter is removed.
Effective management is needed to improve the survival and quality of life of CDC patients with TCs. [25,26] For MT, VT, and IC, thrombolysis with urokinase or recombinant tissue plasminogen activator (rTPA) can be undertaken to restore adequate blood flow in most patients. [27,28] For FS, catheter exchange should be performed through interventional treatment to continue dialysis. [28] Therefore, in our view, it is very important to precisely evaluate the type of TC by HR-MRI, as the types cannot all be classified by MSCTV. Unfortunately, of the 25 subjects with TCs, 2 patients died of pulmonary embolism ultimately caused by FS and thrombus after catheter removal following treatment.
There are several limitations in our study. First, HR-MRCP is not always successful due to the interference of venous blood. The …
[Table 4: Types of thrombotic complications identified on HR-MRI and after withdrawal of the CDC.]
In conclusion, end-stage renal disease patients with a CDC placed via the jugular vein can develop several types of TCs, which can occur separately or in combination. HR-T2WI combined with HR-MRCP is a safe, non-invasive, relatively inexpensive, and reliable diagnostic method both for visualizing the position of the catheter tip and for identifying related complications in these patients. These MRI techniques do not use gadolinium contrast or expose the patients to radiation as a CT scan does. The resulting ability to differentiate the different types of thrombotic complications more effectively is helpful in avoiding pulmonary embolism and adjusting the management plan for the patient. | 2018-04-03T04:12:19.796Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "282e9fb6fa62a1dc6662b77e2f7dd726933df858",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.1097/md.0000000000007135",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "282e9fb6fa62a1dc6662b77e2f7dd726933df858",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
113815556 | pes2o/s2orc | v3-fos-license | A Conceptual Framework of Green Certification Impact On Property Price
Green building is one of the sustainability dimensions of the built environment. The issues of green building and its impact on society have been increasingly discussed. Green certification is one of the components in measuring sustainable development and plays an important role as an assessment system for an individual building's performance. The question arises whether the market understands and recognizes green certification. The objectives of this research are to discuss issues pertaining to green value and the relationship between green certification and property price. The research emphasizes understanding property attributes, focusing on green certification and its impact on property price. Among the attributes identified are structural characteristics, location and neighborhood, and time attributes. Thus, this paper discusses the literature on green development and its significant impact on the property market in terms of price and value. Green building development across the country could be classified as another sector of the property market that has a significant impact on the real estate industry. As a result, a conceptual framework for assessing the impact of green certification is suggested, providing significant input for developing a hedonic pricing model for green buildings. This research may contribute to extending the body of knowledge in the area of green development, and the suggested input places emphasis on new valuation techniques for valuing green building properties.
Introduction
Over the past decades, significant efforts have been made to promote sustainability in property development. The introduction of the National Green Technology Policy (NGTP) in 2009 indicated the government's concern over the global issues of sustaining the environment and resources. Green building certification embarked on this initiative of sustainable development with the introduction of the Green Building Index (GBI) in 2009 as the first green building assessment in the Malaysian property market.
Other internationally accepted rating systems have also been used to assess building sustainability in Malaysia, for example, LEED, Green Mark, BREEAM, and Green Star.
One of the most effective ways of encouraging sustainable building development is a rating system, which provides a means for a building owner or occupier to compare buildings' greenness [6]. Launched in 2009, in line with the NGTP, the GBI was the first green certification established as a building rating system in Malaysia. The system provides a systematic assessment for evaluating building performance against sustainability criteria.
According to the Brundtland Report [23], the accepted meaning of sustainable development is development that satisfies the needs of the present without compromising the ability of future generations to satisfy theirs. This definition refers to three domains, namely the economy, the environment, and social perception, together known as the Triple Bottom Line concept, as illustrated in Figure 1. Sustainable development can only be achieved if all three domains are integrated and treated in equal measure. Thus, the development of green building plays an important role in achieving sustainability objectives. The benefits of green building have been debated extensively in the literature; the beneficiaries include owners, occupants, the community, appraisers, and lenders. One of the most significant benefits discussed is the cost saving offered by green buildings [5,10,20], which encourages owners to grasp the impact of sustainability on property value.
As a result of the growing interest in green building development, a considerable amount of research has been done to prove the benefits of going green in an economic context. The economics of green building are debated, and a growing number of real estate studies have attempted to answer the question. Many studies reveal significant results, especially in mature markets such as the United States, Australia, and the United Kingdom. However, little research has been done on the property market in Malaysia, as it is an emerging market for green building. It therefore seems crucial to identify the impact of green certification on property market prices. In addition, this research suggests a conceptual framework of the impact of green certification on property price.
Issues on green certification
The introduction of green certification and its impact are important issues that should be addressed. The sensitivity of a green label is uncertain and unpredictable, and results may vary according to the local market [7]. In fact, most studies on the impact of green labeling, certification, or rating focus on commercial property [3,14,25]. Consequently, there is a lack of significant studies in the residential property market, owing to the slower emergence of value implications.
From the previous research addressing the impact of greenness on property value, the literature identifies three arguments for the need to evaluate the impact of green certification on property price: a) Assessments of sustainable buildings' property risks and financials are often misleading, resulting in individual property assets being mispriced, which creates investment opportunities for "enlightened" investors [17]. This statement is supported by the term 'overlooking green features' introduced in the APB Valuation Advisory report, according to which most appraisers and valuers fail to account for green features when assessing the property market [7]. b) A building's value may vary, and market sales information is based on standard approaches to building appraisal regardless of performance-based cost savings [9]. For most consumers and some homebuilders, the relationship between quality home construction and sustainability is often misunderstood, leading to misinterpretation by market players [9]. c) In valuing low-energy houses, estimating the expected price or rate of return is challenging for appraisers because of the lack of reliable modelling tools and of historical, statistical, and case-specific data [11]. Most research on the impact of green certification involves either a mandatory or a voluntary program [2,3,7,8,9,24]. However, limited studies have been done on green residential properties, motivating research on the greenness effect in the residential market.
Although little research has been done, the question arises whether green residential properties in Malaysia face a situation of being 'underpriced or overpriced'. The argument is whether green building development in Malaysia is well recognized, and whether the establishment of green building rating systems has had a substantial effect in promoting sustainability. Thus, the interest of this research is to investigate the impact of green certification on the residential market, with green features as another important variable in determining property price. By recognizing greenness as another factor in assessing the property market, a model will be developed to help property market players understand the role of green features, as sketched below. The research will use a comparative study to compare prices between green residential buildings and conventional residential buildings.
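As a hedged illustration of the kind of hedonic pricing model this framework points toward, the sketch below regresses log price on structural, locational, and time attributes plus a green-certification dummy, whose coefficient approximates the percentage price premium. The dataset, column names, and specification are assumptions, not results of this research.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("transactions.csv")  # hypothetical sales records

# Semi-log hedonic specification with neighborhood and sale-year fixed effects
model = smf.ols(
    "np.log(price) ~ floor_area + bedrooms + building_age"
    " + C(neighbourhood) + C(sale_year) + green_certified",
    data=df,
).fit()
print(model.summary())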
Sustainable and green building
One of the important aims of sustainable development is to reduce the impact of the built environment on the natural environment. This goal has led to the introduction of the term 'green building' in real estate. Green building is also known as the foundation of sustainable construction development [22]. The issue of sustainability remains a controversial and much disputed subject within the field of construction.
RICS [20] defined a sustainable building as a green building rated by rating assessment tools. Green building, also known as "sustainable building", refers to building design and construction using methods and materials that are resource efficient and that will not compromise the health of the environment or the health and wellbeing of the building's occupants, construction workers, the general public, or future generations [16]. Green building is thus a mechanism for achieving sustainable development objectives.
The three pillars of sustainable development according to the Triple Bottom Line concept are characterized as follows [17]:
- Ecological sustainability: depends on materials, energy, noise emissions, the amount of waste products, the amount of traffic, the separation and disposal of old building materials, land use/pollution, climate change and biodiversity; it means reducing the area used, conserving resources, and avoiding deleterious materials and emissions.
- Social sustainability: based on social aspects such as feelings of wellbeing, aesthetics, health and comfort, security and user satisfaction, an appropriate living environment, and social integration.
- Economic sustainability: minimizing lifecycle costs and retaining value (material, goods and capital), together with functional-aesthetic aspects.
According to [4], a system can be regarded as sustainable when quality remains the same or increases, whereas if quality declines the system can be regarded as unsustainable. Assessment tools therefore exist to provide a better understanding of how sustainability is measured.
Green building assessment tools include LEED, BREEAM, GBCA, Green Star, CASBEE, Green Mark, GBI and GreenRE. Since the introduction of LEED in the US, other countries have developed their own green rating tools. There is a small body of research evaluating the performance of, and comparisons between, these tools. To date, studies have discussed green rating tools by comparing international tools such as LEED, BREEAM, Green Star and CASBEE, as well as the characteristics of each tool.
Green rating systems or certifications are intended to offer market participants an understandable label that expresses a building's sustainability attributes [7]. Green certification indirectly assists market players in assessing the benefits of green buildings compared with non-certified buildings. Any potential improvement should be assessed to determine whether it could create a differential in the operational performance, overall performance and/or risk characteristics of the property, and whether this differential constitutes a market advantage or disadvantage [7].
The purpose of green building rating tools is to promote and increase awareness of sustainable development among industry players. The variety of green building rating tools means that the effort to build greener structures that reduce environmental impact, increase social benefits and optimize economic returns poses a crucial challenge for green rating service providers. With multiple rating tool options and alternatives offered in green property development, whether local or international, the United Nations agenda of protecting the world from environmental threats can be advanced.
Value and price
Value can be distinguished into two terms: value-in-use and value-in-exchange. Value-in-exchange is essentially the price of a property in a purchase transaction between seller and buyer. Value and price of property are therefore closely related. When assessing value, sale transactions or prices are used to determine market value, as described in the definition of property market value: 'the estimated amount for which the property should exchange…'. As such, in determining a price, an appraiser or valuer should also take into consideration the micro and macro factors that may affect the property's value.
Recently, researchers have shown increased interest in the green value concept, which integrates a building's greenness and its value. RICS [20] defined green value for office buildings as a beneficial outcome for buildings that practise sustainability. The concept is twofold: first, the building fulfils sustainable development terms; second, its market value, i.e. the sale value and rental value, increases [5].
The importance of value in green development has been described in various studies. The Vicious Circle of Blame introduced by Cadman [28] outlined the relationships between stakeholders in property development that cause issues in implementing sustainable development. The stakeholders include owners, investors, developers and designers. However, [12] modified the figure by adding appraisers and valuers as further important roles in determining sustainable development (as illustrated in Figure 2). Valuers and appraisers are significant in determining green value and in leading the future direction of green development. Thus, to understand the green building market, green attributes should be one of the factors considered in the valuation process. House values are affected by various factors, including controlled and uncontrolled variables such as the characteristics of the house itself. To foresee the price effect, the marginal price effect of a single hedonic characteristic (green certification) and aggregate market outcomes can be demonstrated in a partial equilibrium framework. A large and growing body of literature has established the relationship between greenness and the property market. Most studies highlight green certification and use the hedonic model to assess the effect on property prices. Studies by several researchers [2,3,8,14,24] using hedonic models focus on local green ratings such as Green Mark, LEED and Energy Star, together with property transaction prices.
Hedonic price theory
Rosen [21] formalized the role of housing attributes in the consumer's decision-making process. Rosen's analytical framework starts from the assumption that any good or service consists of a variety of utility-bearing characteristics (for example z1, z2, ..., zn) that make up the hedonic price function. In the context of office rent determination, these characteristics consist of various structural, locational and lease characteristics that enter the empirical model as independent variables. The empirically determined hedonic prices are indicative of an implicit market, so that demand and supply functions can be derived for both short-run and long-run competitive equilibria [8].
A statistically based comparison method is used to identify the most probable prices for a proposed development; it can thus compare property prices within existing housing areas across selected geographies [29]. This can be done using the hedonic approach to determine the hedonic prices of selected property attributes. The hedonic model is used to understand the possible relationship between property attributes and property price. Hedonic modelling can be used by developers, corporate real estate groups, owners and operators to determine which building characteristics add significant value to a potential transaction price. The results produced by a hedonic model can provide important information for future decisions and help each party involved in property development better understand the economics surrounding each asset, thus improving asset underwriting [30].
Hedonic pricing is considered a willingness-to-pay technique; it tries to capture the fraction of property prices that derives from specific housing attributes [3]. Structural characteristics such as square footage, lot size, age and number of bedrooms have a positive effect on selling price, although the magnitudes may vary across different regions of a country [3]. Internal features such as the number of bathrooms, a fireplace and air conditioning are generally regarded as adding a premium to housing prices. One advantage of the hedonic approach is that it can fine-tune for quality changes in individual properties, so it can be used to measure the price of greenness.
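To make the mechanics concrete, the sketch below fits a log-price hedonic regression with a green-certification dummy on synthetic data. It is illustrative only: the column names, covariates and the 5% 'green premium' built into the simulated prices are assumptions for this example, not the specification or findings of the studies cited above.

```python
# Minimal hedonic pricing sketch (illustrative only).
# Assumes a pandas DataFrame with hypothetical columns: price,
# floor_area, age, bedrooms, bathrooms, and a 0/1 green_certified
# dummy. None of these names come from the cited studies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_hedonic_model(df: pd.DataFrame):
    # Log-transform price so coefficients read approximately as
    # proportional effects on price.
    df = df.assign(log_price=np.log(df["price"]))
    model = smf.ols(
        "log_price ~ floor_area + age + bedrooms + bathrooms"
        " + green_certified",
        data=df,
    ).fit()
    return model

# Synthetic example data with a 5% green premium built in.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "floor_area": rng.uniform(60, 250, n),
    "age": rng.integers(0, 40, n),
    "bedrooms": rng.integers(1, 6, n),
    "bathrooms": rng.integers(1, 4, n),
    "green_certified": rng.integers(0, 2, n),
})
df["price"] = np.exp(
    11 + 0.004 * df["floor_area"] - 0.005 * df["age"]
    + 0.03 * df["bedrooms"] + 0.05 * df["green_certified"]
    + rng.normal(0, 0.1, n)
)
result = fit_hedonic_model(df)
print(result.params["green_certified"])  # should recover ~0.05
```

Because the dependent variable is log price, exp(coefficient) − 1 on the certification dummy approximates the percentage price premium attributable to certification, holding the other attributes fixed.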
Conceptualising the framework of green certification impact
Regarding the issue of determining the impact of green certification on the residential property market, it appears that greenness and property price may have a significant relationship. Greenness or sustainability is a quality measure of a building, and these criteria can serve as property attributes in assessing market price. In assessing sustainability, different green rating tools establish different features owing to factors such as laws and regulations, climate, property types and geography [1]. GBI was developed by PAM (the Association of Architects Malaysia) with six assessment criteria for certification: energy efficiency, indoor environmental quality, sustainable site planning and management, materials and resources, water efficiency, and innovation (Table 1), as well as four categories of certification according to total point score (Table 2). These criteria represent the green features of a green building as well as its green attributes. The variables are identified as illustrated in Figure 3. The independent and dependent variables represent the cause and effect of a phenomenon. From this illustration, green attributes can therefore affect the price of a property. Further investigation should establish whether green attributes alone affect the price, or whether other property attributes are also factors affecting it. One question that needs to be asked is whether GBI has a significant impact on the property market in Malaysia.
In neo-classical theory, house prices result from demand and supply in the housing market. The factors influencing prices and value are mainly property-specific factors and market-related factors [27]. Wyatt further explained that property-specific factors are the main physical characteristics of the property, such as size, age, condition, appearance, legal status and location, whereas market-related factors refer to the property market as a whole. Market-related factors, which include household income, employment rates, consumer spending and the cost of finance, are difficult to control, unlike property-specific factors.
Some recent studies have begun to use dummy attributes to assess the green effect on property prices. Based on several studies, the structural attributes used are building/floor size, number of storeys and age [2,3,13,14,18,24]. Other commonly used attributes are location and neighbourhood (for instance, view and facilities) [2,8,25], and time attributes such as the year in which the transaction took place [2,3,8,13,24].
To explore the price of green-certified property, the abovementioned property attributes are suggested as controls for differences that may exist between samples and to reduce inconsistency in the results. A conceptual framework (Figure 4) was introduced to answer the question. Further research will apply a case study approach to explore in depth green certification and its effect on price. The approach used in this research involves formulating empirical questions for valuers and appraisers, which requires analysing and investigating the emerging green building market. Such research is necessary when various questions arise that cannot be clearly explained without research drawing on multiple sources [31]. Green buildings in several states will be selected as case studies and compared with non-certified buildings, in order to generalize the findings to Malaysia as a whole.
Summary
This research is significant because it addresses the issues outlined above through a comprehensive study of green certification. At present, work on the impact of green development focuses primarily on cost-benefit analysis. This study investigates price differentials between GBI-certified and non-certified properties in Malaysia, with particular emphasis on prices in the residential market. In short, the primary objective of this research is to develop a pricing model to analyse the impact of green certification on the property market using the hedonic approach. The findings should be highly beneficial to property market players and support the successful implementation of green building. They should also help people better perceive the value of green and promote sustainability in the built environment. | 2019-04-15T13:04:38.684Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "7adcec9e05bd7287a807f1c89c81f9f9c4a90298",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/29/matecconf_ibcc2016_00033.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "88126cfebefe25a3c51f817e20717be859c38430",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
216408232 | pes2o/s2orc | v3-fos-license | A Retrospective Comparison of Early Postoperative Pain after the First Vs Second TKA in Scheduled Staged Bilateral TKA
Comparing the postoperative periods following the first and second TKA, there were no significant differences in WBS at 24, 48, and 72 h postoperatively. The frequency of requests and the total number of requests for analgesics did not differ between the first and second TKA at any time point. The total number of analgesic requests exhibited a moderately strong, positive correlation between the first and second TKA (p < 0.001, r = 0.623). Patients' WBS scores and requests for analgesics showed a moderately strong, positive correlation, but only at 24 h following the second TKA (p = 0.002, r = 0.567). After both TKAs, patients required a median of 1 day to resume walking.
INTRODUCTION
The debate over the superiority of simultaneous versus staged Total Knee Arthroplasty (TKA) for patients with bilateral knee Osteoarthritis (OA) remains controversial with regard to perioperative complications and economic factors [1-7]. Ritter et al. (1997) [5] reported that neither simultaneous nor staged bilateral TKA was clinically superior. Previous studies evaluated pain mainly using subjective measures such as a simple Visual Analog Scale (VAS) (0 = no pain, 100 = worst pain) [13].
In this study, we objectively assessed pain using the number of analgesic agent requests issued by each patient during the first 3 days after surgery and the time needed to resume walking, on the premise that more severe pain delays the resumption of walking. These assessments can be used alongside conventional qualitative scales such as the Wong-Baker FACES pain assessment scale (WBS) (0 = no pain; 1 = hurts a little bit; 2 = hurts a little more; 3 = hurts even more; 4 = hurts a whole lot; 5 = hurts the worst) [13]. The purpose of this study was to subjectively and objectively compare pain intensity during the early postoperative periods following the first and second TKA surgeries. The results may inform pain management strategies for patients who undergo staged, bilateral TKA.
MATERIALS AND METHODS
Informed consent was obtained from all patients. Approval for this study was granted by the Research Board of the Healthcare Corporation Ashinokai, Gyoda, Saitama, Japan (ID number: 2018-12). Potential participants were patients with OA who underwent bilateral, scheduled, staged TKAs with either a posterior cruciate ligament (PCL)-retaining (PCLR) design or a PCL-Substituting (PCLS) design between July 2009 and October 2018. All surgeries were performed using the LCS ® Total Knee System (DePuy, Warsaw, IN, USA) under general anesthesia.
Each patient individually selected the side for their first TKA. The timing of the second TKA was also determined by the patient and depended on his or her perceived ability to tolerate postoperative pain and activity limitations. The order of implant use was quasi-randomized: patients with even medical record numbers received a PCLR implant, and patients with odd medical record numbers received a PCLS implant during the first TKA. The second TKAs were performed using the implant design not selected for the first TKA. Unexpectedly, the manufacturer discontinued the PCLR design at the end of January 2013; therefore, all subsequent TKAs used the PCLS design.
Surgical Procedure and Postoperative Rehabilitation
A single surgeon (YI) performed all procedures using a standardized technique, as described previously [16]. The femoral components were fixed without cement; however, cement was used for tibial component fixation in all patients. No patellae were resurfaced. Proper intraoperative anteroposterior and abduction/adduction stability was confirmed manually, although it was not quantified intraoperatively. All patients were able to achieve full extension, and at least 90° of flexion, intraoperatively in the supine position. No analgesics (intraoperatively) or peripheral nerve blocks (postoperatively) were used for pain control.
Postoperatively, all patients received a bulky compression dressing and intra-articular drains that were usually removed at the first dressing change. Full weight-bearing was allowed, as tolerated, using a cane, on postoperative day 1 under the supervision of a therapist, and exercises were allowed. Passive Range Of Motion (ROM) exercises were performed every day beginning 1 week after surgery. Patients received at least 2 h of daily physical therapy consisting of isometric exercises, passive ROM, active-assisted ROM, quadriceps and hamstring strengthening, gait training, and stair ascension and descent. All patients received perioperative prophylactic antibiotics and analgesics for pain. No patient received anti-thrombotics, although all patients received a venous foot pump device for approximately 2 h postoperatively.
Outcomes Evaluation
We used the WBS to rate postoperative pain severity because it correlates well with VAS scores [13]. We compared WBS scores following the first and second TKA at 24, 48, and 72 h. These time points were selected according to previous studies indicating that 44%-57% of patients are awakened by pain during the first 3 postoperative days [17,18]. We asked patients to determine their WBS scores according to the most intense pain felt at each time point, regardless of the position of the knee. A combination of diclofenac sodium (50 mg; suppository), ketoprofen (15 mg; intramuscular injection), and pentazocine (15 mg; intramuscular injection) was used for postoperative analgesia, according to each patient's request. Diclofenac sodium and ketoprofen are nonsteroidal anti-inflammatory drugs that inhibit prostaglandin production at nociceptors. Pentazocine is a nonnarcotic analgesic that suppresses central nervous system conduction. We compared the frequencies with which each patient requested these analgesic agents, and the time before the resumption of walking, during the postoperative periods (after the first and second TKAs) to objectively evaluate post-TKA pain. We also generated correlation coefficients for the number of analgesic requests and WBS scores at each time point following the first and second TKA.
Statistical Analyses
Normality assumptions were rejected by Q-Q plot, the Kolmogorov-Smirnov test, and the Shapiro-Wilk test. We used Wilcoxon's signed rank test for between-group comparisons of continuous variables and Spearman's rank correlation coefficient to test between-variable relationships. Correlative strengths were defined as follows: strong = 0.70-1.0, moderate = 0.40-0.69, or weak = 0.20-0.39. The sample size needed to detect a difference (two-sided alpha = 0.05, power = 80%) with an effect size of 0.3 was estimated to be 42 per group for Wilcoxon's signed rank test. The sample size needed to detect a significant correlation between two groups (two-sided alpha = 0.05, power = 80%) with an effect size of 0.5 was estimated to be 34 per group for Spearman's rank correlation coefficient. Because the sample size of 32 per group was too small to detect statistical significance, we performed a post-hoc power analysis. This revealed a statistical power of 21% for Wilcoxon's signed rank test and 48% for Spearman's rank correlation coefficient. IBM SPSS Statistics ver. 23 (IBM, Armonk, NY, USA) was used to perform all statistical analyses. All values were expressed as medians [25th percentile, 75th percentile], and for all tests, p < 0.05 was considered statistically significant.
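For illustration, the paired nonparametric comparison and rank correlation described above could be run as in the sketch below. The analgesic-request counts are simulated, not the study's data, and the analysis uses scipy rather than SPSS.

```python
# Sketch of the paired, nonparametric analyses described above.
# The arrays are simulated analgesic-request counts for the same
# 32 patients after the first and second TKA (not real data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
first_tka = rng.poisson(3.2, 32)
# Second-TKA counts correlated with the first, clipped at zero.
second_tka = np.maximum(first_tka + rng.integers(-1, 2, 32), 0)

# Within-patient comparison of non-normal, paired counts.
w_stat, w_p = stats.wilcoxon(first_tka, second_tka)

# Rank correlation between requests after the two procedures.
rho, rho_p = stats.spearmanr(first_tka, second_tka)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.3f}")
print(f"Spearman: rho={rho:.3f}, p={rho_p:.3f}")
```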
RESULTS
Thirty-two patients (64 knees) who underwent bilateral, scheduled, staged TKAs with a PCLR design and a PCLS design between July 2009 and October 2018 were enrolled in this study. The median interval between surgeries was 14 months (range: 6-77 months). The median patient age at the time of the first surgery was 72 years (range: 63-83 years). Preoperatively, all patients had been diagnosed with OA. Table 1 presents the patients' preoperative demographics. There were no adverse events induced by the analgesics used in this series. There were also no differences in WBS scores at postoperative 24, 48, and 72 h between the patients' first and second TKAs (Table 2).
Patients requested analgesics 102 times during the first 24 h, 43 times during the second 24 h, and 63 times during the third 24 h following the first TKA. After the second TKA, total analgesic requests were 111 at 24 h, 43 at 48 h, and 63 at 72 h (Table 3). The observed differences between the total number of requests (both overall and at each time point) following the first and second TKA were non-significant. A comparison of the number of requests at each time point between the first and second TKA revealed a moderately strong, positive correlation (p < 0.001, r = 0.623) (Table 3). However, there were no significant correlations between WBS and the number of requests for analgesics at each time point following the first and second TKA, with the exception of the 24 h time point following the second TKA (p = 0.002, r = 0.567) (Table 4). Finally, patients required a median of 1 day post-TKA to resume walking following both the first (median 1 [1, 1]) and second (median 1 [1, 2]) TKA, indicating no significant difference.
DISCUSSION
There was no significant difference in the frequency of analgesic requests over the first 3 postoperative days following TKA. There was also no obvious correlation between the frequency with which analgesics were requested and WBS. Time to resumption of walking also did not vary significantly between the first and second TKA. However, patients' requests for analgesics exhibited a moderately strong, positive correlation between the first and second TKA. Therefore, early postoperative pain might be affected by differences in individual sensitivities, regardless of whether the TKA was the patient's first or second such procedure.
Various reports [8,9,19] have indicated that poorly controlled acute postoperative pain can prolong pain [8,9] and potentially delay postoperative rehabilitation [19]; thus, pain control is important. Other reports [10-12] compared acute pain intensity during the early postoperative period between the first and second TKA in patients who underwent staged bilateral TKA surgeries. The researchers concluded that patients required more analgesics after their second, compared with their first, TKA. The current comparative study provides additional information pertaining to pain control after a second TKA, adding to information provided in previous studies [10-12].
Kim et al. [10] compared early postoperative pain after staged bilateral TKA surgeries with a 1-week interval. The authors concluded that repeated surgery may induce hyperalgesia via central sensitization and postulated that this was likely the main reason patients required additional analgesics after their second TKA. Likewise, Sun et al. [12] reported that patients used more analgesics after their second TKA, until 48 h postoperatively. The authors recommended performing a second TKA no fewer than 6 months after the first TKA to reduce or eliminate the hyperalgesic phenomenon induced by the first procedure. Other studies have also suggested that hyperalgesia secondary to central sensitization might affect the severity of postoperative pain. Central sensitization is reportedly induced by several factors, namely, harmful pain stimuli emanating from tissues around the operated joints [20,21], persistent pain after the initial surgery [22,23], and long-term pain stimuli secondary to severe OA [24]. However, our results differed from earlier findings [10-12]. Specifically, we found no difference in WBS scores between the first and second TKA, and no significant differences in the frequency with which analgesics were requested, or the time before patients resumed walking.
There are two potential explanations for our findings. The interval between TKA procedures ranged from 6-52 months (median: 14 months). This interval might have been long enough to diminish the influence of central sensitization evoked by the first TKA, in keeping with the suggestion made by Kim et al. [10]. This is consistent with findings reported by Sun et al. [12], who concluded that the second TKA should occur at least 6 months after the first TKA to mitigate the effects of central sensitization. The second, previously reported, factor is a psychological one [25,26]. Pre- [25] and postoperative [26] anxiety can worsen pain after surgery. In this study, preoperative education did not vary between the first and second TKAs. However, it is reasonable to assume that patients might be less anxious about undergoing a second TKA, compared with their first procedure. The postoperative recovery process, including patients' familiarity with postoperative pain, may mitigate the fear and anxiety that would otherwise accompany a second procedure. Jiang et al. [27] reported findings consistent with this hypothesis, stating that VAS anxiety scores [28] were lower in patients undergoing a second (second eye) cataract surgery. We believe that regardless of the type of surgery, reduced presurgical anxiety likely exerts a positive effect on early postoperative pain, even assuming that some degree of hyperalgesia resulted from the first surgery [10-12]. Future studies quantitatively evaluating pre- and postoperative anxiety during staged bilateral TKA surgeries using validated assessment tools with minimal bias are needed and may help clarify the relationship between perioperative anxiety and the intensity of early postoperative pain.
Our results showed that patients who tended to frequently request pain medication after their first TKA also tended to issue frequent requests following their second TKA. Given that there were no correlations between WBS scores and requests for analgesics at each time point, for both the first and second TKA procedures, we cannot conclude that patients who perceived stronger pain requested additional medication. Additionally, our results might have been influenced by bias related to individual pain threshold differences; thus, postoperative pain management following a second TKA should be determined on a case-by-case basis. Our results suggest that practices such as increasing the dose of analgesics, or adding other analgesic modalities during the second postoperative period might not always be necessary. However, physicians should consider each patient's condition following the first TKA, when planning for a second procedure.
There were several limitations to this study. First and most important, this was a retrospective medical record and database review, which has inherent limitations. Secondly, the sample size was relatively small, although a power analysis confirmed that the number of participants was sufficient to detect differences. In addition, studies that make within-patient, rather than between-patient, comparisons may be advantageous because the within-patient design controls for confounding variables and thus fewer patients are required to obtain statistically reliable results [29]. Thirdly, WBS scores were not obtained at maximum flexion as in previous studies [10,12] because of patients' limited ability to mobilize during the immediate postoperative period. However, we believe that the time until the resumption of walking served as an objective measure of pain intensity for the purposes of this study. Fourthly, we compared requests for analgesics, not the actual amount of medication each patient was administered. The analgesics (nonsteroidal anti-inflammatory drugs and nonnarcotic analgesics) used in this series have different mechanisms of action and are associated with different patient medication sensitivities. Consequently, the order of use and personal preference could have affected our outcomes. Thus, we decided to compare only the frequency with which medications were requested. Finally, we did not use multimodal analgesics such as peripheral nerve blocks or periarticular infiltration analgesia, which are commonly used for postoperative pain control following current TKA surgeries [17,30,31]. However, the purpose of this study was to compare early postoperative pain intensity between the first and second TKA procedures using conventional analgesics. In addition, multimodal analgesic techniques are associated with certain limitations, such as an increased risk of falls, delayed rehabilitation after nerve blocks [31], and disparities in results related to infiltration techniques in periarticular infiltration analgesia [30].
Despite these limitations, a major strength of this study was our use of objective approaches to measure postoperative pain intensity after staged TKA, in addition to WBS scores. Furthermore, surgeon-related bias is not a consideration because all of the surgeries were completed by a single experienced surgeon who performed > 600 TKA procedures using the current designs, and who used the same procedure and similar forms of instrumentation for all patients. Finally, the results of this study may be particularly useful for preemptive planning of perioperative pain control for the second knee replacement in patients who must undergo scheduled staged TKA surgery because of their preoperative health status.
CONCLUSION
In conclusion, patients with bilateral OA requested analgesics with a similar frequency, regardless of whether the TKA was their first or second. Additionally, the frequency with which patients requested analgesics did not vary significantly during the immediate postoperative period when we compared the first and second TKA procedures. The time needed prior to the resumption of walking also remained the same when we compared the first and second TKAs. Therefore, we recommend that pain control following a second TKA be modeled on each patient's individualized usage patterns, as demonstrated following their earlier (first) TKA. Future large-scale research studies are needed to both affirm, and build upon, our conclusions. | 2020-04-23T09:14:55.514Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "c29f9443c72403bf23d0bcee061065331c698238",
"oa_license": "CCBY",
"oa_url": "https://openorthopaedicsjournal.com/contents/volumes/V14/TOORTHJ-14-26/TOORTHJ-14-26.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "86327a0320618c89761725a74fe3558ac0ba29e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231699705 | pes2o/s2orc | v3-fos-license | Relationship between cardiorespiratory fitness and latitude in children and adolescents: Results from a cross-sectional survey in China
Background This study assessed the correlation between latitude and the cardiorespiratory fitness (CRF) of children and adolescents. Methods In 16 provinces and autonomous regions of China, 25,941 children and adolescents aged 10–18 were included. CRF was measured using the 20 m shuttle run test (20 m SRT) and estimated peak oxygen uptake (VO2peak). One-way ANOVA and multiple regression analysis were used to explore the correlation between CRF and latitude in children and adolescents. Results The VO2peak values of the low (south), middle, and high (north) latitude groups were 43.1, 43.1, and 40.7 mL/kg/min for boys, respectively, and 40.0, 40.0, and 38.5 mL/kg/min for girls, respectively. After adjusting for confounding factors, the regression coefficients (β) between VO2peak-Z and both latitude-Z and (latitude-Z)² were −0.151 and −0.043 for boys and −0.142 and −0.020 for girls, respectively. The partial correlation coefficients (r) for latitude-Z and (latitude-Z)² were −0.14 and −0.04 for boys, and −0.13 and −0.02 for girls, respectively. Conclusion The CRF of children and adolescents in high-latitude regions is significantly lower than that in middle- and low-latitude regions, and VO2peak-Z generally shows a "parabolic" trend with latitude-Z.
Introduction
As the core component of children's and adolescents' physical health, 1 cardiorespiratory fitness (CRF) is an important standard for measuring the health of children and adolescents, and is one of the indicators used to predict adult health. 2,3 Low CRF is related to the incidence of cardiovascular disease, diabetes, and other diseases; it can be used as a predictor of disease occurrence and is directly related to mortality. 4-6 Low CRF ranks first among all factors affecting all-cause mortality, surpassing risk factors such as hypertension, smoking, high cholesterol, and obesity. 7 CRF is influenced by many factors, especially the regional environment. A study 8 of 1,142,026 children and adolescents aged 9–17 years from 50 countries reported regional variations in CRF. Children and adolescents in Africa and central-northern Europe have the highest 20 m shuttle run test (20 m SRT) performance, whereas those in South America have the lowest. In European countries, children and adolescents in northern and central countries have better CRF than their southern counterparts. A study of the CRF of children and adolescents aged 9–13 years in Canada, Kenya, and Mexico demonstrated that Kenyan teenagers in the tropical monsoon region had the highest CRF, whereas Canadian children in northern North America had the lowest. 9 The CRF of children and adolescents also varies across different parts of a country. Sun et al. discovered that children and adolescents in Northeast and Southwest China had lower CRF than those in other regions. 10 Other studies reported that the gap in children's CRF between Eastern and Western China has gradually narrowed. 11 Most previous studies compared differences in CRF among children and adolescents in different countries or regions. The results of studies on the CRF of children and adolescents at different latitudes are still inconsistent, and little is known about the differences and temporal trends in the CRF of children and adolescents at different latitudes within the same country. China is located in eastern Asia and has a vast territory, spanning 49 degrees of latitude from south to north and five climatic zones: tropical, subtropical, warm temperate, middle temperate, and cold temperate. The terrain and climate are complex and diverse. The large latitudinal span produces substantial differences in sunshine duration, temperature, barometric pressure, and precipitation across regions. This inevitably influences the lifestyle, dietary structure, and physical activity of children and adolescents, 12-14 causing regional differences in children's and adolescents' CRF. 10 Therefore, we aimed to analyze the correlation between natural environmental factors and the CRF of children and adolescents aged 10–18 years in China, to provide a scientific basis for improving the physical health of children and adolescents.
Participants and sampling
Data for the present study were drawn from the "Formulation of new methods and evaluation criteria for the physical health of children and adolescents in China", a cross-sectional survey of the physical health of Chinese children and adolescents conducted in 2015–2016. Considering population weighting and geographical location, we used the proportions of each indicator in the 2010 Sixth National Census Main Data Bulletin 15 to conduct sampling. Based on a north-to-south population ratio of about 1.52:1, an urban-to-rural ratio of about 1:1, and a boy-to-girl ratio of about 1:1, corresponding cities were selected from 16 (Heilongjiang, Jilin, Xinjiang, Shanxi, Hebei, Henan, Jiangxi, Jiangsu, Shanghai, Zhejiang, Sichuan, Yunnan, Guizhou, Fujian, and Hainan) of the 31 provinces in Mainland China. About 100 boys and girls aged 10–18 years in each province were selected using the random case method. After excluding invalid data and extreme values, a total of 25,941 healthy children and adolescents (without physical disability, serious illness, or mental illness; boys = 12,864, girls = 13,077) were included in the current study (Table 1). Written informed consent was obtained from parents and every participant.
20 m SRT and questionnaire
The 20 m SRT followed the test method developed by the Cooper Institute. 16 The method is as follows: after a warm-up, participants start from the starting line, placed 20 m from the second line, and run to the opposite line following the rhythm of the music. The initial speed is 8 km/h, increasing to 9 km/h in the second minute, and then by 0.5 km/h every consecutive minute. The test stops when a participant feels too tired to continue, or fails to reach the end line twice in a row before the sound. Each completed 20 m is recorded as 1 lap, and the total number of laps is recorded as the final result.
The children and adolescents participating in the study completed a self-reported questionnaire covering demographic indicators, lifestyle, and mental sub-health, from which students' location, moderate-to-vigorous physical activity (MVPA) and individual socioeconomic status (SES) were obtained for the current study. Family income was divided into low (less than 2000 yuan), middle (2001–5000), upper middle (5001–8000) and high (above 8000) based on GNI per capita in China. 17 Parental education was divided into primary school, junior high school, senior high school, and college or bachelor degree. Parental occupations included, for example, civil servant or teacher, worker, clerk, businessman, farmer and others. The parent with the highest level of education and the highest occupational classification score was selected as the representative of parental education and occupation. Occupational classification was recorded according to the International Socio-Economic Index of Occupational Status (ISEI). 18 MVPA frequency (excluding physical education) was divided into never, 1–2 times per month, 1–2 times per week, and 3 or more times per week. GDP per capita in the cities where children and adolescents live was obtained from the statistical yearbooks of China's provinces. 19 Based on geographical research 20 and the actual distribution of the sample, we divided the sample into three latitude bands for regional comparison: low latitudes (south) were defined as below 30° N, middle latitudes as 30°–40° N, and high latitudes (north) as above 40° N.
Anthropometric measurements
Height and weight were measured using standardized equipment at schools. All participants were required to wear a T-shirt and thin trousers, without shoes. Height (recorded to the nearest 0.1 cm) and weight (recorded to the nearest 0.1 kg) were used to calculate the Body Mass Index (BMI), calculated as weight (kg)/height (m)².
Evaluation criteria for nutrition
The evaluation criteria for nutrition were based on the 2007 WHO growth reference for 5–19 years (BMI-for-age), examining underweight (BMI below −2 Z-scores), overweight (BMI at or above +1 Z-score) and obesity (BMI above +2 Z-scores). 21
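The BMI calculation and classification rule above can be expressed compactly in code. The sketch below assumes the BMI-for-age z-score has already been derived; computing a true WHO z-score requires the WHO LMS reference tables, which are not reproduced here, and the cut-offs follow the paper's wording.

```python
# Sketch of the nutrition classification described above.
def classify_nutrition(bmi_z: float) -> str:
    # Cut-offs as stated in the text (WHO 2007 BMI-for-age).
    if bmi_z < -2:
        return "underweight"
    if bmi_z > 2:
        return "obese"
    if bmi_z >= 1:
        return "overweight"
    return "normal"

def bmi(weight_kg: float, height_m: float) -> float:
    # BMI = weight (kg) / height (m) squared.
    return weight_kg / height_m ** 2

print(round(bmi(45.0, 1.52), 1))   # hypothetical child: ~19.5
print(classify_nutrition(1.3))     # "overweight"
```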
Ethical consideration
This research was approved by the East China Normal University Committee on Human Research Protection (approval No. HR2016/12055). Informed consent was obtained from teachers, students, and parents.
Statistical analyses
The estimated VO2peak of each student was calculated according to the formula reported by Leger 22: VO2peak = 31.025 + 3.238 × S − 3.248 × age + 0.1536 × S × age, where S is the speed (km/h) at the last stage the participant completed. At the first stage, S = 8; from the second stage onward, S = 8 + 0.5 × the completed 20 m SRT stage. Latitude and VO2peak Z-scores were calculated by sex and age, respectively. The Z-score was calculated as Z-score = (measured value − mean value)/standard deviation.
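A minimal sketch of this calculation is given below, following the stage-to-speed mapping and the Leger equation exactly as stated above. The example inputs (stage 6, age 12) are hypothetical, and in practice z-scores would be computed within each sex-and-age stratum as the text describes.

```python
# Sketch of the VO2peak estimation described above (Leger equation).
# Stage-to-speed mapping follows the text: stage 1 runs at 8 km/h;
# from stage 2 onward, S = 8 + 0.5 * completed stage.
import numpy as np

def shuttle_speed(stage: int) -> float:
    return 8.0 if stage <= 1 else 8.0 + 0.5 * stage

def vo2peak(stage: int, age: float) -> float:
    s = shuttle_speed(stage)
    return 31.025 + 3.238 * s - 3.248 * age + 0.1536 * s * age

def z_scores(values: np.ndarray) -> np.ndarray:
    # Z-score = (value - mean) / SD, computed here over the whole
    # input array; the study stratifies by sex and age first.
    return (values - values.mean()) / values.std(ddof=1)

print(round(vo2peak(6, 12), 1))  # hypothetical 12-year-old, stage 6
```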
Children and adolescents were categorized into three age groups: 10–12 years (upper primary school age), 13–15 years (junior middle school age), and 16–18 years (high school age). The chi-square test was used to analyze the individual characteristics of children and adolescents at different latitudes. One-way ANOVA was used to explore differences in VO2peak between participants in different latitude groups. A multiple regression model was used to analyze the relationship between latitude and the VO2peak of children and adolescents of each sex. The partial correlation coefficient (r) and the regression coefficients (β) were used to assess the strength of association between variables. According to Cohen's standard, 23 correlation coefficients (r) of ±0.1, ±0.3 and ±0.5 correspond to small, moderate, and large effect sizes, respectively. All analyses were conducted using IBM SPSS Statistics 23.0 (IBM, Armonk, NY, USA).
Descriptive characteristics of children and adolescents in different latitudes
Table 2 shows the individual characteristics of children and adolescents at different latitudes. Differences in family income, parental education level, parental occupation and nutritional status were statistically significant across latitudes (p < 0.001). The difference in the prevalence of MVPA frequency (excluding physical education) among boys was not statistically significant (p = 0.168), whereas among girls it was (p < 0.001). At low, middle and high latitudes, 12.7%, 7.9% and 12.2% of boys' families, and 15.0%, 7.2% and 13.7% of girls' families, respectively, had incomes below 2000 yuan. The proportion of children and adolescents with lower family income in the mid-latitude group was lower than in the low- and high-latitude groups. The prevalence of overweight and obesity at high latitudes was 19.5% and 12.1% for boys and 13.1% and 3.4% for girls, respectively, higher than at middle and low latitudes.
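As a concrete illustration of the chi-square and one-way ANOVA comparisons described in the statistical analyses above, the sketch below applies both tests to synthetic data; the contingency counts are invented, and the group means merely echo the VO2peak values reported in the abstract.

```python
# Sketch of the group comparisons described above, on synthetic
# data: a chi-square test on a latitude-by-income contingency
# table, and one-way ANOVA on VO2peak across latitude groups.
import numpy as np
from scipy import stats

# Hypothetical counts: rows = low/middle/high latitude,
# columns = four family-income bands (not the study's data).
table = np.array([
    [120, 300, 250, 90],
    [80, 350, 280, 110],
    [130, 310, 240, 85],
])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

rng = np.random.default_rng(3)
low = rng.normal(43.1, 5, 200)     # VO2peak, low latitude
middle = rng.normal(43.1, 5, 200)  # middle latitude
high = rng.normal(40.7, 5, 200)    # high latitude
f_stat, p_anova = stats.f_oneway(low, middle, high)

print(f"chi2={chi2:.1f} (df={dof}), p={p_chi:.3g}")
print(f"ANOVA: F={f_stat:.1f}, p={p_anova:.3g}")
```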
Distribution of means and SDs of VO2peak in different-aged children and adolescents at different latitudes
Table 3 shows that the VO2peak of boys at high latitudes was lower than that of boys at middle and low latitudes, with significant differences in all age groups (p < 0.05). For boys aged 12–14 years, the VO2peak of the middle-latitude group was significantly lower than that of the low-latitude group (p < 0.05). For boys aged 15–18 years, the VO2peak at middle latitudes was significantly higher than in the low-latitude group (p < 0.05). Table 4 shows that the VO2peak of girls in high-latitude regions was always the lowest at all ages except age 10, with significant differences (p < 0.05). For girls aged 12–14 and 16 years, the VO2peak of middle-latitude adolescents was significantly lower than that of low-latitude adolescents (p < 0.05). For girls aged 17–18 years, the VO2peak of middle-latitude adolescents was significantly higher than that of low-latitude adolescents (p < 0.05).
VO2peak Z-score distribution for boys and girls in different latitude regions
Fig. 1 shows the VO2peak Z-score distributions of boys and girls in different latitude regions. The VO2peak Z-scores of boys and girls in high-latitude regions are shifted toward lower values, indicating that the CRF of children and adolescents at high latitudes is lower than at middle and low latitudes. Fig. 2 shows that in all three age groups, VO2peak declines with increasing age for both boys and girls, and the decline at middle latitudes is smaller than at low and high latitudes.
Regression analysis of the variation in VO2peak with latitude in children and adolescents in different age groups
Table 5 presents the multiple regression models of the relationship between latitude-Z and VO2peak Z-scores. As shown in Tables 3 and 4, in some age groups VO2peak first rises and then falls across latitude groups. On this basis, a quadratic relationship between VO2peak-Z and latitude-Z was hypothesized, so latitude-Z and (latitude-Z)² were entered into the regression models. Model 1 is a regression model with VO2peak-Z as the dependent variable and latitude-Z and (latitude-Z)² as independent variables, adjusted only for age; Model 2 adjusted for age, parental education, parental occupation, family income and per capita GDP; Models 3–5 progressively adjusted for additional covariates. In Models 1–5, the regression coefficients (β) between VO2peak-Z and latitude-Z, (latitude-Z)², family income, BMI, MVPA, and season of data collection were statistically significant among boys (p < 0.05), and the β between VO2peak-Z and latitude-Z, (latitude-Z)², per capita GDP, BMI, MVPA, and season of data collection were statistically significant among girls. Across Models 1–5, the β of VO2peak-Z on latitude-Z ranged from −0.171 to −0.135, and the r between VO2peak-Z and latitude-Z ranged from −0.16 to −0.13 (p < 0.001); the effect sizes were all small. The β of VO2peak-Z on (latitude-Z)² ranged from −0.058 to −0.020, and the corresponding r ranged from −0.06 to −0.02 (p < 0.05).
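A sketch of this quadratic specification, fitted by ordinary least squares on synthetic data, is shown below. The column names are placeholders, only a subset of the study's adjustment variables is included, and the simulated coefficients simply mimic the reported "parabolic" shape.

```python
# Sketch of the quadratic regression described above: VO2peak-Z
# regressed on latitude-Z and its square, with example covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "latitude_z": rng.normal(0, 1, n),
    "age": rng.integers(10, 19, n),
    "bmi": rng.normal(19, 3, n),
})
# Synthetic outcome with negative linear and quadratic latitude
# terms, echoing the reported shape (not real coefficients).
df["vo2peak_z"] = (
    -0.15 * df["latitude_z"] - 0.04 * df["latitude_z"] ** 2
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "vo2peak_z ~ latitude_z + I(latitude_z ** 2) + age + bmi",
    data=df,
).fit()
print(model.params[["latitude_z", "I(latitude_z ** 2)"]])
```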
After adjusting for all influencing factors in Model 5, the results showed that the β of latitude-Z and (latitude-Z)² were −0.151 and −0.043 in boys, and −0.142 and −0.020 in girls, respectively. The r of latitude-Z and (latitude-Z)² were −0.144 and −0.039 in boys, and −0.132 and −0.018 in girls, respectively (p < 0.05). The relationship between latitude-Z and VO2peak-Z for each sex is shown in Fig. 3: VO2peak-Z followed a "parabolic" trend for both boys and girls. For girls, VO2peak-Z increased slightly and then decreased, whereas for boys it increased first and then decreased rapidly with increasing latitude; the parabolic curve for girls is flatter than that for boys.
Discussion
The present study demonstrated that the CRF of children and adolescents in high-latitude regions is significantly lower than that in low- and middle-latitude regions. In Model 5, the partial correlation coefficients (r) between latitude-Z and VO2peak-Z were larger in magnitude than those for the other confounding factors (r = −0.144 for boys, r = −0.132 for girls).
There was a negative correlation between latitude-Z and VO2peak-Z (p < 0.001). After adjusting for all confounding factors, latitude-Z and VO2peak-Z generally showed a "parabolic" relationship. In other words, CRF showed a slight rise and then a rapid decline with increasing latitude. Latitude was negatively correlated with CRF after adjusting for BMI, MVPA, SES and other confounding factors, which may be related to altitude, temperature and precipitation at different latitudes. 10,24,25 Similar trends have been found in studies of child and adolescent CRF in other countries. Héroux et al. 9 found inter-country differences among children aged 9–13 years in Canada, Mexico and Kenya. Canada is a high-latitude country between 41° and 83° N, the city of Guadalajara in Mexico lies at 20.4° N, and Kenya is a low-latitude country straddling the equator. Among the three countries, Canadian children at high latitude (41.3 and 38.3 mL/kg/min for boys and girls, respectively) had lower CRF scores than their counterparts in Mexico (47.1 and 46.4 mL/kg/min for boys and girls, respectively) and Kenya (50.2 and 46.7 mL/kg/min for boys and girls, respectively) at low and middle latitudes. There was no significant difference in CRF between Mexican and Kenyan girls. The trend in CRF among these three countries at different latitudes is broadly consistent with the results of this study. However, comparability may be limited because of differences in assessment measures and in the equations used to estimate CRF among studies. In addition, convenience sampling was used in Mexico and Kenya and the sample sizes were small, which limited the representativeness of the results. Differences in CRF among children and adolescents at different latitudes have also been confirmed within other countries. 26,27 The distribution of CRF among Chilean adolescents in the southern hemisphere varies across latitudes: the prevalence of unhealthy CRF 28 is higher in the northern (low-latitude) and southern (high-latitude) regions than in the central regions. 29 A pattern of higher CRF among adolescents at high latitudes than at low latitudes is similar to the results of Lang and colleagues' 8 CRF study of children and adolescents in developed countries. After comparing the 20 m SRT performance of children and adolescents in 50 countries, Lang et al. found that the CRF of children and adolescents showed different trends in developed and developing countries. In developing countries, children and adolescents in areas with higher temperatures have higher CRF than those in areas with lower temperatures. 8 As a developing country, China showed the same trend: the CRF of children and adolescents in high-latitude regions with lower temperatures is lower than in low-latitude regions with higher temperatures. It is worth noting that in Lang et al.'s study there was a clear latitudinal gradient in CRF in Europe and other developed countries. Children and adolescents in central-northern countries performed better on the 20 m SRT than their southern contemporaries; the CRF of children and adolescents in countries such as Estonia, Iceland, Norway, and Denmark is much higher than that in southern countries such as Greece, Portugal and Italy. These findings are conflicting and difficult to interpret.
The authors suggested that this may be due to the negative physiological effects of exercising at warm and humid temperatures, or to differences in PA caused by the widespread popularity of ice and snow sports among Nordic children and adolescents, 30 which is worthy of further exploration. It is worth noting that their study involved 177 studies covering a time span from 1985 to 2015, while in recent decades the CRF of children and adolescents in most countries has shown a downward trend over time. 31-33 That study did not adjust for time trends, which could have influenced the results.
Similar to the above research, the CRF of children and adolescents in high-latitude developed countries is often better than that in low-latitude countries. Children and adolescents in Porto, Portugal (42° N) have better CRF than their peers in Maputo, the capital of Mozambique (25° S). 34 British children and adolescents have higher CRF than their Tanzanian counterparts. 35 The authors of the above studies often attribute these findings to higher SES in association with a better nutritional environment in developed countries, as well as higher levels of self-reported physical activity. However, other confounding factors such as BMI and the time of data collection were not controlled in the above studies, so it is not clear whether the differences in CRF among children and adolescents in different countries are related to geographical factors such as latitude.
Another study found that children and adolescents in high-latitude Norway had CRF similar to that of their peers in low-latitude Tanzania. 36 In that study, CRF was estimated from a bicycle protocol test. However, about two-thirds of the Tanzanian children were not able to ride a conventional bicycle, and the Tanzanian children reached significantly higher estimated VO2peak in the 20 m SRT than in the bicycle protocol test. Their VO2peak was therefore probably underestimated.
Strengths and limitations
Although most studies have compared CRF between children and adolescents in different regions and countries, some have small, unrepresentative samples, and most have failed to control for the influence of confounding factors, so they cannot confidently conclude whether geographical factors such as latitude are independently associated with CRF. This study is one of the few that provides evidence for an independent association between latitude and CRF using a representative sample of children and adolescents, and it can help identify target groups requiring future intervention.
There are also some study limitations to note. First, the present study did not investigate children's and adolescents' physical activity by objective measurement. Second, this study used a cross-sectional design and cannot determine causality. Therefore, cohort studies and PA surveys should be conducted among children and adolescents in different latitude regions in the future.
Conclusions
The present study investigated and analyzed the relationship between CRF and latitude in Chinese children and adolescents. Findings from this study support the view that children and adolescents at high latitudes have lower CRF than those at middle and low latitudes. After adjusting for confounding factors, we observed a "parabolic" trend between latitude-Z and VO2peak-Z. An independent association between latitude and CRF was confirmed among Chinese children and adolescents. However, whether this association holds in other countries, and how latitude and CRF are related in the southern hemisphere and in developed countries, still requires further empirical research. The CRF of developed countries in Europe shows a trend opposite to the results of this study, the reasons for which deserve further investigation. In addition, effective strategies should be implemented to improve the CRF of children and adolescents in high-latitude regions. Whether CRF can be improved in high-latitude areas by building indoor sports venues and promoting winter ice and snow sports on campuses in the north is also a direction for future research.
Declaration of competing interest
The authors have no conflicts of interest relevant to this article. | 2021-01-26T05:26:56.750Z | 2021-01-06T00:00:00.000 | {
"year": 2021,
"sha1": "4841fcf93eb3cfa0f2531e4d20ca46ce019b8d84",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jesf.2020.12.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4841fcf93eb3cfa0f2531e4d20ca46ce019b8d84",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4340905 | pes2o/s2orc | v3-fos-license | Limited social plasticity in the socially polymorphic sweat bee Lasioglossum calceatum
Abstract Eusociality is characterised by a reproductive division of labour, where some individuals forgo direct reproduction to instead help raise kin. Socially polymorphic sweat bees are ideal models for addressing the mechanisms underlying the transition from solitary living to eusociality, because different individuals in the same species can express either eusocial or solitary behaviour. A key question is whether alternative social phenotypes represent environmentally induced plasticity or predominantly genetic differentiation between populations. In this paper, we focus on the sweat bee Lasioglossum calceatum, in which northern or high-altitude populations are solitary, whereas more southern or low-altitude populations are typically eusocial. To test whether social phenotype responds to local environmental cues, we transplanted adult females from a solitary, northern population to a southern site where native bees are typically eusocial. Nearly all native nests were eusocial, with foundresses producing small first brood (B1) females that became workers. In contrast, nine out of ten nests initiated by transplanted bees were solitary, producing female offspring that were the same size as the foundress and entered directly into hibernation. Only one of these ten nests became eusocial. Social phenotype was unlikely to be related to temperature experienced by nest foundresses when provisioning B1 offspring, or by B1 emergence time, both previously implicated in social plasticity seen in two other socially polymorphic sweat bees. Our results suggest that social polymorphism in L. calceatum predominantly reflects genetic differentiation between populations, and that plasticity is in the process of being lost by bees in northern populations.
Significance statement
Phenotypic plasticity is thought to play a key role in the early stages of the transition from solitary to eusocial behaviour, but may then be lost if environmental conditions become less variable. Socially polymorphic sweat bees exhibit either solitary or eusocial behaviour in different geographic populations, depending on the length of the nesting season. We tested for plasticity in the socially polymorphic sweat bee Lasioglossum calceatum by transplanting nest foundresses from a northern, non-eusocial population to a southern, eusocial population. Plasticity would be detected if transplanted bees exhibited eusocial behaviour. We found that while native bees were eusocial, 90% of transplanted bees and their offspring did not exhibit traits associated with eusociality. Environmental variables such as time of offspring emergence or temperatures experienced by foundresses during provisioning could not explain these differences. Our results suggest that the ability of transplanted bees to express eusociality is being lost, and that social polymorphism predominantly reflects genetic differences between populations.
Electronic supplementary material The online version of this article (10.1007/s00265-018-2475-9) contains supplementary material, which is available to authorized users.
Introduction
There is increasing interest in the environmental and genetic mechanisms underlying the transition from solitary living to eusociality (e.g. Yanega 1997; Field et al. 2012; Kapheim et al. 2012, 2015a; Kocher et al. 2013; Rehan and Toth 2015), and investigating these mechanisms requires taxa that straddle this transition (Rehan and Toth 2015). Socially polymorphic sweat bees (Hymenoptera: Halictidae) are ideal models for this purpose, because different populations of the same species exhibit either eusocial or solitary behaviour (Soucy and Danforth 2002; Chapuisat 2010). In spring, mated females (foundresses) emerge from hibernation and excavate individual nest burrows. Each foundress then mass provisions a first brood (B1) of offspring in separate, sealed brood cells. In solitary populations, B1 offspring emerge to mate and females enter hibernation, becoming the following year's new foundresses. In eusocial populations, however, at least some B1 females become workers that instead help to rear a second brood (B2) of reproductive offspring (Schwarz et al. 2007). Season length is thought to be a key proximate constraint on social phenotype because eusociality can be expressed only where the season is long enough to rear two consecutive broods (Soucy and Danforth 2002; Hirata and Higashi 2008; Davison and Field, in preparation). Within sweat bees, there have been at least two origins of eusociality and many subsequent losses, including to social polymorphism (Danforth 2002; Brady et al. 2006; Gibbs et al. 2012). It is thought that such reversals could be driven by selection acting on only a small number of regulatory switches (West-Eberhard 2003), and that in transitional populations initial plasticity in social phenotype might be lost once environmental conditions become predictable (Cini et al. 2015; Smith et al. 2015). Therefore, a key question is to what extent alternative eusocial and solitary phenotypes result from environmentally mediated plasticity or represent distinct, genetically fixed alternatives (Wcislo 1997).
Field transplants are critical to addressing this question and yet are rarely performed (Yanega 1997; Field et al. 2012). Reciprocal transplants of the socially polymorphic sweat bee Halictus rubicundus Christ between social and solitary populations in the UK revealed that social phenotype is plastic with respect to the environment (Field et al. 2012). Evidence suggests that B1 females become workers only when emerging sufficiently early in the season (see also Hirata and Higashi 2008), and that nest foundresses may be able to adjust the size of B1 offspring depending on anticipated social phenotype (Field et al. 2012). Conversely, significant mitochondrial differentiation exists between eusocial and solitary populations of North American H. rubicundus, suggesting that social phenotype might have a fixed genetic component (Soucy and Danforth 2002). A laboratory common garden experiment also suggested that social phenotype may have a fixed genetic component in Lasioglossum albipes Fabricius (Plateaux-Quénu et al. 2000). However, fixed genetic differences between social phenotypes have never been demonstrated experimentally in a natural field setting, which is critical to fully account for unmeasured environmental variables (Yanega 1997; Field et al. 2012).
Lasioglossum calceatum Scopoli is a common and widespread socially polymorphic sweat bee of the Palearctic, closely related to L. albipes (Sakagami and Munakata 1972;Pesenko et al. 2000;Danforth et al. 2003;Davison and Field 2016). In this paper, we test for plasticity in social phenotype by transplanting foundresses from a northern, solitary UK population to a southern UK population where native bees are typically eusocial (see Davison and Field 2016 for site details). We also genotype offspring, which confirms previous work suggesting that L. calceatum expresses eusociality in the south of the UK (Davison and Field 2016).
We focus on three aspects of B1 female social phenotype: emergence time, pollen collection and body size. B1 offspring might become workers only if they emerge sufficiently early in the season (Hirata and Higashi 2008), workers typically begin provisioning the natal nest within 1 or 2 days of emergence (PJD, personal observation), and in common with most other eusocial sweat bees, B1 workers tend to be smaller than their mothers (Packer and Knerer 1985; Davison and Field 2016). Critically, since offspring are mass provisioned, B1 body size may largely reflect investment decisions by foundresses at the time of provisioning (Plateaux-Quénu 1983; Richards and Packer 1994). If social phenotype is plastic, foundresses transplanted to the south from northern solitary populations might respond by producing small B1 offspring that in turn remain at their nest as workers, or perhaps initiate their own nests instead of entering hibernation (Field et al. 2012). Instead, we find that most transplanted foundresses and their offspring show no evidence of plasticity, indicating that inter-population differences in social phenotype predominantly reflect genetic differentiation.
Transplants
Foundresses were transplanted from Inverness, in the far north of the UK, approximately 800 km south to a nesting aggregation at the University of Sussex campus (Sussex) where annual temperatures are higher and the season is longer (Fig. 1; Table 1). Bees at Inverness nest solitarily but bees at Sussex typically express eusociality (Davison and Field 2016). Because nesting aggregations are hard to find (Richards et al. 2015), we were unable to perform a control transplant from another eusocial site in the present study. However, controls implemented in previous studies show that transplantation per se is unlikely to influence behaviour (Field et al. 2012; Davison and Field, in preparation). Bees were transplanted from Inverness on 15-16 August 2014 (autumn transplant) and 16 May 2015 (spring transplant). Autumn-transplanted bees (n = 70) were freshly emerged B1 females caught returning to their nests from feeding/mating flights. Spring-transplanted bees (n = 202) were nest foundresses recently emerged from hibernation, and were caught returning from feeding or provisioning flights. Most spring transplants were not carrying pollen and were therefore unlikely to have already begun provisioning their own offspring.
Bees were caught with an insect net and marked on both the clypeus and thorax with a single spot of enamel paint (Revell® and Humbrol™ enamel model paints), denoting the time of transplant (autumn or spring). Thus, transplanted bees could be readily distinguished from native Sussex bees. Prior to release, bees were maintained inside individual plastic tubes either in a cold box containing ice packs or a fridge at 4°C. Autumn-transplanted bees were released directly into 14 L plastic buckets containing artificial nest holes (15-20 cm), which were embedded into the ground near to the nesting aggregation at Sussex. Each bucket was covered with netting to encourage bees to enter the holes. Netting was removed the following morning and bees allowed to fly freely inside an insect-proof cage containing flowers before entering hibernation inside the buckets, which were left embedded in the ground throughout the winter. Buckets were removed and re-embedded within the Sussex nest aggregation before the start of spring 2015. Spring-transplanted bees were released over three evenings (17-19 May 2015) directly into artificial nest holes created among native and autumn-transplanted nests within the Sussex aggregation (not in the buckets). Of the two autumn-transplanted bees that successfully founded nests (see "Results"), one nested within a bucket and one in the ground surrounding the buckets. Similarly, some spring-transplanted foundresses also founded nests in buckets, while others utilised the artificial nest burrows or dug new nest burrows in the surrounding soil.
Foundress demography and body size
We recorded the timing of three key events for native and transplanted bees: (1) date of nest initiation (the first day on which a foundress provisioned), (2) date of first B1 female emergence and (3) the time taken to produce the first B1 female offspring (time between (1) and (2)).
Foundresses initiating nests in spring were caught with an insect net after being observed provisioning, and individually marked with unique colour combinations of two enamel paint spots (Revell® and Humbrol™) applied to the thorax with a pin. Wing length was measured to the nearest 0.1 mm with digital callipers, as the distance between the outer edge of the tegula and the end of the forewing. Nest entrances were marked using individually numbered nails. During the foundress provisioning phase, the nesting aggregation was divided into two sections, which were observed continuously by the same person on alternate days when the weather was suitable for foraging (n = 29 observation days). It was not possible to record data blind because our study involved focal animals in the field.
The timing of nest initiation and offspring emergence date can vary between years, and therefore as a comparison we utilise demographic data from two previous years in which native L. calceatum were studied at Sussex: 2012 and 2013. Data in these years were collected using the same methods as in the present study (see Davison and Field 2016 for details).

Determining social phenotype and offspring size

Nests were considered eusocial only if B1 females were observed provisioning. Workers were identified as unmarked bees provisioning a nest in which the foundress had been marked: workers begin provisioning within 1 or 2 days of emergence, whereas B1 females that directly enter hibernation are typically observed entering the nest for several days after emergence but never with pollen (Davison and Field 2016). B1 females were caught on emergence from their nest only after being observed provisioning. They were then measured and marked with a single paint spot on the clypeus and thorax, using a colour unique within their nest. Directly hibernating offspring were measured when excavated from beneath their nests at the end of the season (see below). With the help of a second observer, nests in both sections were continually observed during the worker-provisioning phase (n = 26 observation days).
Nest excavations
Nests were excavated from 6 to 15 August 2015, near to the end of the season but prior to the emergence of B2 offspring (Fig. 2). All brood and adult bees (B1 females and foundresses) were removed and stored in ethanol before genotyping. In nests of L. calceatum, cells forming each brood are arranged in a single cluster surrounded by a cavity (Sakagami and Michener 1962), and it was therefore possible to be certain that all brood in a given nest had been collected. Excavations were continued well below the level of brood cell clusters to detect hibernating B1 offspring.
DNA extraction and microsatellite analysis
DNA was extracted from whole bees using the ammonium acetate precipitation method (Nicholls et al. 2000). We amplified 12 microsatellite loci, originally developed for L. malachurum, in two multiplexes (Parsons et al. 2017; Table S1). Multiplexes were amplified in a 2 μL Qiagen Multiplex reaction using the following PCR profile: 95°C for 15 min, followed by 44 × (94°C for 30 s, 57°C for 90 s and 72°C for 60 s), then 60°C for 30 min. PCR products were genotyped on an ABI 3730 48-well capillary DNA Analyser with the LIZ size standard (Applied Biosystems Inc.), and alleles were scored using Genemapper® v3.7 software. Two microsatellites were monomorphic and were discarded. The remaining ten loci had 6-19 alleles per locus (mean = 9.5 alleles per locus) across both populations (see Table S2 for breakdown by population). We tested for linkage disequilibrium (LD) and departure from Hardy-Weinberg equilibrium (HW) within and among the Sussex and Inverness populations using Genepop 4.2 (Raymond and Rousset 1995; Rousset 2008). For these tests, we selected one female from each nest at Sussex to avoid pseudoreplication. To correct p values in multiple tests, we applied the q value correction to the LD p values using QVALUE (Storey 2002). The q value measures significance in terms of the false discovery rate, unlike the conventional Bonferroni correction, which measures significance in terms of false positives only (Storey 2002), and therefore provides a more powerful method for correcting multiple tests (Verhoeven et al. 2005). As a measure of genetic diversity, we recorded the total number of alleles at a given locus, and the observed and expected heterozygosity.
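As an illustration of the multiple-testing step, the sketch below applies a false discovery rate correction to a vector of LD p-values in R (the language used for the other analyses in this paper). The p-values are invented, and base R's Benjamini-Hochberg adjustment is used here only as a stand-in for the standalone QVALUE software, which implements Storey's closely related q-value method.

# Hypothetical LD p-values exported from Genepop; values are illustrative only
ld_p <- c(0.003, 0.210, 0.047, 0.660, 0.012, 0.380, 0.091, 0.540)

# Benjamini-Hochberg FDR adjustment, a stand-in for Storey's q value (QVALUE)
ld_q <- p.adjust(ld_p, method = "BH")

# Locus pairs that remain significant at the 5% false discovery rate
which(ld_q < 0.05)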
Brood relatedness
We used the software Relatedness 5.0.8 (Queller and Goodnight 1989) to estimate the life-for-life coefficient of relatedness (r) among B2 females and between foundresses and B2 females within each nest. Allele frequencies were estimated and calculations performed when weighting nests equally. Standard errors and 95% confidence intervals were obtained by jackknifing over nests (Queller and Goodnight 1989). We separated female B2 brood into groups of full sisters using the computer program Kinship 1.3.1 (Goodnight and Queller 1999). We asked whether females were more likely to be full sisters (r = 0.75) than aunt-niece (r = 0.375), with 100,000 replicates to estimate significance values: any B2 female brood laid by a B1 female will be the niece of a B2 female brood laid by the foundress. Where the principal egg-layer's genotype was available, or could be reconstructed, we assigned male production: males were allocated to the principal egg-layer if they shared one of her two alleles at each locus, and to a secondary female if one or more alleles were not shared.

Fig. 2 The timing and duration of key events for spring-transplanted Inverness (light grey) and native Sussex (dark grey) nest foundresses and their offspring. Solid bars show the periods during which activity was observed, and represent all bees in that cohort. Not all bees within each cohort, represented by a bar, began or finished individual stages on the same day. Bars therefore represent the first and last days on which different individual bees within a cohort were observed. Gaps between bars show periods of bee inactivity.
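The jackknife over nests can be sketched as a leave-one-nest-out loop; this minimal R illustration assumes per-nest relatedness estimates are already in hand (the values below are invented), whereas Relatedness 5.0.8 re-estimates allele frequencies at each deletion.

# Hypothetical per-nest relatedness estimates (illustrative values only)
r_nest <- c(0.78, 0.71, 0.80, 0.69, 0.75, 0.72, 0.74, 0.73)
n <- length(r_nest)

# Leave-one-nest-out estimates of the across-nest mean relatedness
jack <- sapply(seq_len(n), function(i) mean(r_nest[-i]))

r_hat <- mean(r_nest)
jack_se <- sqrt((n - 1) / n * sum((jack - mean(jack))^2))

# Approximate 95% confidence interval from the jackknife standard error
c(estimate = r_hat, lower = r_hat - 1.96 * jack_se, upper = r_hat + 1.96 * jack_se)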
Confirming offspring population of origin
Lasioglossum offspring typically hibernate beneath their natal nest (Sakagami and Fukuda 1972), and B1 offspring of transplanted foundresses were indeed frequently found hibernating beneath their natal nests. In nests where we were unable to match the genotypes of B1 offspring with the foundress, however, we used STRUCTURE (version 2.3.4; Pritchard et al. 2000) to confirm that these hibernating adults were not offspring of native foundresses that might have entered the nest. STRUCTURE divides genotypes into genetic clusters according to HW and LD. Using this method, we could also test whether two of three bees (the third was not genotyped) that initiated new nests during the worker-provisioning phase originated from native or transplanted nests. We assumed admixture and uncorrelated allele frequencies, and specified the number of possible genetic clusters as K = 1-3. We ran three replicates for each K, and specified a burn-in period of 100,000 steps. A single individual from each nest was included, together with additional adults caught at Inverness in spring 2015 but not released at Sussex (n = 21) and additional bees from the Sussex native population (n = 18). We implemented the Evanno method (Evanno et al. 2005) within the program Structure Harvester (Earl and Vonholdt 2012) to determine the best fitting value of K. We further characterised genetic differentiation between our Inverness and Sussex L. calceatum populations by calculating FST using the default settings in Genepop 4.2 (Raymond and Rousset 1995; Rousset 2008).
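For readers unfamiliar with the Evanno method, ΔK at a given K is the mean absolute second-order rate of change of the STRUCTURE log-likelihood across replicate runs, divided by the standard deviation of the log-likelihood at that K; with K = 1-3 it is defined only at K = 2. A minimal R sketch with invented log-likelihoods:

# Rows = three replicate runs, columns = K = 1, 2, 3; log-likelihoods invented
lnP <- matrix(c(-4210, -3905, -3890,
                -4215, -3910, -3885,
                -4208, -3902, -3895),
              nrow = 3, byrow = TRUE)

# |L(K+1) - 2L(K) + L(K-1)| per replicate, computable here only at K = 2
second_diff <- abs(lnP[, 3] - 2 * lnP[, 2] + lnP[, 1])

# Evanno's delta K: mean second difference scaled by the SD of L(K) at K = 2
delta_K2 <- mean(second_diff) / sd(lnP[, 2])
delta_K2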
Data analysis
To determine whether transplanted foundresses and their offspring exhibited plasticity, we examined two characteristics associated with social phenotype: worker behaviour and B1 offspring size. First, we tested whether the observed pattern of behaviour exhibited by offspring of native and transplanted foundresses indicated (i) environmentally mediated plasticity or (ii) fixed genetic differences between the two populations. Under plasticity, the timing of B1 emergence might be a key factor mediating the decision of B1 offspring to become workers. A significant effect of 'source' (Sussex or Inverness) on social phenotype would indicate fixed genetic differences between populations, whereas a significant effect of 'emergence date' would be indicative of plasticity. Spring-transplanted foundresses typically provisioned to produce their B1 offspring later than native foundresses (Fig. 2; Fig. 5a), and therefore could have experienced different environmental conditions that may have influenced offspring social phenotype. To control for one such factor, we included temperature during a foundress's provisioning period in the model. This was calculated as the average of mean daily temperature for each day between a foundress's first and last observed provisioning events, yielding 'foundress provisioning temperature'. We analysed the effect of 'source', 'emergence date' (designated as the date on which the first B1 female emerged at each nest) and 'foundress provisioning temperature' on a nest's 'phenotype' (presence or absence of workers) using a generalised linear model (GLM) with binomial errors. Given that later-provisioned offspring also emerged later (Fig. 5a), we checked for collinearity among explanatory variables (Dormann et al. 2013) by examining variance inflation factor (VIF) scores using the function 'vif' in the R-package 'car' (Fox and Weisberg 2011). We employed a conservative threshold of VIF ≥ 2.5 to identify collinearity (Allison 2012). For all variables, VIF scores were low (< 1.3), indicating no significant collinearity.
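A minimal sketch of this model in R, with invented example data; the column names follow the variables described in the text, and vif() is from the 'car' package cited above.

library(car)  # for vif()

set.seed(1)
# Invented example data; 39 nests as in the study, values are illustrative only
nests <- data.frame(
  phenotype         = rbinom(39, 1, 0.7),        # 1 = eusocial, 0 = solitary
  source            = factor(rep(c("Sussex", "Inverness"), c(29, 10))),
  emergence_date    = round(rnorm(39, 180, 10)), # day of year
  provisioning_temp = rnorm(39, 14, 2)           # mean daily temperature, deg C
)

m1 <- glm(phenotype ~ source + emergence_date + provisioning_temp,
          family = binomial, data = nests)

vif(m1)                    # VIF >= 2.5 would flag collinearity
drop1(m1, test = "Chisq")  # significance of each term on removal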
Second, because eusociality in L. calceatum is associated with B1 offspring (workers) that are significantly smaller than their mothers (caste-size dimorphism; Davison and Field 2016), we tested for caste-size dimorphism and examined whether 'source' affected the size of B1 female offspring produced by native and transplanted foundresses. As there were multiple offspring per nest, we used a generalised linear mixed model (GLMM) to test for effects of 'caste' and 'source' on 'female wing length', with 'nest' included as a random factor. We initially included a caste/source interaction to test whether foundresses from different sources produced offspring of different sizes relative to themselves.
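A sketch of this mixed model with invented data; the original paper does not name the fitting package, so lme4 (a common choice) is assumed here, and because wing length is continuous with normal errors the GLMM reduces to a linear mixed model fitted with lmer().

library(lme4)

set.seed(2)
# Invented example: a foundress plus two B1 females per nest
bees <- data.frame(
  nest   = factor(rep(1:20, each = 3)),
  caste  = factor(rep(c("foundress", "B1", "B1"), 20)),
  source = factor(rep(c("Sussex", "Inverness"), each = 30))
)
# Simulate smaller B1 offspring only in Sussex nests (caste-size dimorphism)
bees$wing <- 4.5 - 0.3 * (bees$caste == "B1") * (bees$source == "Sussex") +
  rnorm(60, 0, 0.1)

# The caste/source interaction asks whether foundresses from the two sources
# produce offspring of different sizes relative to themselves
m_full <- lmer(wing ~ caste * source + (1 | nest), data = bees, REML = FALSE)
m_red  <- lmer(wing ~ caste + source + (1 | nest), data = bees, REML = FALSE)
anova(m_red, m_full)  # likelihood-ratio test of the interaction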
We also tested for differences between native and transplanted bees in the time taken to produce the first B1 offspring. We used a GLM with normal errors to test for effects of 'first foundress provision date' and 'source' on 'development time'. Development time was considerably left skewed, and so we implemented the function powerTransform in the R-package 'car' (Fox and Weisberg 2011) to transform the data. Offspring of transplanted and native foundresses were different in size, and therefore we included 'size' to control for this difference. As a measure of size, we used the mean wing length of marked workers in eusocial nests, and of B1 females excavated at the end of the season from solitary nests.
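A minimal sketch of this transformation step with invented data; powerTransform() estimates a Box-Cox lambda and bcPower() applies it (both from 'car') before the GLM is refitted.

library(car)

set.seed(3)
# Invented covariates and left-skewed development times (days)
d <- data.frame(
  first_prov = round(runif(32, 110, 150)),  # day of year of first provisioning
  source     = factor(sample(c("Sussex", "Inverness"), 32, replace = TRUE)),
  size       = rnorm(32, 4.3, 0.2)          # mean wing length, mm
)
d$dev_time <- 30 + 30 * rbeta(32, 5, 2)     # left-skewed, as in the text

# Estimate a Box-Cox lambda and transform the response before fitting
lam <- coef(powerTransform(d$dev_time))
d$dev_t <- bcPower(d$dev_time, lam)

m_dev <- glm(dev_t ~ first_prov + source + size, data = d)  # normal errors
summary(m_dev)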
We used a GLM with negative binomial errors to examine the relationship between the number of workers in a nest and 'productivity' (the number of immature B2 offspring produced), with 'number of workers' as the single explanatory variable.
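A minimal sketch, assuming glm.nb() from the MASS package (the text does not name the fitting function) and invented counts:

library(MASS)  # for glm.nb()

set.seed(4)
# Invented worker counts and B2 productivity for 17 eusocial nests
prod <- data.frame(n_workers = rpois(17, 3))
prod$b2 <- rnbinom(17, mu = 2 + 1.5 * prod$n_workers, size = 2)

# Negative binomial GLM of productivity on worker number
m_prod <- glm.nb(b2 ~ n_workers, data = prod)
summary(m_prod)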
We used Chi-squared tests with Yates' correction to compare the frequency of nest failure between nests initiated by native versus transplanted foundresses, and to compare the frequency of successful spring nest initiation between autumn and spring-transplanted Inverness bees. Foundresses were considered to have successfully initiated nests once they had started provisioning, and nest failure was indicated by the absence of detected B1 offspring.
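In R this corresponds to chisq.test() on a 2 x 2 table with correct = TRUE (Yates' continuity correction, the default for 2 x 2 tables); the counts below are invented.

# Invented 2 x 2 table: rows = foundress source, columns = nest outcome
failure <- matrix(c(4, 25,
                    6, 10),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("native", "transplanted"),
                                  c("failed", "succeeded")))

# correct = TRUE applies Yates' continuity correction
chisq.test(failure, correct = TRUE)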
For all models, we report significance values when removing terms from the minimal adequate model, after stepwise reduction from the maximal model (Crawley 2013). All analyses were conducted in the R environment (R Development Core Team 2013). Results are presented ± 1 standard error.
All data generated or analysed during this study are included in this published article and its supplementary information files.
Social phenotype
Social phenotype was successfully recorded at 39 nests (n = 29 native, n = 10 spring-transplanted). Nearly all native nests (n = 28/29) were social, with B1 female offspring that began provisioning as workers (mean = 3.1 ± 0.33 B1 workers per nest). In contrast, nine out of ten nests initiated by spring-transplanted Inverness foundresses did not become social (Fig. 4a). The two successful nests initiated by autumn-transplanted foundresses also did not become social: one produced two B1 males and the other a single B1 female that brought one recorded pollen load to the nest before disappearing. Although social phenotype was somewhat ill defined at these nests, neither scenario was observed among the 29 native nests.
Foundresses were alive at the time of B1 female emergence in six of the nine solitary nests initiated by spring-transplanted foundresses, and these foundresses were regularly observed leaving the nest on nectar-collecting trips alongside B1 females. The single transplanted foundress whose nest became social produced three B1 females, all of which began provisioning in the presence of the foundress. Provisioning behaviour by B1 females was clear-cut at this nest because each B1 female was observed provisioning on at least ten occasions over three or more separate days (Yanega 1989). In contrast with the solitary nests, and in common with native eusocial nests, the foundress did not leave her nest after B1 offspring emergence. Among native nests, the number of B2 offspring produced increased linearly with the number of workers in a nest (GLM: χ²1,15 = 4.944, p = 0.026; Fig. 3). The single eusocial nest initiated by a spring-transplanted foundress produced just two B2 offspring, despite having three B1 provisioners. Three native nests containing three workers produced three, five, and ten B2 offspring, respectively (Fig. 3).
Bee size
Native foundresses produced B1 females significantly smaller than themselves. In contrast, transplanted foundresses produced offspring the same size as themselves, and which were larger than native B1 females (GLMM: caste/source interaction χ²1 = 20.302, p < 0.001; Fig. 4b). Two of the three B1 offspring produced in the single eusocial nest initiated by a transplanted foundress were the same size as native workers, while the third was closer in size to the mean for transplanted foundresses' B1 offspring. A B1 female offspring excavated from beneath the single native solitary nest was similar in size to other native B1 females that became workers. All of these offspring are included in the analysis illustrated in Fig. 4b.
Foundress provisioning, offspring emergence date and development time
Native Sussex foundresses were first observed provisioning on 20 April 2015, with an average first provisioning date of 24 April ± 1.4 days (n = 51 native foundresses). However, because the season started later in Inverness than at Sussex in 2015, and because spring transplants could not be carried out until foundresses emerged from hibernation in spring, the first spring-transplanted foundresses did not begin provisioning until 20 May (Fig. 2; Fig. 5a). Nevertheless, two native Sussex foundresses did begin provisioning after spring-transplanted foundresses (Fig. 5a) and had eusocial nests. Moreover, four spring-transplanted foundresses that did not produce workers still began provisioning before the latest-provisioning eusocial foundresses in 2012 and 2013 (Fig. 5b). The two autumn-transplanted foundresses that established nests began provisioning on 21 April and 9 May, respectively.

Fig. 3 Relationship between the number of provisioning workers recorded at a nest and the number of B2 male and female offspring produced. Filled circles represent native nests, and the filled triangle the single social transplanted nest. Points are jittered to reveal multiple overlapping data points. The dashed line shows least-squares regression for native nests.
Time taken to produce the first female offspring did not differ between nests initiated by native and transplanted foundresses, and decreased linearly as the date when a foundress first started provisioning progressed (GLM: F1,30 = 292.58, p < 0.001). Nevertheless, because their nests were generally initiated later in spring, the first female offspring of transplanted foundresses emerged later than those from most native nests (Fig. 5a; Wilcoxon signed rank test: W = 27.5, p < 0.001, n = 10 transplanted, n = 31 native nests), although at an earlier date than almost all native offspring in two previous years (Fig. 5b). However, nests of transplanted bees may not have been solitary simply because B1 offspring emerged later in the season, or because transplanted foundresses provisioned later in the spring: four nests of native foundresses had been initiated later, or had B1 females that emerged later, yet still became eusocial (Fig. 5a). Indeed, even after controlling for the effects of temperature during foundress provisioning, and for offspring emergence date, nests initiated by native foundresses were still significantly more likely to become eusocial than nests initiated by transplanted foundresses (GLM: foundress provisioning temperature χ²1,34 = 0.739, p = 0.390; B1 emergence date χ²1,34 = 1.613, p = 0.204; bee source χ²1,34 = 5.565, p = 0.021). Additionally, B1 female offspring of transplanted foundresses in the present study emerged relatively early compared with native eusocial B1 offspring from two previous years in the same nest aggregation at Sussex (Fig. 5b; Davison and Field 2016).
Brood genotyping
Prior to q value correction, three locus pairs were weakly significant for LD. After q value correction, however, there was no significant LD within or between the Sussex and Inverness populations. One locus deviated from HW at both Sussex and Inverness (LMA53, see Table S2). Genetic diversity was variable among loci, with a mean expected heterozygosity of 0.66 at Sussex and 0.54 at Inverness (Table S2).
We successfully genotyped B2 offspring from 22 nests (n = 15 native, n = 7 transplanted). Native nests contained a mean of 5.7 genotyped B2 offspring per nest (x = 3.7 ± 0.67 females and x = 1.9 ± 0.41 males). Five nests also contained a live foundress and five contained live workers at the time of excavation. Genetic data confirm our behavioural observations that L. calceatum exhibits eusociality at Sussex: average relatedness among B2 female brood within nests was 0.74 ± 0.03 (mean ± SE; 95% CI [0.67; 0.81]) (see Table 2 for a breakdown by nest). In four of five nests containing a live foundress at excavation, the foundress monopolised most or all B2 reproduction ( Fig. 6; Table 2). Of native nests containing multiple B2 female brood (n = 13/15), approximately half (n = 6/13) contained a single B2 female which was not sister to the remaining female brood (Fig. 6). Six of 12 nests in which it was possible to assign males contained males not laid by the principal egg-layer ( Fig. 6; Table S3). We found no evidence of multiple mating by native foundresses, suggesting that Sussex L. calceatum are monandrous. However, we found evidence of at least one alien bee reproducing within a single nest (see nest 58 in Table 2).
We successfully genotyped the foundress, a marked B1 female and two female B2 offspring from the single eusocial nest initiated by a spring-transplanted Inverness foundress. The foundress was the mother of the B1 female. However, the two female B2 offspring were not sisters, and our data suggest two alternative possibilities for parentage: (i) the foundress mated multiply and laid both, or (ii) a B1 female laid one. In either case, bees in this nest exhibited eusocial behaviour not previously recorded at Inverness (Davison and Field 2016). We also genotyped female B1 offspring from six solitary nests of transplanted foundresses (mean = 1.8 per nest). In the nests where we genotyped the foundress, she matched as mother to most or all of the adult females excavated from her nest. The population of origin for B1 females in the remaining four nests was determined by the STRUCTURE analysis (see below). The STRUCTURE analysis strongly supported the existence of the two known populations (K = 2, Sussex and Inverness), and assigned all bees of known origin to the correct cluster (Fig. S1). The pairwise FST value for our Inverness and Sussex populations was 0.286, indicating considerable genetic differentiation. All B1 offspring excavated from beneath the nests of transplanted foundresses were assigned to the cluster containing bees from Inverness. The two genotyped bees that initiated new nests in the summer were assigned to the Sussex population. Independent summer nest founding has not previously been reported (Davison and Field 2016), and therefore represents the discovery of a new behaviour by L. calceatum at Sussex.

Fig. 4 a Proportion of native nests at Sussex (Sx) and nests initiated by spring-transplanted foundresses from Inverness (Iv) that expressed social or solitary behaviour. b Mean wing lengths (mm) of native (Sussex) and transplanted (Inverness) foundresses and their B1 female offspring (± 1SE). Foundresses: n = 18 from Sussex, n = 5 from Inverness. B1 females: n = 51 from Sussex, n = 13 from Inverness. Significant caste/source interaction χ²1 = 20.302, p < 0.001.
Discussion
Few studies have utilised field transplants to address the mechanisms underlying socially polymorphic behaviour (see also Cronin 2001; Baglione et al. 2002). We transplanted the socially polymorphic sweat bee Lasioglossum calceatum from a non-eusocial population at Inverness, in the far north of the UK, 800 km south to a predominantly eusocial population on the University of Sussex campus, in the far south of the UK (Fig. 1). Most native Sussex bees exhibited eusociality, whereas nine of ten transplanted Inverness bees and their offspring exhibited solitary behaviour. Bearing in mind that the sample size is small, we do not wish to overemphasise these precise figures, but our best estimate is that 10% of Inverness bees are capable of expressing eusociality. Our results provide the first field-based experimental evidence that inter-population differences in social phenotype might predominantly reflect genetic differentiation, and provide genetic confirmation that L. calceatum is truly eusocial in the southern UK.

Fig. 5 Social phenotype and the relationship between the date a foundress was first observed to provision in spring and the emergence date of her first female offspring. Panels show a data from the present study only and b data from the present study together with data gathered in the same way by Davison and Field (2016).
In nine of ten cases, neither spring-transplanted foundresses nor their offspring showed evidence of social plasticity: spring-transplanted foundresses provisioned large B1 female offspring that did not attempt to become workers. By contrast, native foundresses produced small B1 females that typically became workers (Fig. 4a, b). We confirmed that offspring of transplanted foundresses did not enter hibernation simply because their mothers had died (e.g. Packer 1990), since transplanted foundresses were still alive in seven of ten nests at the time of offspring emergence. One possibility, however, is that because spring-transplanted foundresses developed and overwintered at the Inverness source site, solitary behaviour represents plasticity in response to cues experienced by the foundress prior to transplantation (Thibert-Plante and Hendry 2011). Maternal effects may then influence offspring social phenotype, for example through nutrition provided by mothers (e.g. Brand and Chapuisat 2012; Kapheim et al. 2015b; Berens et al. 2015). However, relatively large B1 females can still become workers in other socially polymorphic sweat bees, and small B1 females can enter hibernation if they emerge late in the season (Hirata and Higashi 2008). This suggests that in species exhibiting plasticity, any nutrition-mediated maternal effects can be overridden by environmental cues experienced by emerging offspring. Furthermore, although we could not be certain of social phenotype, two autumn-transplanted foundresses successfully founded nests that did not become social: one produced two B1 males, and the other a single female that provisioned once before disappearing. Neither scenario was observed among native nests, 28 out of 29 of which became eusocial. These foundresses experienced overwintering conditions at Sussex, yet neither nest became social as expected if social phenotype was plastic, which together with the spring-transplanted nest that became social hints that overwintering conditions alone are unlikely to explain our results. We note the possibility that the male-producing transplanted foundress had not mated prior to capture in the previous autumn. Another possibility is that emerging later than most native B1 females may have increased the propensity for B1 females of transplanted foundresses to enter hibernation instead of becoming workers (Fig. 5a). Nevertheless, they still emerged earlier in the season than almost all native B1 workers in two previous years (Fig. 5b; Davison and Field 2016), and our analysis showed that neither temperature experienced by foundresses during provisioning nor offspring emergence date successfully explained social phenotype. Moreover, although spring-transplanted foundresses tended to begin provisioning later than Sussex foundresses in this study, four spring-transplanted foundresses that did not produce workers still began provisioning before the latest-provisioning eusocial foundresses in 2012 and 2013 (Fig. 5b). Together this suggests that date of offspring emergence per se might not be a critical factor influencing the social phenotype of L. calceatum. However, we cannot discount the possibility that other unmeasured cues correlated with later foundress provisioning/offspring emergence could have influenced social phenotype.
Plasticity and its loss
Our results show that although most were solitary, transplanted Inverness bees can still express eusociality (see also Plateaux-Quénu et al. 2000): all three B1 female offspring of one spring-transplanted foundress began provisioning the natal nest, behaviour never previously observed at Inverness (Davison and Field 2016). Moreover, once these offspring had emerged, the foundress did not leave the nest, thus expressing the same behaviour as eusocial foundresses native to the Sussex site; and two of the three provisioning offspring were among the smallest produced by transplanted foundresses. Our limited data hint that sociality expressed by Inverness bees might be inefficient: despite the nest containing three provisioning B1 females (mean for native nests = 3.1 ± 0.33), productivity at this nest was lower than at any native nest that successfully produced B2 offspring (Fig. 3).
Phenotypic plasticity can be lost via genetic drift and subsequent genetic assimilation once environmental conditions become predictable (Masel et al. 2007; Pfennig et al. 2010), and when circumstances in which the alternative phenotype is expressed no longer arise (Sikkink et al. 2014; Smith et al. 2015; Cini et al. 2015). At Inverness, B1 females always enter hibernation, whereas at Sussex they may either become workers or enter hibernation (Davison and Field 2016). Plasticity could be lost in solitary populations where emerging offspring only ever receive cues associated with entering hibernation, such as reaching adulthood late in the season (Hirata and Higashi 2008) or mating soon after reaching adulthood (Yanega 1989, 1997; but see Lucas and Field 2013). Therefore, at Inverness, loci regulating eusocial behaviour will not be exposed to selection because female offspring always enter hibernation. This could lead to genetic changes in the response threshold at which eusociality is expressed, and to its eventual loss from the population (Abouheif and Wray 2002; Suzuki and Nijhout 2006; Sikkink et al. 2014). Through this process, for example, the threshold at which bees from Inverness express eusociality might be higher than for native Sussex bees. In the UK, eusociality in sweat bees is restricted to the south (Falk 2015; Davison and Field 2016), and it is possible that L. calceatum from our Inverness population might exhibit greater plasticity if transplanted further south in Europe to sites where environmental cues for sociality are more extreme (Sikkink et al. 2014).
Local adaptation requires mechanisms that minimise gene flow between eusocial and solitary populations (Lenormand 2002). Without physical barriers to gene flow, one possibility could be differences in the timing of offspring production (Soucy and Danforth 2002;Quintero et al. 2014;Weis 2015). In sympatry, eusocial nests produce reproductive offspring in the second brood, later than the first brood reproductives produced in solitary nests. However, assortative mating may not occur because the first brood in eusocial nests often contains males, together with some females which may mate and enter hibernation without becoming workers (Plateaux-Quénu 1992; Davison and Field 2016; PJD, personal observation).
Eusociality in L. calceatum
We confirmed that native L. calceatum exhibits eusociality at Sussex. Relatedness among B2 female brood within nests was high (r = 0.74), and foundresses surviving to the end of the season tended to monopolise reproduction. In common with other eusocial sweat bees, we also found no evidence that Sussex foundresses mated multiply (Crozier et al. 1987; Packer and Owen 1994; Mueller et al. 1994; but see Soro et al. 2009), consistent with the hypothesis that monandry might help to facilitate the evolution of eusociality (Boomsma 2007; Hughes et al. 2008). Our result contrasts with a recent study of H. scabiosae Rossi, where relatedness among B2 females was considerably lower due to high rates of foundress turnover and frequent drifting between nests (Brand and Chapuisat 2016). We were unable to sample workers comprehensively; however, we did detect at least one likely case of drifting in which an alien B1 female produced a B2 female offspring. We documented no cases of natal workers laying B2 female brood, consistent with the idea that B1 females will reproduce in the presence of the foundress only if she is not their own mother (Paxton et al. 2002).
Conclusion
The possibility that differences in social phenotype between eusocial and solitary populations of L. calceatum primarily reflect genetic differentiation will be of special interest for future studies investigating the genomics of sociality (e.g. Kocher et al. 2013). Few studies have examined the extent to which social polymorphism has promoted population differentiation (e.g. see Soucy and Danforth 2002; Zayed and Packer 2002; Soro et al. 2010), or considered whether polymorphism could facilitate ecological speciation (Rundle and Nosil 2005; Thibert-Plante and Hendry 2011). It would be interesting to transplant bees from a less northerly solitary population, where selection for plasticity may have persisted and bees may reveal a lower threshold for the expression of eusociality. Furthermore, because both eusocial and solitary nesting has been recorded at Sussex, and eusocial foundresses routinely pass through a solitary phase in spring prior to worker emergence, bees from Sussex may be more predisposed to exhibit plasticity if transplanted to Inverness. In general, the cornucopia of social variation exhibited by sweat bees demands that species are studied in detail throughout their geographic range, and in a variety of environmental contexts (Wcislo and Danforth 1997; Wcislo 1997).
miRNA expression profile changes in the peripheral blood of monozygotic discordant twins for epithelial ovarian carcinoma: potential new biomarkers for early diagnosis and prognosis of ovarian carcinoma
Background
Ovarian cancer is the second most common gynecologic cancer, has a high mortality rate, and is generally diagnosed at advanced stages; the 5-year disease-free survival is below 40%. MicroRNAs, a subset of non-coding RNA molecules, regulate translation at the post-transcriptional level by binding to specific mRNAs, thereby promoting or degrading target oncogene or tumor suppressor transcripts. Abnormal miRNA expression has been found in numerous human cancers, including ovarian cancer. miRNAs derived from peripheral blood samples can therefore serve as markers in the diagnosis, treatment and prognosis of ovarian cancer. We aimed to find biological markers for the early diagnosis of ovarian cancer by investigating the miRNAs of monozygotic twins discordant for ovarian cancer who carry a BRCA1 gene mutation, together with those of their high-risk healthy family members.

Methods
The study was conducted on monozygotic twins discordant for ovarian cancer, and the liquid biopsy exploration of miRNAs was performed on mononuclear cells isolated from peripheral blood samples. miRNA expression profile changes were identified using microarray analysis. The miRNA isolation procedure was performed on lymphocytes in accordance with the kit protocol, and the presence and quality of the isolated miRNAs were screened by electrophoresis. Raw data were log-transformed and analysed by identifying threshold, normalization, correlation, mean and median values. Target proteins were detected for each miRNA using different algorithms.

Results
In the comparison of the monozygotic twins discordant for epithelial ovarian carcinoma, upregulation of 4 miRNAs (miR-6131, miR-1305, miR-197-3p, miR-3651) and downregulation of 4 miRNAs (miR-3135b, miR-4430, miR-664b-5p, miR-766-3p) were found to be statistically significant.

Conclusions
The 99 miRNAs detected out of 2549 miRNAs might, with complementary studies, be used in the clinic as new biological indicators in the diagnosis and follow-up of epithelial ovarian cancer. The miRNA expression profiles identified were statistically significant for evaluating ovarian cancer etiology, BRCA1 mutation status, and ovarian cancer risk. The miRNAs detected between the monozygotic twins, whose association with ovarian cancer is emphasized in our study, need to be validated in wider cohorts including ovarian cancer patients and healthy individuals.
Background
Ovarian cancer is a significant cause of mortality among gynecologic cancers and one of the leading causes of cancer-associated death [1]. In Turkey, as worldwide, ovarian cancer is the 7th most common type of cancer in women. Globocan 2018 data show that each year more than 295,000 women are diagnosed with ovarian cancer (OC) worldwide, and approximately 185,000 women die from it. The Globocan 2018 data for Turkey show that annually 3729 women are diagnosed with ovarian cancer, and 2191 women die from this malignancy; the 5-year survival rate was given as 23.8%. These data reveal that ovarian cancer is an important contributor to gynecologic cancer-associated mortality [2]. Epithelial ovarian cancers (EOC), originating from the ovarian surface epithelium, constitute approximately 90% of ovarian malignancies [3]. The majority (70%) of EOC patients are diagnosed at advanced stages (Stage III and IV), and the 5-year disease-free survival rate is below 40% [4]. The standard treatment for newly diagnosed ovarian cancer is the combination of cytoreductive surgery and platin-based chemotherapy. Significant advances in radical surgery and chemotherapy strategies have improved clinical outcomes, but unfortunately, no progress has been made with relapse and treatment resistance [5]. Ninety percent of ovarian cancers occur sporadically in the population, whereas hereditary forms account for 10% of ovarian cancer patients. BRCA1 and BRCA2 are the genes most commonly associated with breast-ovarian cancer syndrome. Both BRCA1 and BRCA2 have roles in the control of genomic stability, the cell cycle, and apoptosis. Mutations in these genes impair DNA repair and therefore result in the accumulation of mutations in the cell. By the age of 80 years, women with a BRCA1 gene mutation have a 72% risk of breast cancer and a 44% risk of ovarian cancer, while women with a BRCA2 gene mutation have a 69% risk of breast cancer and a 17% risk of ovarian cancer [6]. Twin studies became important in genetics by the end of the nineteenth century. Genetic and epidemiologic studies with monozygotic twins were accepted as highly useful investigation models in past decades and are still used today [7]. When similarity for a disease or a quantitative trait between monozygotic and dizygotic twins is compared, population-level sources of variation are excluded, and it is therefore easier to identify etiological differences via twin studies. Because affected siblings and dizygotic twins share approximately 50% of their genes, phenotypic differences between twins are known to be associated with genetic variation. In addition, diversity may be revealed with a very limited patient population. Therefore, the results of twin studies can be applied to the population and can make valuable contributions to genetic studies. Monozygotic twins are genetically identical and generally expected to be concordant for congenital malformations, chromosomal abnormalities and Mendelian disorders. There are numerous studies conducted on discordant monozygotic twins revealing the genetic contribution to disease [8]. Therefore, investigating genetic variability in monozygotic twins is highly important, and much human genetics research focuses on finding genetic variability in discordant monozygotic twins.
Phenotypically discordant monozygotic twins are used as model systems for identifying the variables underlying the pathogenesis of a disease. The most striking example is a study of monozygotic twins conducted in Canada, which provided evidence that multiple sclerosis (MS) has a genetic component [9]. MicroRNAs are a subset of non-coding RNAs, generally single-stranded and 19-24 nucleotides in length, that are not translated into protein and that act in post-transcriptional regulation or suppression of translation of their target mRNAs [10,11]. Regulatory roles of miRNAs have been demonstrated in tumorigenesis, cell differentiation, proliferation and apoptosis [12-15]. miRNA genes are known to be located at chromosomal breakpoints. These DNA breaks cause chromosomal abnormalities frequently associated with cancer susceptibility and tumor development [16-18]. Noninvasive biological indicators have been used to monitor treatment resistance in ovarian cancer; the most common of these are cancer antigen-125 (CA-125) and cancer antigen-15-3 (CA-15-3). These indicators can be used in the follow-up of treatment response in diagnosed patients but cannot be used for early diagnosis or for distinguishing malignant disease [19]. Therefore, patient-specific therapeutic agents are needed that may be used in target-specific therapies, in the early diagnosis of ovarian cancer, in assessing the efficacy of therapy, and in the follow-up period. Thus, studies investigating target molecules and biological indicators are required to enable earlier diagnosis and the development of better therapy options. Differential expression of miRNAs such as miR-200, miR-141, miR-125b, miR-222-3p or the let-7 family has been shown in studies of ovarian cancer patients [20]. However, these miRNAs are not yet available for use as biomarkers in ovarian cancer. In order to clearly define the role of miRNAs in the pathogenesis of ovarian cancer, we planned to investigate BRCA-mutant monozygotic twins with the same genetic profile but discordant for ovarian malignant transformation. In this study, 2549 miRNAs thought to have potential as biological indicators were studied in blood samples of both the discordant monozygotic twins and BRCA wild-type healthy siblings.
Patient recruitment
The peripheral blood lymphocytes of monozygotic twins discordant for ovarian cancer and of healthy individuals in the same family were used in the study. The patient diagnosed with ovarian cancer and all family members had applied to the Cancer Genetics Clinic of the Oncology Institute of Istanbul University for BRCA (breast cancer susceptibility) gene testing and were examined for BRCA gene mutations. All family members in the study were high-risk individuals with Hereditary Breast and Ovarian Cancer (HBOC) syndrome, and participants were given BR codes according to patient file number. The monozygotic ovarian cancer patient, her healthy monozygotic twin, three healthy sisters, and one niece were found to carry the BRCA1 gene mutation c.5266dupC p.Gln1756Profs*74 (rs397507247) on exon 20. The patient's brother and daughter were negative for BRCA1/BRCA2 gene mutations. In this study, lymphocyte cells separated from the peripheral blood of a total of 8 cases, comprising the young ovarian cancer patient and her healthy monozygotic twin, the patient's daughter, 2 elder sisters, a younger sister, a nephew and a brother, were examined by the miRNA microarray method. The pedigree of the family included in the study and the hierarchical cluster analysis via the Euclidean method are shown in Fig. 1.
The study was approved by the Ethics Committee of the Istanbul Faculty of Medicine. Following Institutional Ethics Committee approval, informed consents were obtained from all participants before enrollment into the study (Ethics Committee Approval Number: 2016/4).
Lymphocyte and miRNA isolation
Ficoll (Sigma-Aldrich, Darmstadt, Germany) density gradient was used to separate white blood cells (mononuclear cells) from other blood components. The miRNA isolation procedure was performed on the lymphocytes in accordance with the kit protocol, using the miRNeasy Mini Kit (Qiagen, cat. No./ID: 217004). The steps of the protocol were as follows: 700 μL QIAzol solution was added to the cells stored in the nitrogen tank. Cell lysis was achieved by vortex mixing. For complete nucleoprotein dissociation, samples were kept at 24°C room temperature for 5 min. After adding 140 μL chloroform, the tubes were shaken and mixed by hand. The tubes were incubated for 2-3 min at 24°C room temperature, then centrifuged for 15 min at +4°C and 12,000 g. The supernatant formed after centrifugation was transferred to a collection tube using a pipette and mixed by vortex with 525 μL of 100% ethanol. Seven hundred microliters of the resulting mixture was transferred to an RNeasy MinElute spin column placed on a 2 mL collection tube. The tubes were centrifuged for 15 s at 8000 g at 24°C room temperature. Seven hundred microliters of RWT buffer was added to the spin columns, and the columns were washed by centrifuging at 8000 g for 15 s.
The centrifugation step was repeated twice consecutively with 500 μL RPE buffer added to the columns. The columns, placed into clean 2 mL tubes, were dried by centrifuging for 1 min at maximum speed. The columns were then placed in 1.5 mL sterile tubes, 50 μL distilled water was added, and the miRNAs were collected by centrifuging at 8000 g for 1 min.
The quality control of the miRNAs
The presence and quality of the isolated miRNAs were screened by electrophoresis at 150 V on a 1.5% agarose gel. Then, the purity and concentrations of the miRNAs were measured on a Thermo Scientific NanoDrop 2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The miRNA purity for each person was obtained from the NanoDrop measurements by comparing the spectrophotometric readings at the 260 nm and 280 nm wavelengths. The 260/280 nm ratio indicates the purity of the samples; therefore, samples within the ideal interval of 1.8 to 2.2 for RNA measurements were included in the study. The purity of the miRNAs was also evaluated using a Bioanalyser (2100 Bioanalyser, Agilent Technologies, Santa Clara, CA, USA) with the Agilent RNA 6000 Nano Kit (Agilent Technologies, Santa Clara, CA, USA) to confirm whether the miRNAs were appropriate and sufficient for microarray analysis. The evaluated sample concentrations and results were analyzed: samples with RNA concentrations of at least 100 ng/μL, an rRNA ratio over 1, and RNA integrity number (RIN) values between 7 and 9 were considered appropriate for the array study.
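The sample-inclusion rules above amount to a simple filter; a minimal R sketch with invented QC metrics (sample names, ratios, concentrations and RIN values are all hypothetical):

# Invented QC metrics for eight samples; thresholds follow the text
qc <- data.frame(
  sample  = paste0("BR", 1:8),
  a260280 = c(1.95, 2.05, 1.70, 1.88, 2.10, 1.92, 2.30, 1.99),  # purity ratio
  conc    = c(140, 95, 180, 120, 160, 210, 130, 150),           # ng/uL
  rin     = c(8.1, 7.5, 6.8, 8.9, 7.2, 8.4, 9.0, 7.8)           # integrity
)

# Keep only samples meeting all criteria for the array study
keep <- subset(qc, a260280 >= 1.8 & a260280 <= 2.2 &
                   conc >= 100 & rin >= 7 & rin <= 9)
keep$sample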
Microarray trial protocol
The microarray protocol comprised preparing the Spike-in solution, sample labelling, hybridization, sample dephosphorylation, sample denaturation, sample ligation, hybridization of the samples, slide loading, preparation of the hybridization unit, and elution and scanning of slides. Slide scanning was performed using the Agilent Microarray Scanner (Agilent Microarray Scanner with SureScan High Resolution Technology, Agilent Technologies, Santa Clara, CA, USA) on the SurePrint G3 Human miRNA Microarray, Release 21.0, 8x60K (Agilent, Inc., Santa Clara, CA) platform, using the Agilent Technologies G2600D scanning protocol. The TIFF (Tagged Image File Format) files obtained after the scanning procedure were analysed using the Agilent Feature Extraction v11.0.1.1 program.
With this analysis program, the success and quality of each stage of the experimental process were monitored and evaluated. Bioinformatic analysis was then performed.
Data analysis
Raw data were log-transformed and analysed by identifying threshold, normalization, correlation, mean and median values. The miRNAs showing different expression profiles among the samples were then filtered. Using fold-change rates and independent two-sample t tests, the possible differences between the compared groups were evaluated. All evaluations used cut-off values of fold change |FC| ≥ 2 and p-value < 0.05. Hierarchical cluster analysis was performed using the Euclidean distance (Fig. 1) and the complete linkage clustering method. Experimental error was controlled and the false-positive rate assessed using the Hochberg method.
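A minimal R sketch of this filtering and clustering pipeline on an invented log2 expression matrix; the Hochberg adjustment is applied with base R's p.adjust(), and the |FC| ≥ 2 cut-off corresponds to |log2FC| ≥ 1.

set.seed(5)
# Invented log2 expression matrix: 200 miRNAs x 8 samples (4 per group)
expr <- matrix(rnorm(200 * 8), nrow = 200,
               dimnames = list(paste0("miR_", 1:200), paste0("S", 1:8)))
grp <- rep(c("A", "B"), each = 4)

# Per-miRNA log2 fold change and independent two-sample t test
log2fc <- rowMeans(expr[, grp == "A"]) - rowMeans(expr[, grp == "B"])
pvals  <- apply(expr, 1, function(x) t.test(x[grp == "A"], x[grp == "B"])$p.value)

# Hochberg correction, then the |FC| >= 2 (|log2FC| >= 1), p < 0.05 filter
padj <- p.adjust(pvals, method = "hochberg")
hits <- which(abs(log2fc) >= 1 & pvals < 0.05)

# Hierarchical clustering of samples: Euclidean distance, complete linkage
hc <- hclust(dist(t(expr)), method = "complete")
plot(hc)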
The predicted target genes for each miRNA were confirmed by both algorithms, and the miRNA-target relations were also checked against experimentally confirmed interactions in the miRTarBase 7.0 database (https://mirtarbase.mbc.nctu.edu.tw/php/index.php).
Comparison groups
In the study, miRNA analysis was performed at the genome level in cases with and without ovarian cancer and with and without the mutation. The miRNA data were evaluated by comparing different groups in order to investigate the effect of the BRCA mutation on ovarian malignancy development and to determine the miRNAs that may be important in ovarian cancer pathogenesis. In Group 1, the monozygotic twins discordant for ovarian cancer were compared in order to find the effects of miRNAs on the formation of ovarian cancer. In Group 2, family members with the BRCA1 mutation were compared with family members without the BRCA1 mutation to identify changes in miRNA expression levels according to BRCA positivity. In Group 3, the monozygotic ovarian cancer patient carrying the BRCA1 mutation was compared with the other healthy mutation-carrier family members to investigate the effects of both ovarian cancer development and BRCA positivity on miRNA expression levels. In Group 4, all family members were compared with the ovarian cancer monozygotic twin in order to find the miRNAs that might be important in predisposition to ovarian cancer. The comparison groups are also shown in Table 1.
Results
A total of 2549 miRNAs were compared for differential expression between the groups. The raw data obtained after the experimental studies were filtered before the between-group comparisons. miRNAs whose expression levels were upregulated or downregulated more than two-fold (FC > 2) with a p value smaller than 0.05 (p < 0.05) were considered in the evaluation, and the comparisons between the groups were performed based on these values. All comparisons were evaluated with respect to ovarian cancer etiology, BRCA1 mutation carriage and ovarian cancer risk. Hierarchical cluster analysis of the expression of 99 miRNAs shows a sharp separation of upregulated (yellow) from downregulated (blue) miRNAs in Fig. 2. Of the 2549 miRNAs, 17 were found to be statistically different after the comparison of the phenotypically discordant monozygotic twin siblings. Six miRNAs (miR-1273g-3p, miR-1305, miR-197-3p, miR-3651, miR-6131, and miR-92a-3p) were found to be upregulated, and the other 11 miRNAs (let-7i-5p, miR-125a-5p, miR-15b-5p, miR-22-3p, miR-3135b, miR-320d, miR-342-3p, miR-4430, miR-451a, miR-664b-5p, and miR-766-3p) were found to be downregulated. After the bioinformatic analysis, the 17 statistically significant upregulated and downregulated miRNAs and their target molecules are given in Table 2 and Fig. 3. miRNA levels were then compared within Group 2 in order to determine the effect of the BRCA1 gene mutation. Group 2 consisted of the comparison of the miRNA expression profiles of family members carrying the BRCA1 gene mutation with those of individuals not carrying a BRCA1/2 gene mutation. After the comparisons, the miRNAs downregulated and upregulated in relation to BRCA1 gene mutation carriage were determined. The expression of a total of 6 miRNAs associated with BRCA1 gene mutation carriage (miR-4449, miR-4653-3p, miR-486-5p, miR-5739, miR-6165, and miR-874-3p) was upregulated, and the expression of a total of 19 miRNAs (miR-126-3p, miR-320a, miR-320b, miR-320c, miR-320d, miR-320e, miR-324-3p, miR-3656, miR-4284, miR-4428, miR-4516, miR-4741, miR-484, miR-564, miR-6089, miR-6869-5p, miR-6891-5p, miR-7107-5p and miR-7847-3p) was downregulated. After the bioinformatic analysis, the 25 statistically significant upregulated and downregulated miRNAs and their target molecules are shown in Table 3 and Fig. 4.
miRNA levels were compared within Group 3 in order to determine the relationship between ovarian cancer development and BRCA positivity. Group 3 consisted of the comparison of the miRNAs of the BRCA1-positive ovarian cancer patient with those of all other BRCA1-positive healthy individuals.
Discussion
Women are often diagnosed with ovarian cancer at an advanced stage due to the limited number of biological markers for ovarian cancer. Although the existing ovarian cancer biomarkers cancer antigen-125 and cancer antigen-15-3 (CA125, CA15-3) are sensitive in the follow-up of diagnosed gynecological cancers, they are less sensitive in the diagnosis of early-stage gynecological cancers and in separating malignant tumor formations from benign formations [19]. Therefore, to understand the underlying mechanisms of ovarian cancer, to explore targeted drugs and to develop new treatment protocols for ovarian malignancy, it is necessary to reveal the significant genetic changes involved. Genetic and epidemiologic studies conducted on monozygotic twins are known to provide accurate and direct information about the interaction of genes and environment with the disease occurrence mechanism [7]. Changes in tumor-related genes, such as miRNA expression levels, among monozygotic twins provide information on the etiology of disease and may serve as biological indicators for identifying early-stage disease and for following up the prognosis. We aimed to identify non-invasive biological markers that may be used in the early diagnosis of ovarian cancer by comparing the miRNAs in the peripheral blood of monozygotic twin siblings discordant for ovarian cancer with the miRNA molecules of the other healthy members of the family. This approach may cause less bias than selecting controls from the general population. Ninety-nine different miRNA molecules presented in this study were detected after the comparison of the monozygotic twin siblings discordant for ovarian cancer with the other healthy individuals. Seventeen different miRNAs that could be used for the early diagnosis and prognosis of ovarian cancer were found between the monozygotic twin siblings discordant for ovarian cancer in our study. The association between 8 of these 17 miRNAs and ovarian carcinoma is reported for the first time in this study. Due to the high number of newly detected miRNAs in our study, the discussion and comparison were made only for the candidate miRNAs. miR-197-3p, miR-1305, miR-6131, miR-3651, miR-3135b, miR-4430, miR-664b-5p and miR-766-3p have not previously been shown to be associated with ovarian cancer in the literature, but a limited number of studies have suggested their association with other cancers. Wang et al. found an elevated level of miR-197-3p, as we did. In that study, the upregulated miR-197-3p expression level was shown to promote cellular invasion and metastasis in bladder cancer. The researchers reported that the LINC00312 gene was responsible for the invasion and metastasis mechanisms and that this gene inhibited cellular migration and invasion by suppressing miR-197-3p expression. Similar results were detected in thyroid cancer in the study of Liu et al. [21,22]. Jin et al. reported that an increased expression level of miR-1305 accelerated the cell cycle G1/S transition in pluripotent stem cells, in addition to causing cellular differentiation [23]. The reduced expression levels of miRNA-125a-5p and let-7i-5p found in our study parallel other studies in the literature. Langhe et al. suggested that let-7i-5p might be used as a diagnostic indicator in ovarian cancer [24].
In the in vitro study of Qin et al. in human cervical carcinomas, miR-125a-5p expression was upregulated to inhibit cancer proliferation and migration, and the upregulated miR-125a-5p expression level was demonstrated to inhibit cervical cancer metastasis in cell lines [25]. Of the 99 miRNAs detected after the comparisons between the monozygotic twin with ovarian cancer and the other healthy siblings, 82 were statistically significant and are discussed here. We found an increased expression level of miR-4653-3p, which showed a similar expression level in primary breast cancer tumors [26]. According to Zhong et al., a high expression level of miR-4653-3p was demonstrated to cause tamoxifen resistance by affecting FRS2. They suggested that the upregulated miR-4653-3p level could possibly be used as a therapeutic agent and would be effective in eliminating tamoxifen resistance [26]. Ma et al. reported that an upregulated miR-486-5p expression level was associated with the occurrence and development of estrogen receptor-positive ovarian cancer and acted through OLFM4 expression [27]. We also found upregulated miR-486-5p expression in our study, suggesting that ovarian cancer may occur through the same pathway. In our study, miR-126-3p was found to be decreased in ovarian cancer.
In parallel with our study, Fiala et al. reported that the miR-126-3p expression level was important in angiogenesis, tumor growth, invasion and vascular inflammation, and that it shortened progression-free survival and overall survival in patients with metastatic colorectal cancer [28]. The decreased miR-126-3p expression level may show the same effect in ovarian cancer. In this study, miR-320b, miR-320c, miR-320d and miR-320e, miRNAs belonging to the miR-320 family, had low expression levels in ovarian cancer; members of this family have been reported to be downregulated in colorectal adenomas and carcinomas by different researchers, and decreased expression of the miR-320 family has been shown to activate cell proliferation [29,30]. Kuo et al. reported that increased expression of the miR-324 family suppresses the growth and invasion of cells in breast cancer and suppresses cell proliferation in colorectal cancer, and it has been shown to have a tumor suppressor effect in terms of cancer prevention [30]. We detected a decreased miR-324-3p expression level in our study. This result is consistent with the tumor suppressor effect of this miRNA, and we suggest that it might have a role in the development of ovarian cancer. Yang et al. showed that increased expression of miR-4284 in human glioblastoma cancer stem cells reduced cell viability and induced apoptosis via the JNK/AP-1 signaling pathway [31], and in another study it was argued that decreased expression of miR-4284 causes cervical cancer [32]. In our study, it is thought that the decreased expression level of miR-4284 may trigger the formation of epithelial ovarian cancer via the same signaling pathway shown in glioblastoma and cervical cancers.
In the in vivo study of Chowdhari et al., an upregulated miR-4516 expression level caused tumor suppression by changing the regulation of STAT3, the target molecule of miR-4516, and by decreasing vascular endothelial growth factor (VEGF), the target of STAT3 [33]. However, we found a downregulated miR-4516 expression level in our study. Contrary to those results, the downregulated miR-4516 expression level in our study is suggested to result in a VEGF increase and might be effective in the development of malignant ovarian formations. In a study with cervical cancer cells, Hu et al. suggested that miR-484 targets ZEB1 and SMAD2, functions as a tumor suppressor, suppresses cell proliferation and epithelial-mesenchymal transition, and therefore might be a biological indicator for cervical cancer [34]. In our study, it is thought that the decreased miR-484 expression level detected in ovarian cancer promotes ovarian cancer development by increasing cell proliferation and that it may be a novel biological marker in ovarian cancer. Mutlu et al. reported in their study on breast cancer cell lines that miR-564 directly affects the PI3K and MAPK pathways through the AKT2, GNA12, GYS1 and SRF molecules, arrests the cell cycle in the G1 phase and through this pathway inhibits the proliferation and invasion of breast cancer cells [35]. Decreased miR-564 expression was detected in ovarian cancer in our study. In parallel with the results in the literature, this suggests that reduced levels of this miRNA molecule increase cell proliferation and invasion in ovarian cancer and stimulate the development of ovarian cancer. In parallel with our study, increased miR-1260 expression levels have been found in colorectal cancers [36], non-small cell lung cancer [37] and kidney cancers [38]; this increase was associated with lymph node metastasis and venous invasion, and it was emphasized that it might be evaluated as a potential prognostic biological indicator in these cancers. The miR-5100 found upregulated in our study has been reported to promote tumor progression in lung cancer by targeting the Rab6 molecule [39]. miR-5100 may have the same effect in ovarian cancer. The lower miR-1225-5p expression level detected in our study has been shown to be associated with a more aggressive phenotype and to correlate strongly with poor prognosis in gastric cancers, and low miR-1225-5p expression also supported cell proliferation, colony formation, in vitro invasion, tumor growth, and metastasis in mice [40].
The decreased miR-142-3p expression level detected in our study has been reported to be significantly associated with advanced tumor stage, lymph node metastasis and cervical invasion [41]. In addition, researchers reported that an increased miR-142-3p expression level might inhibit tumor progression and invasion in hepatocellular carcinoma tissues [42]. We showed that miR-638 was among the miRNAs whose expression levels were most strongly decreased and that it targets BRCA1.
In addition, this miRNA has been found to have an important role in triple-negative breast cancer progression by disrupting BRCA1. Therefore, miR-638 has been reported in the literature to be a potential prognostic biological marker in breast cancer and might function as a therapeutic target [43]. According to our results, this may also apply to ovarian cancer.
Conclusions miRNAs are found in all eukaryotic cells, are involved in regulating the conversion of genes into proteins, and have in recent years been considered an important contributor to cancer. In this study, after the investigation of monozygotic twins discordant for ovarian cancer and their healthy family members, 99 miRNAs were identified for the first time. The expression levels of miRNAs detected in different cancer types in the literature were parallel with the miRNA expression levels in our study. | 2020-08-28T14:23:36.670Z | 2020-08-27T00:00:00.000 | {
"year": 2020,
"sha1": "7d1ee04fbb7d469bba11f1a6864af666008108a9",
"oa_license": "CCBY",
"oa_url": "https://ovarianresearch.biomedcentral.com/track/pdf/10.1186/s13048-020-00706-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d1ee04fbb7d469bba11f1a6864af666008108a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249706236 | pes2o/s2orc | v3-fos-license | Lived experiences of children and adolescents with obsessive–compulsive disorder: interpretative phenomenological analysis
Background Childhood obsessive–compulsive disorder (OCD) is distinct from OCD in adults. It can be severely disabling and there is little qualitative research on OCD in children. The present study aims to explore the subjective experiences of diagnosis, treatment processes and meaning of recovery in children and adolescents suffering from OCD and provide a conceptual model of the illness. Methods It is a qualitative study of ten children and adolescents selected by purposive sampling. MINI KID 6.0, Children’s Yale-Brown Obsessive–Compulsive Scale and Clinical Global Impression-Severity Scale were administered at the time of recruitment of subjects into the study. Interviews were conducted using an in-depth semi-structured interview guide and audio-recorded. The transcribed interviews were analyzed using Interpretative Phenomenological Analysis (IPA). The study sought to explore participants’ sense-making of their world, their thoughts, feelings and perceptions through interpretative enquiry. The findings were confirmed by a process called investigator triangulation, member check and peer validation. Results IPA yielded five major themes—‘illness perception changes over time’, ‘disclosure on a spectrum’, ‘cascading effects of OCD’, ‘treatment infuses hope and helps’, and ‘navigating through OCD’. A summary of these themes and their subthemes is presented as a conceptual model. The essence of this model is to show the inter-relationship between themes and provide a comprehensive understanding of the phenomenon of OCD. Conclusions To the best of our knowledge, this is the first study to explore lived experiences of children and adolescents with OCD using interpretative phenomenological analysis (IPA). It was noted that perception of illness and treatment processes evolves over time, and recovery is viewed as a process. Future qualitative research can be carried out with a focus on ‘therapist-related barriers’ or ‘student–teacher dyads’ that can inform clinical practice and school policies respectively. Trial registration NIMH/DO/IEC (BEH. Sc. DIV)/2018, 11 April 2018.
Introduction
Pierre Janet described pediatric obsessive-compulsive disorder (OCD) for the first time in a 5-year-old boy [1]. OCD with onset in childhood appears to be a distinct subtype with a unique clinical, etiological and epidemiological profile [2,3]. It has a higher prevalence of comorbid ADHD, anxiety and tic disorders [4], higher familial and genetic loading [5], and higher persistence rates [6] as compared to the adult-onset OCD. When OCD has its onset in childhood, it interferes with normal development and poses a higher likelihood of anxiety disorders in adulthood [7]. It was found that children usually minimize obsessive-compulsive symptoms when compared to parental reports [8]. Treatment delay is common and is associated with poorer outcomes. Diagnosis is generally delayed by three years after the onset of symptoms [9]. Factors leading to delay in recognition and diagnosis are lack of insight, shame associated with symptoms, family accommodation and lack of awareness of the disorder both among patients and clinicians [10].
In addition, there are social issues surrounding OCD in children, with classroom implications at the forefront [11]. Storch et al. reported that more than one-fourth of the sample with OCD faced peer victimization on a regular basis [12]. This condition in children can even be associated with low self-esteem and social ostracization [11]. Bhattacharya and Singh described 'feeling different from others' and a loss of an authentic self in youth aged 18-25 years in their thematic content analysis [13]. Brooks interpreted OCD as a traumatic brain disorder that impairs a person's public and private identities, causing significant mental disability [14]. This suggests that an individual's true self is masked by the face of illness, which may gradually disappear in due course. Therapists often tend to focus on symptom reduction. Symptom reduction is only one aspect of "treatment". It is imperative to shift focus to enabling children and adolescents to live their lives with dignity by providing holistic care and helping them develop an authentic sense of self.
Subjective experiences or otherwise called lived experiences are often a subject of interest and appealing to study in the field of human psychology and psychiatry. Literature on the subjective experience of severe mental illnesses has laid emphasis on three major aspects viz. the person's responses and attitudes to his or her illness, the degree of awareness of the illness and the experience of illness as a traumatic event [15]. Interpretative Phenomenological Analysis (IPA) aims to explore lived experience of a phenomenon through the subject's personal experiences and perception of objects and events. Its hallmark is that the researcher not only gets an insider's perspective but also plays an active role in interpreting the process and experience [16].
Qualitative data on the phenomenon of OCD in children are limited as compared to the amount of quantitative research done in this field. Most of the findings are from work done in adult participants. Some aspects of pediatric OCD that have been studied are coercive and disruptive behaviours [17], parental adaptation [18], caregivers' experiences [19], and therapists' perception of ERP [20]. The study participants in these studies were either the family members [17][18][19] or therapists [20] and not the children. To the best of our knowledge, there are only two qualitative studies done in adolescents with OCD that have been published. Lenhard et al. studied adolescents' experiences of internet-delivered CBT [21]. Effectively, there is only one study published that explored the lived experiences of adolescents with OCD [22]. Table 1 summarizes the phenomenon, age group and the number of children studied, methodological approach, and key findings of these two studies.
Despite the obvious differences in illness presentation in children as compared to adults such as the unique comorbidity profile of pediatric OCD or its varying levels of insight, the treatment strategies are similar in both
Key message
• Delay in recognition of symptoms contributes to significant distress to the individual and family in cases where help is sought.
• There is a need to improve awareness so as to identify the problems early. This will improve help-seeking behaviour and aid in reducing peer victimization.
• Therapist-related barriers contribute to delays in making a diagnosis.
• Treatment is likely to positively impact other spheres of the child's life such as academics and interpersonal relationships and result in more long-term benefits.
• Bullying in the school context is a common occurrence.
Keywords: Lived experiences, Children, Adolescents, Obsessive-compulsive disorder
There is no data on the in-depth analysis of the subjective experiences of children and adolescents with obsessive-compulsive disorder with a focus on illness perception, perception of its impact on functioning and treatment processes and the meaning of recovery. IPA is the method of choice to explore lived experiences. We address this felt need to study the first-hand accounts of children and adolescents suffering from OCD through our study. This study aims to comprehensively analyze the subjective experiences of children and adolescents living with OCD and provide a conceptual model of the lived experiences of the phenomenon of childhood OCD. The findings generated will improve our understanding of their subjective perceptions and help in devising a plan for providing holistic care. The objective is to help clinicians in connecting with these children to gain better insight into their inner world during the process of treatment.
Study design
A qualitative exploratory study using in-depth, individual, semi-structured interviews was conducted. The interview guide was developed according to interpretative phenomenological analysis (IPA) guidelines. At the end of each interview, significant observations were noted and used while interpreting the data to get a better understanding of each subject's account.
The study was conducted in the naturalistic out-patient and in-patient settings of the department of child and adolescent psychiatry at the National Institute of Mental Health and NeuroSciences (NIMHANS). The study population consisted of children and adolescents diagnosed with obsessive-compulsive disorder, selected by purposive sampling. The primary objective was to explore subjects' perceptions of illness, the experience of others' perceptions, treatment, and the meaning of recovery. The secondary objective was to identify barriers and facilitators to recovery and design a recovery model. The data pertaining to recovery (barriers, facilitators and the recovery model) are the subject of a separate manuscript.
Participants
The primary objective when doing an interpretative phenomenological study is to get a deep understanding from an individual point of view, and the focus is less on generalizability. Therefore, homogeneity is preferred for studies employing IPA [23]. Hence, it was ensured that the sample was homogeneous in terms of the phenomenon being studied. Participants had to fulfil the criteria of having had the illness for at least six months and being in remission at the time of intake into the study and interview, as per the definition given by Mataix-Cols et al. [24], which is as follows: 'If a structured diagnostic interview is feasible, the person no longer meets diagnostic criteria for OCD for at least one week. If a structured diagnostic interview is not feasible, a score of ≤ 12 on the (C)Y-BOCS plus Clinical Global Impression-Severity (CGI-S) rating of 1 ("normal, not at all ill") or 2 ("borderline mentally ill"), lasting for at least one week.' Table 2 enumerates the inclusion and exclusion criteria devised for the purpose of the study.
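As an illustration of how this remission definition translates into a screening rule, the sketch below encodes it as a function. The function name and argument layout are hypothetical, and the clinical judgment behind each input is, of course, not captured by code.

```python
def in_remission(cybocs_total, cgi_severity, duration_weeks,
                 meets_dsm_criteria=None):
    """Remission per Mataix-Cols et al.: either a structured interview shows
    diagnostic criteria are no longer met, or (C)Y-BOCS <= 12 together with a
    CGI-S rating of 1 or 2, each sustained for at least one week."""
    if meets_dsm_criteria is not None:          # structured interview feasible
        return (not meets_dsm_criteria) and duration_weeks >= 1
    return (cybocs_total <= 12 and cgi_severity in (1, 2)
            and duration_weeks >= 1)

print(in_remission(cybocs_total=7, cgi_severity=2, duration_weeks=12))  # True
```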
Sample size
Participants were recruited until saturation of the themes occurred. Fig. 1 depicts the process of recruitment.
Materials
MINI KID (Mini International Neuropsychiatric Interview for Children and Adolescents) 6.0 [25], Children's Yale-Brown Obsessive-Compulsive Scale [26] and Clinical Global Impression-Severity Scale [27] were administered at the time of recruitment of subjects into the study. Interviews were conducted using an in-depth semi-structured interview guide and audio-recorded.
Procedure
All the interviews were undertaken and coded by the first author [LS], a qualified female psychiatrist with substantial experience in collecting and analysing qualitative data, particularly within mental health populations. Children who met the DSM-5 criteria for a clinical diagnosis of OCD were referred to the first author, who administered the structured diagnostic interview MINI KID (Mini International Neuropsychiatric Interview for Children and Adolescents) 6.0 [25] to confirm the diagnosis, and the Children's Yale-Brown Obsessive-Compulsive Scale [26] and Clinical Global Impression-Severity Scale [27] to assess severity at the time of recruitment into the study. The qualitative data were collected using the in-depth semi-structured interview guide mentioned above. The interview guide was literature-guided and validated by a senior researcher of the team with extensive experience in conducting qualitative research. A technique called funnelling was used while constructing the guide. It provides a chance for the participants to express their general views before they are directed to the more specific issues being discussed [28]. Developing the interview schedule was a reflective process, keeping in mind that the questions inquire into other people's lives and how they may affect them [29]. Prompts were also used to enhance the richness of responses, especially in instances when participants had difficulty understanding the questions or talking at length [28].
The questions were open-ended with no right or wrong answers, and instead provided an opportunity for a descriptive process. The participants were allowed to take the lead, while it was ensured that all the specific areas were covered. In addition, they were also provided space to voice any other ideas that they felt were relevant. The interviews were audio-recorded to avoid loss of data and recall bias, and to obtain the data exactly as narrated by the participants. The use of audio recordings helped in understanding the responses better, as they contained the researcher's responses and could be paused when needed [30].
Analytic approach
The research question of this study required participants to reflect and talk about their lived experiences and therefore called for a qualitative research methodology and analysis. The study sought to explore participants' sense-making of their world, of their thoughts, feelings and perceptions through interpretative enquiry. Hence, Interpretative Phenomenological Analysis (IPA) was the chosen method of analysis, as it deals with exploring and understanding the lived experience of a specified phenomenon [31].
Analysis
IPA is rooted in the philosophy of phenomenology as developed by Husserl and refined by Heidegger. Hermeneutics and idiography are two other elements that form a strong theoretical foundation of IPA. Hermeneutics deals with the interpretation of a subject's personal world and idiography refers to an in-depth analysis exploring the individual perspectives of participants in their unique contexts [23]. In fact, IPA applies double hermeneutics, or a dual interpretation process, where the researcher seeks to make meaning out of the meaning-making of others [16]. It views research as a dynamic process involving both researcher and participants.
[Fig. 1 recruitment flowchart: did not meet inclusion criteria (not in remission), n = 3; did not give consent, n = 5; did not give assent, n = 2; consent and assent obtained, sample n = 12; withdrew from the study, n = 2, due to difficulty in expressing themselves fluently in English during the detailed interview.]
Validation of analysis and model
The findings were confirmed in discussion with the guide, and the sub-themes and overarching main themes were corroborated by an independent researcher, a process called investigator triangulation [32]. Member check was done to enhance the rigour of the study. The themes and sub-themes upon which there was agreement were included in the final report, and those where consensus could not be reached were discarded. All the participants agreed on the model presented. The final model was then presented to colleagues and members of the team, thus completing peer validation of the model.
Ethical considerations
Approval to conduct the study was obtained from the Institutional Ethics Committee of the National Institute of Mental Health and Neurosciences (NIMHANS). The children and their parents were provided with written and verbal explanations of the purpose and procedures of the study. Written informed assent and consent were taken from all the participants and their parents respectively. Anonymity and confidentiality were maintained.
Clinicodemographic data
All participants were aged from 10 to 17 years. The sample consisted of four girls and six boys. All were attending regular school except two (participants 2 and 10) who had dropped out of school due to impaired academic functioning. Participant 2 was training in pre-vocational skills and participant 10 was pursuing studies through open schooling. Age at onset ranged from 9.5 to 13 years (mean 11 years, SD 1.2). Age at diagnosis ranged from 10 to 14.5 years (mean 12.4 years, SD 1.9). The total duration of illness ranged from 10 months to 4 years 6 months (mean 30 months, SD 16). Duration of remission ranged from 12 to 28 weeks (mean 19.7 weeks, SD 6.5). The CY-BOCS scores at the time of recruitment were between 0 and 7 (mean 2.7, SD 2.2), indicating subclinical illness in all children [26]. On the CGI-Severity rating, seven children scored one, indicating 'normal or not at all ill', and three children scored two, suggesting 'borderline mentally ill' [27]. The IPA analysis, done as described in the methodology section, yielded five major themes. Each theme and its sub-themes are presented below in Table 3. Each theme and sub-theme was validated with at least three significant statements as per the recommended standards for IPA, and all the statements were interpreted. However, only one example per sub-theme is presented in Table 4.
A summary of these themes is presented as a 'Conceptual model of lived experiences of the phenomenon of OCD'. The essence of this model is to show the inter-relationship between themes and provide a comprehensive understanding of the phenomenon of OCD as perceived by children (Fig. 2). In addition, there is a pictorial representation of the phenomenon of OCD using analogies of a child's psyche as a flower and OCD as a bug. The model has six parts, as enumerated and described below.
A. Phenomenon of OCD: As mentioned above, this is explained by using analogies. A child's mind is as tender as the petals of a flower [33]. When it is infested by the OCD bug, it is scarred. It also emanates toxins contaminating the surroundings. It is important to recognize the problem and provide a suitable environment for the flower to bloom. While scarring symbolizes the impact of OCD, toxins contaminating the surroundings signify the negative impact of OCD on the family, and a suitable environment here is the right treatment.
B. Evolution of illness perception: Illness perception changes over time from initial confusion, fear and a feeling of helplessness to grief and acceptance. As clarity about the condition sets in, a battle with OCD ensues. Later, as one experiences initial successes, a sense of accomplishment and feeling of empowerment emerge. This strengthens determination and gives hope to the individual in their fight with OCD.
C. Spectrum of disclosure: Disclosure lies on a spectrum. On one end, there are children who do not feel the need to disclose due to internal barriers such as no felt need for help or lack of insight or awareness. There are others who recognize the need to reveal but do not find the space safe enough to disclose. Some others express themselves, following which the family members go through denial and/or ambivalence before coming to acceptance. And at the other end are those who seek help actively but face therapist-related barriers to recognition of the problem.
D. Cascading effects of OCD: OCD leads to a chain of events that eventually disrupt one's role functioning and/or cause disruptions in self. It also puts the individual at higher risk of being a victim of bullying, as people tend to misinterpret behaviours related to OCD. These sequelae can be interrelated.
E. Treatment helps as the 'hub': The central theme about treatment is that it helps, and this forms the hub of the perception of treatment processes. Although there is initial reluctance to seek treatment, children perceive therapy as not only helpful but helpful beyond illness. However, this process is kept as a personal affair.
F. Journey through OCD: It is not a planned journey but a forced one that takes a person by storm. Going through it, children experience a lot of internal battles and chaos within and outside. They face 'how I wish I didn't have it' and 'if only I were' moments, in addition to being wise in retrospect, like 'could I have averted this by doing something differently'. Eventually, the storm settles as the individual gets control over the situation and things improve.
Discussion
Most of the qualitative research on OCD has been done in adult participants. It is focused on exploring subjective experiences [13,14,[34][35][36][37][38][39][40][41][42], reassurance seeking [43,44], enablers and barriers [45], stigma and labeling [46], family members' perceptions [47], impact on partner relationships [48] and user perspectives on interventions [49][50][51][52][53][54]. The sample sizes in these studies varied depending on the methodology, which ranged from case studies to ethnographic approaches. While Lemelson noted that culture strongly influenced symptomatic expression [36], Olson, Vera & Perez illustrated cultural and ethnic connectedness among their adult participants in their qualitative research [39].
Comparing study findings with relevant literature
The major themes derived in this study are compared with relevant studies on lived experiences in this field. Table 5 summarizes the methods employed and the key findings of these studies.
Theme 1-Illness perception changes over time
The sub-theme of 'confusion, fear and helplessness' under this overarching theme corresponds to the sub-themes 'lack of understanding of their behaviour' and 'I thought I was going crazy' of the major theme of 'responses to signs of OCD' reported in a study by Keyes et al. [22]. 'Recognising something's wrong' and 'coming to terms with OCD' are two sub-ordinate themes under the major theme of 'realisation of OCD' elucidated by Kohler, Coetzee and Lochner [35]. These correspond to the sub-themes of 'clarity sinks in and battle ensues' and 'grief and acceptance' of the current study. It is important to note that they were not described in relation to time in the previous study by Kohler, Coetzee and Lochner [35].
Although Murphy and Perera-Delcourt describe 'having obsessive-compulsive disorder' as a super-ordinate theme, the concept of evolving over time is not reflected in their description of the minor themes of 'wanting to be normal and fit in', 'failing at life' and 'loving and hating OCD' [38]. Pedley et al. enumerated dimensions of illness perception, viz. identity, cause, consequences, timeline, emotional representation, personal control/treatment control and coherence, using the Common-Sense Model (CSM); however, these dimensions do not follow a timeline [41]. Olson, Vera and Perez noted that participants tried to make sense of their symptoms both clinically and personally and that symptoms change over time [39].
Theme 2-Disclosure on a spectrum
The sub-ordinate theme under this major theme, 'no felt need to disclose due to internal barriers', describes lack of awareness and lack of insight as coming in the way of recognizing and talking about the problem. 'Not wanting to tell people' and 'not wanting to tell the doctor' due to 'stigma', as noted by Robinson, Rose and Salkovskis, can be compared to this minor theme [45]. Recognition of symptoms was hampered by a failure to interpret experiences as 'symptoms', as noted by Pedley et al. However, in that study, the individuals interpreted symptoms as a personality quirk, or as evidence that they had become deviant [41]. In an autoethnographic account, Brooks alludes to secret rituals done in an attempt to maintain a social image [14].
Theme 3-Cascading effects of OCD
This major theme has a sub-theme called 'victim of bullying and social misperceptions'. Bhattacharya and Singh give a detailed account of their participants' difficulties in sharing experiences of OCD due to social misperceptions as 'connection vs. disconnection', which also overlaps with the sub-theme of 'felt need but no space to disclose' under theme 2 of the current study [13]. 'Bullying and friendlessness' under the major theme of 'traumatic and stressful life events' described by Keyes et al. reflects traumatic life events occurring in the months immediately preceding the onset of OCD [22]. However, the sub-theme 'victim of bullying and social misperceptions' under 'cascading effects of OCD' refers to bullying occurring in the aftermath of OCD. Kohler, Coetzee and Lochner described an overarching theme, 'disruptions to daily life', that encompassed the sub-ordinate themes of 'disruptions in sleep and rest', 'disruptions to leisure activities and hobbies' and 'disruptions to productivity' [35]. The present study had 'disruptions in sense of self' as one of the sub-themes. While Bhattacharya and Singh described 'feeling different from others' and the 'loss of an authentic self' [13], Brooks elaborated on the disruptions to social life and the impact of OCD on public and private identities leading to significant suffering [14].
Theme 4-Treatment infuses hope and helps
This major theme corresponds to the sub-ordinate theme of 'wanting therapy' under 'impact of therapy' as elucidated by Murphy and Perera-Delcourt. Moreover, the sub-theme 'useful beyond illness' corresponds to their sub-theme of 'a better self' [38]. It is interesting to note that another sub-theme, 'initial refusal to seek treatment', is similar to the super-ordinate theme 'ambivalent relationship to help' identified by Keyes et al. [22].
Theme 5-Navigating through OCD
One of the sub-themes of this major theme is 'internal battles and chaos', which occurs in the initial phase of the illness. Brooks illustrated a battle against OCD in her personal account of steering herself among and between "appropriate" performance and secret rituals [14]. Keyes et al. also identified 'the battle of living with OCD', illustrating the struggle of adolescents to cope with internal experiences related to their difficulties [22].
Clinical implications of themes derived
Despite the developmental differences, there is an overlap in the lived experiences of children and adults having OCD. It is important to be aware of how the perception evolves over time, be it of the illness or treatment processes.
• After coming to acceptance, children develop hope while battling OCD. Therapists must be aware of this and not be too aggressive but be supportive during the initial phase.
• Children found therapy to be helpful beyond illness in terms of inculcating self-discipline. It is likely to positively impact other spheres of the child's life such as academics and interpersonal relationships and result in more long-term benefits.
Strengths and limitations
To the best of our knowledge, this is the first qualitative study in children and adolescents with OCD done using interpretative phenomenological analysis, which is the method of choice for exploring lived experiences. The youngest subject in this study was ten years old; to our knowledge, this study reports the youngest child to narrate lived experiences of OCD. The average duration of interviews was 1 h and 49 min, which provided ample data for a comprehensive and holistic understanding of each subject. As it is a qualitative study, the data were intended to provide in-depth insights into subjective experiences, rather than to be generalizable. The research team chose to study only English-speaking participants to ensure uniformity and sociodemographic homogeneity (lifestyle, social scrutiny and support) of the sample studied. We ensured clinical homogeneity by including subjects who had had the illness for at least six months and were in remission at the time of intake into the study. However, the researchers wanted to explore the experiences of children who had suffered varying intensity of the disease, hence the sample was heterogeneous in terms of the severity of illness endured. The transcribed data were meticulously analysed to derive the results. While all subjects were interviewed at length and a lot of data was collected, only the overlapping themes have been elucidated in detail. The subtleties of individual experiences were deliberately set aside in the process of drawing conclusions from a large amount of data. It is noteworthy that despite the developmental differences due to the broad age group of the sample studied (10-17 years), the themes of lived experiences of OCD that emerged were the same in all the subjects. After much deliberation, it was decided to include subjects in remission from illness so as to get a sense of the experience of the 'whole journey' through the illness and the treatment processes during different phases. While this could be a strength of the study, it is not the same as getting narratives from those who are acutely ill. There could have been a recall bias, but it is unlikely that the most prominent and difficult experiences would not have been recounted by the subjects. There remains a possibility that subtle features were missed while narrating their experiences.
Recommendations for future research
This study suggests that the major themes of illness perception and treatment processes evolve over time. Longitudinal follow-up studies would be required to establish this. Qualitative research can be carried out with a focus on the specific issue related to disclosure, i.e. 'therapist-related barriers', as there is scope to understand the processes better and improve them accordingly to facilitate disclosure and thereby identify and address the phenomenon of OCD early. As there is a lack of research on the impact of OCD on the developing identities of children, it would be worthwhile to study this aspect too by conducting qualitative research. Children talked about their difficulties in the school context and about often being misunderstood by friends and/or teachers. Moreover, children spend a substantial amount of time in school. Therefore, it would be worthwhile to study student-teacher dyads, as this would not only provide an opportunity to understand teachers' perspectives but also inform policies related to school. | 2022-06-17T00:07:46.074Z | 2022-06-16T00:00:00.000 | {
"year": 2022,
"sha1": "bb79facacabcb007387de090114db54eab48df9c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8bc3ecb323f6d9e5ad6d0838caa9767308766763",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17948949 | pes2o/s2orc | v3-fos-license | Oscillations and patterns in interacting populations of two species
Interacting populations often create complicated spatiotemporal behavior, and understanding it is a basic problem in the dynamics of spatial systems. We study the two-species case by simulations of a host--parasitoid model. In the case of co-existence, there are spatial patterns leading to noise-sustained oscillations. We introduce a new measure for the patterns, and explain the oscillations as a consequence of a timescale separation and noise. They are linked together with the patterns by letting the spreading rates depend on instantaneous population densities. Applications are discussed.
A fundamental aim in studying population dynamics is to understand species interactions. A paradigmatic, still interesting, case is that of two species [1], where one feeds or lives from the other. These may be predators and their prey or parasitoids living on the expense of hosts. If the interaction is strong and if the parasitoid or predator is specialized, its growth will have a delayed negative feedback as its resource is diminished. In such cases, oscillations are an inherent feature [2]. Many surface reactions also have similar dynamics [3].
The classical ways of looking at such systems assume fully stirred populations. The encounters between individuals are assumed to be proportional to the product of their densities, analogously to the mass action principle in chemistry [1]. This assumption is also at the heart of the mean-field -like Lotka-Volterra equations. In general, however, spreading and interaction are restricted in space. In this case, correlated structures arise and the assumption about complete mixing no longer holds [4].
If the parasite abundance is small, any feedback effect is weak. Population sizes then show no oscillations, and the predating species is locally concentrated in a cluster-like arrangement. This has been theoretically studied in Ref. [5]. With strong feedback spatiotemporal patterns emerge in a multitude of forms. These include disordered flame-like patterns [6,7,8], and ripple-like spatiotemporal ones [9]. There is also a large body of work on similar patterning in individual-based models (e.g. [10]), in statistical physics [11,12], and in calcium concentration oscillations in living cells [13]. A paradigmatic pattern-forming system is the complex Ginzburg-Landau equation (CGLE) [14], which exhibits spiral-like geometries.
Voles in Northern Britain [15], mussels in the Wadden Sea [16], and lemmings in Northern Europe [17] are good examples of empirical observations of such patterning. These involve either predation or being predated. Spatial structures weaken the interactions since species tend to be aggregated within themselves. They also provide the prey a refuge, since around the prey there are fewer predators. Therefore, spatial inhomogeneity can stabilize the dynamics and promote coexistence [4,7,18].
Here, we analyze the full spatiotemporal dynamics of two interacting species using instantaneous configurations and time series of the population densities. We introduce a measure for the level of patterning in such systems. When the patterns form, one observes persistent oscillations in the population densities. We show that the underlying dynamics follows a particular logic: it originates in the response of the rates to changes in instantaneous densities, and the emerging system proves different from the limit cycle in Lotka-Volterra systems, or recent developments where three-species models have been mapped [19,20] to CGLE. The present mechanism works by the interplay of oscillatory transients to a stable fixed point and stochasticity. The response of the interaction rates is due to spatial correlations. The mechanism is novel, and does not in fact need any non-linearities to work, which will become evident below based on simulations and effective equations describing them.
We study a host-parasitoid model in discrete time and space. It is inspired by [21], but has a wider interaction range as in the incidence function models of metapopulation dynamics [22]. The model describes annual host-parasitoid dynamics on a two-dimensional square lattice Λ [23]. At each time step, a site x can be either empty (in state e) or populated by a host without (state h) or with parasitoids (state p). Transitions between the states are cyclic, e → h → p → e, neglecting possible spontaneous deaths of non-parasitized hosts, assumed to be rare, for simplicity. Although this means that hosts live forever if there are no parasitoids, this is not a serious restriction; in coexistence it boils down to assuming faster extinction for the parasitoids than for the hosts. The model can also be described as an SIR model with rebirth.
At each site, the transition probabilities depend on the surrounding populations through the connectivity I_α(x) = Σ_{y∈Λ} k_α(|x − y|) χ_α(y) of site x with respect to species α (h or p). Here χ_α(x) = 1 if the state of x is α, and 0 otherwise. The kernel k_α has an exponential decay with the scale w_α and is normalized by Σ_{x∈Λ} k_α(|x|) = 1. Exponentially decaying dispersal kernels are chosen since these are biologically motivated [22] and lead to generalizability. In a timestep, the transition e → h takes place with probability min(1, λ_h I_h) and h → p with probability min(1, λ_p I_p) (in the parameter range of interest, λ_α I_α ≤ 1 practically always). Parasitized hosts may die (p → e) with probability δ irrespective of the surroundings. Note that they do not reproduce. Periodic boundary conditions and parallel updates are used. There are two absorbing states, an empty lattice (e) and one full of hosts.
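A minimal simulation sketch of this model is given below. All parameter values, the kernel cutoff, and the exclusion of the focal site from the kernel are illustrative assumptions, not values taken from the paper; the brute-force convolution could be replaced by an FFT for speed.

```python
import numpy as np

rng = np.random.default_rng(0)
E, H, P = 0, 1, 2                      # site states: empty, host, parasitized host
L = 128
w_h, w_p = 2.0, 2.0                    # dispersal scales of the kernels (assumed)
lam_h, lam_p, delta = 0.4, 0.6, 0.3    # colonization, parasitism, death (assumed)

def kernel(w, cutoff=8):
    """Exponentially decaying dispersal kernel, truncated at `cutoff` and
    normalized to sum to one (the paper normalizes over the whole lattice)."""
    d = np.arange(-cutoff, cutoff + 1)
    r = np.hypot(*np.meshgrid(d, d))
    k = np.exp(-r / w)
    k[cutoff, cutoff] = 0.0            # exclude the focal site itself
    return k / k.sum()

def connectivity(mask, k):
    """I_alpha(x): kernel-weighted density of sites in state alpha around each
    site, with periodic boundaries (brute-force convolution via np.roll)."""
    c = (k.shape[0] - 1) // 2
    out = np.zeros(mask.shape, dtype=float)
    for i in range(-c, c + 1):
        for j in range(-c, c + 1):
            if k[i + c, j + c] > 0.0:
                out += k[i + c, j + c] * np.roll(np.roll(mask, i, axis=0), j, axis=1)
    return out

k_h, k_p = kernel(w_h), kernel(w_p)
grid = rng.choice([E, H, P], size=(L, L), p=[0.4, 0.4, 0.2])
densities = []
for t in range(200):                   # parallel update of all sites
    Ih = connectivity(grid == H, k_h)
    Ip = connectivity(grid == P, k_p)
    u = rng.random((L, L))             # one uniform per site suffices, since
    new = grid.copy()                  # each site has exactly one transition
    new[(grid == E) & (u < np.minimum(1.0, lam_h * Ih))] = H   # e -> h
    new[(grid == H) & (u < np.minimum(1.0, lam_p * Ip))] = P   # h -> p
    new[(grid == P) & (u < delta)] = E                          # p -> e
    grid = new
    densities.append(((grid == H).mean(), (grid == P).mean()))
```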
At each site transition probabilities depend on the surrounding populations through the connectivity of site x with respect to species α (h or p). Here χ α (x) = 1 if the state of x is α, and 0 else. The kernel k α has an exponential decay with the scale w α and is normalized by x∈Λ k α (|x|) = 1. Dispersal lengths are chosen since these are biologically motivated [22] and lead to generalizability. In a timestep, the transition e → h takes place with probability min(1, λ h I h ) and h → p with probability min(1, λ p I p ) (in the parameter range of interest, λ α I α ≤ 1 practically always). Parasitized hosts may die (p → e) with probability δ irrespective of the surroundings. Note that they do not reproduce. Periodic boundary conditions and parallel updates are used. There are two absorbing states, an empty lattice (e) and one full of hosts. Fig. 1 (a) shows an example with coexistence. One finds moving regions predominantly in one of the three states. To quantify these patterns, we define the dominance regions (Fig. 1b) as follows. By smoothing one obtains continuous densities ρ α (x, t) = For each site, ρ h (x, t) and ρ p (x, t) are compared to the space-time averagesh andp. The densities are positive, lying in the first quadrant of R 2 , divided into three regions shown in Fig. 1c. The site x at time t is then defined to belong to a domain according to the region (e, h, or p) containing (ρ h (x, t), ρ p (x, t)). In essence, the regions coarse-grain on a scale σ > w h,p . In this regime, they are insensitive to changes in σ.
The domains are separated by walls, joining at triple points, vortices [6]. A vortex has a sign +1 (−1), if one encounters the domains in the order ehp following a small cycle around the vortex counter-clockwise (clockwise). Pairs of vortices of opposite signs are created and annihilated together. The domains rotate around the vortices, which are relatively stable. In other words, the species invade the appropriate neighboring domains so that the walls rotate around the vortices. Similar structures have been identified earlier in related systems (e.g. [6,11]). In three dimensions, the vortices generalize to strings [6].
First, consider static measures such as the domain wall length ℓ from source to sink vortex. It has an exponential distribution, whose mean is drawn for different parameters in Fig. 1d. Its ratio to its counterpart in uncorrelated random arrangements with the same densities is shown. Patterns and oscillations lead to walls with ℓ ≈ 100 lattice units (l.u.). This is more than ten times larger than the smoothing width σ and also several times that in the random arrangements (ℓ_random = 35 l.u.). The coarse-graining gives a measure of patterns distinguishing between uncorrelated and patterned states.
Next, turn to the spatially averaged densities h_t and p_t. Fig. 2a shows them as a function of time in three cases: (i) a non-patterned state with a small parasitoid population, (ii) a state with patterns, and (iii) a small subsystem (L_sub = 64) out of a large system (L = 512) with patterns. In the patterned systems, there is a high-frequency oscillation matching the angular velocity of single vortices and a slow variation of the amplitude. Below, we explain both as a consequence of a timescale separation, connect them to the patterns, and explain why the oscillations do not conform to the usual limit cycles.
[Figure caption fragment: regions (2) and (3); black (red) lines denote the boundaries for the spatial system (MF approximation); the boundary between (1) and (2) coincides for the two cases.]
To build a description of the dynamics of the model in a novel fashion using aggregated variables, consider Poincaré maps (Fig. 3). For large enough systems, h_{t+1} and p_{t+1} are unique functions of h_t and p_t up to noise. Also, by attractor reconstruction [24] we find that the full system with 2L² degrees of freedom coarse-grains into a two-dimensional one. The points in the maps lie close to a two-dimensional surface, and for large enough L (with many patterns in the system) even on the tangential plane through the average (h̄, p̄). Based on these numerical observations, the dynamics is linear in h_t and p_t:

h_{t+1} − h̄ = a_hh (h_t − h̄) + a_hp (p_t − p̄),
p_{t+1} − p̄ = a_ph (h_t − h̄) + a_pp (p_t − p̄). (2)

In other words, the observations imply that even though the dynamics is expected to be non-linear based on the MF approximation, in a large system with many patterns the possible non-linearities self-average out. It is then useful to consider the expansion around the average, a linear iterative map. In oscillatory cases, its eigenvalues form a conjugated pair ρe^{±iφ}. These are associated with two timescales, the period and the decay rate of the amplitude. Their ratio ν, the decay time measured in units of the oscillation period (Eq. (3)), tells whether the dynamics is oscillatory or "just noisy"; ν ≫ 1 indicates patterned systems and oscillatory dynamics. Fig. 4 shows the typical behaviors of the two kinds of dynamics observed depending on the presence or absence of patterns, and whether one adds noise (as additional Gaussian uncorrelated noise terms on the RHS of Eq. (2)) to the coarse-grained dynamical system to mimic the finite-L simulations of the full spatial system. Note that in all cases the fixed point is attractive. So far we have given separately a temporal and a spatial diagnosis of the pattern dynamics. Next we link these together. For λ_α I_α(x, t) small, they equal the spreading probabilities, and the dynamics can be written as

h_{t+1} = h_t + λ_h Σ_{x∈Λ} k_h(x) C_eh(x, t) − λ_p Σ_{x∈Λ} k_p(x) C_hp(x, t),
p_{t+1} = (1−δ) p_t + λ_p Σ_{x∈Λ} k_p(x) C_hp(x, t), (4)

where the influence of the connectivities is expressed by the correlation functions C_αβ(x, t) between the occupancies of states α and β at separation x. Writing the correlation sums as λ_h Σ_{x∈Λ} k_h(x) C_eh(x, t) = κ(h_t, p_t) e_t h_t and λ_p Σ_{x∈Λ} k_p(x) C_hp(x, t) = µ(h_t, p_t) h_t p_t, with e_t = 1 − h_t − p_t the density of empty sites, this is an approximation of the usual MF form with the interaction parameters κ(h, p) and µ(h, p) generalized to be arbitrary functions of the instantaneous densities. They can be non-linear and they do not have to conform to the standard MF nor to any ad-hoc approximations [25]. By an expansion of Eqs. (4) around the fixed point (h̄, p̄) one arrives at Eq. (2) with
matrix elements containing κ, µ, and their derivatives. A corresponding non-spatial approximation is

a_{σσ′} = ∂F_σ/∂σ′, σ, σ′ ∈ {h, p}, (5)

with F_h = h + κ(h, p) e h − µ(h, p) h p and F_p = (1 − δ) p + µ(h, p) h p,
where κ, µ, and their derivatives are evaluated at the fixed point. The derivatives are necessary for consistency. The matrix elements a_{σ,σ′} and the densities h̄ and p̄ are measured from the simulations. Since κ and µ are parameters, omitting the derivatives would make Eqs. (5) overdetermined and thus unsatisfiable. By keeping them, there are four equations and four unknowns to be solved.
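In practice, the matrix elements can be estimated from a measured time series by a least-squares fit, as in the sketch below. The normalization of the timescale ratio ν (decay time per oscillation period) is the natural reading of the text, since Eq. (3) itself is not reproduced here; fit_linear_map and timescale_ratio are illustrative names.

```python
import numpy as np

def fit_linear_map(h, p):
    """Least-squares fit of d_{t+1} = A d_t with d_t = (h_t - hbar, p_t - pbar)."""
    x = np.column_stack([h - np.mean(h), p - np.mean(p)])
    M, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)  # rows: x_{t+1} ~ x_t M
    return M.T                                          # so that d_{t+1} = A d_t

def timescale_ratio(A):
    """nu = decay time / period from the eigenvalue pair rho * exp(+/- i phi)."""
    lam = np.linalg.eigvals(A)
    rho, phi = np.abs(lam[0]), np.abs(np.angle(lam[0]))
    if phi == 0.0:
        return 0.0                                      # no oscillation at all
    return phi / (2.0 * np.pi * abs(np.log(rho)))

# Example with the densities recorded in the simulation sketch above:
# h, p = np.array(densities).T
# print(timescale_ratio(fit_linear_map(h, p)))
```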
The effect of the nonzero derivatives is best illustrated by a phase diagram, Fig. 2. In the spatially extended system, there are three qualitative phases: the extinction of the parasitoids, non-oscillatory coexistence and oscillatory coexistence. Except for the extinction, the boundaries are not sharp. Instead, there is a transition zone, defined via the timescale ratio (Eq. (3)) as the region where 1 < ν < 4. In MF, there is also a fourth phase, absent here: oscillatory coexistence in a limit cycle. The phase structure resembles that in earlier work on a related model with only nearest-neighbour spreading [26,27,28]. There, as well, oscillatory and non-oscillatory phases are recovered, the former identified as a limit cycle using the pair approximation. Based on our findings, it could also be noise-sustained.
Let us now compare the explained mechanism with recent approaches. A possibility is to make the angular velocity of the oscillation either amplitude- or phase-dependent [29,30]. However, Eq. (2) does not allow for either dependence. Another one is to map the population model [19,20] to CGLE [14]. An unstable fixed point is necessary for the mapping, yielding a limit cycle.
To conclude, we have studied the spatiotemporal dynamics of a model of two kinds of interacting particles, in biological terms hosts and parasitoids. A large parasitoid population creates patterns and noisy oscillations of population sizes. We have introduced a new measure for the patterns, and explained the noisy oscillation as a consequence of a timescale separation. In other words, even with a limit cycle at the well-mixed limit, the spatial case has stable dynamics with long-lived oscillatory transients. This is due to spatial correlations making the spreading rates functions of the instantaneous population densities. Since the type of oscillation determines its properties (e.g. the fluctuating amplitude), which in turn affect vulnerability to extinction, the distinction is important. The connection offers a shortcut to studying the effect of, e.g., the environment: it could be related directly to the matrix elements in (2), in contrast to a full form of the interactions, lightening the analysis. We expect that the observation of patterns and oscillations arising from local dynamics and self-averaging (in finite systems, since the noise amplitude depends on system size) will find other applications beyond the biology-inspired model. They are not restricted to only two species or two-dimensional systems, since the analysis can be carried out also for more complicated cases. There is no restriction to cyclic dynamics either, nor to discrete-time systems, since continuous-time ones can be handled by considering snapshots taken at regular intervals. Further examples of applications include chemical reactions on surfaces [3], and metapopulations on disordered and scale-free landscapes. | 2008-10-27T15:26:40.000Z | 2008-10-27T00:00:00.000 | {
"year": 2008,
"sha1": "a58f81c8f3c9b9eafc35ffe30bdb791519816e5d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0810.4839",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "94cd686beeb26c4b2950c1c2a6b6d6318c6a4c69",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Biology"
]
} |
The FCS-like zinc finger scaffold of the kinase SnRK1 is formed by the coordinated actions of the FLZ domain and intrinsically disordered regions
The SNF1-related protein kinase 1 (SnRK1) is a heterotrimeric eukaryotic kinase that interacts with diverse proteins and regulates their activity in response to starvation and stress signals. Recently, the FCS-like zinc finger (FLZ) proteins were identified as a potential scaffold for SnRK1 in plants. However, the evolutionary and mechanistic aspect of this complex formation is currently unknown. Here, in silico analyses predicted that FLZ proteins possess conserved intrinsically disordered regions (IDRs) with a propensity for protein binding in the N and C termini across the plant lineage. We observed that the Arabidopsis FLZ proteins promiscuously interact with SnRK1 subunits, which formed different isoenzyme complexes. The FLZ domain was essential for mediating the interaction with SnRK1α subunits, whereas the IDRs in the N termini facilitated interactions with the β and βγ subunits of SnRK1. Furthermore, the IDRs in the N termini were important for mediating dimerization of different FLZ proteins. Of note, the interaction of FLZ with SnRK1 was confined to cytoplasmic foci, which colocalized with the endoplasmic reticulum. An evolutionary analysis revealed that in general, the IDR-rich regions are under more relaxed selection than the FLZ domain. In summary, the findings in our study reveal the structural details, origin, and evolution of a land plant–specific scaffold of SnRK1 formed by the coordinated actions of IDRs and structured regions in the FLZ proteins. We propose that the FLZ protein complex might be involved in providing flexibility, thus enhancing the binding repertoire of the SnRK1 hub in land plants.
In agreement with this, FLZ proteins were found to interact with regulators of hormone signaling, development, and abiotic and biotic stresses, supporting their role as adaptors that mediate SnRK1 signaling (8,10). Functional analysis of FLZ genes identified their roles in the regulation of many SnRK1-regulated processes (11-13). Consistent with these studies, we recently identified two members of the Arabidopsis FLZ gene family that work as negative regulators of SnRK1 signaling (14). Interestingly, many FLZ proteins also interact with RAPTOR, which suggests that they may work as scaffolds by bringing the SnRK1 and TOR complexes into close proximity, as required for their energy-dependent antagonistic interaction (8,10).
Scaffold proteins usually possess high conformational flexibility to accommodate interactions with diverse proteins (15). The plasticity and adaptability of scaffold proteins are usually achieved by the presence of intrinsically disordered regions (IDRs) in these proteins (16). FLZ proteins show low sequence conservation outside the FLZ domain, indicating that these regions might be enriched with disordered regions (6,7). IDRs lack fixed secondary or tertiary structures. They are particularly enriched in eukaryotic proteins and are thought to be related to organismal complexity (17). The pliable nature of IDRs increases the binding repertoire of proteins. Consistent with this, hub proteins are generally found to be enriched with IDRs (17,18). The versatile interaction properties of FLZ proteins, coupled with their low sequence conservation in the N and C termini, indicate that IDRs might contribute to the protein interactions of FLZ proteins.
In this study, we employed a large dataset of FLZ proteins from 33 sequenced plant genomes to study the origin and evolution of IDRs in the FLZ protein family. Our analyses predict that the FLZ family proteins possess evolutionarily conserved protein-binding IDRs in the N and C termini. Specific enrichment of post-translational modification (PTM) sites responsible for protein-protein interaction in the IDR-rich regions indicates their role in enhancing the interaction repertoire. The protein-protein interaction assays identified that the IDRs in the N termini and the structured FLZ domain mediate the interactions with specific SnRK1 subunits in Arabidopsis. Scaffold proteins provide an interaction surface for other proteins by virtue of their large size or through dimerization (19). We also found that FLZ proteins form homo- and heterodimers through the IDRs in the N termini. Interestingly, in planta interaction assays identified that the interaction of FLZ proteins with SnRK1 subunits was specific to cytoplasmic foci that colocalize with the endoplasmic reticulum (ER). The evolutionary analysis identified that the sequence divergence in the IDR-rich N and C termini is more dynamic than in the structured FLZ domain. Thus, this study uncovers the origin and evolution of a plant-specific complex formed by the division of labor between the structured FLZ domain and the IDRs, and this complex might be important in mediating the function of the conserved eukaryotic energy gauge, SnRK1.
N and C termini of FLZ proteins are enriched with intrinsically disordered regions
The FLZ proteins from 33 sequenced plant genomes, which belong to key taxonomical positions, were identified through BLAST- and HMM-based searches (Fig. S1). A nonredundant dataset of these FLZ proteins was created and used for the subsequent analysis (Table S1). To find out the disorder propensity of FLZ proteins, we employed PONDR-FIT, which is a metapredictor of the PONDR-VLXT, PONDR-VSL2, PONDR-VL3, FoldIndex, IUPred, and TopIDP predictors and displays enhanced accuracy over the individual predictors (20). FLZ proteins show limited domain acquisition throughout the plant lineage, and they usually possess a solitary FLZ domain (Fig. 1A and Table S2), which is predicted to form an α-β-α topology (6,7). Using PONDR-FIT, we first predicted the disorder propensity of FLZ proteins from the model plant Arabidopsis. Most of the FLZ proteins from Arabidopsis are predicted to have a high propensity for disorder in the N and C termini compared with the FLZ domain (Fig. 1B). FLZ domain-containing proteins are absent in all the green algae (Chlorophyta) species sequenced until now, and we identified an FLZ protein from Klebsormidium flaccidum, which belongs to Charophyta, suggesting that the origin of the FLZ domain coincides with the terrestrialization of plants (Fig. S1) (21). The FLZ protein of K. flaccidum was also predicted to have a high propensity for disorder at the N and C termini. Furthermore, the FLZ proteins from Bryophyta and Pteridophyta showed a similar trend, indicating that the disorder of the N and C termini of FLZ proteins is conserved (Fig. 1C). The FLZ gene family is highly expanded in spermatophytes due to common and lineage-specific whole-genome duplication (WGD) events (Fig. S1) (6). Analysis of the disorder propensity of FLZ proteins from 28 species belonging to Spermatophyta suggested that the N and C termini of FLZ proteins possess a high propensity for disorder in all species analyzed (Fig. 1D). These results collectively suggest that the disordered nature of the N and C termini of FLZ proteins is an evolutionarily conserved feature.
IDRs generally show great variation in sequence and length. We calculated the number of predicted short (10-29 residues) and long (30 or more residues) IDRs in the FLZ protein family across the plant lineage (Table S3). We found that the number of predicted short IDRs is very high compared with long IDRs in the FLZ protein family (Fig. 1E). The N termini showed a high frequency of both predicted short and long IDRs, followed by the C termini (Fig. 1F). Interestingly, many IDRs were also predicted at the junctions of the N and C termini and the FLZ domain. The probability of IDRs was found to be lower in the structured FLZ domain region (Fig. 1F).
Although the number of FLZ proteins is highly expanded in spermatophytes, the average size of FLZ proteins is conspicuously reduced in this group (Fig. S1). We compared the size of FLZ proteins from lower plants with those from Arabidopsis and rice, and the reduction in the size of the IDR-rich N terminus appears to be responsible for the reduction in overall protein size in spermatophytes (Fig. S2).
N and C termini of FLZ proteins predicted to have high propensity for protein binding
IDRs generally facilitate versatile protein and nucleic acid binding (22). To get clues about the molecular functions of the IDRs in the N and C termini of FLZ proteins, we analyzed their nucleic acid- and protein-binding propensities using DisoRDPbind, which is an efficient multiple-parameter-based predictor of protein-, DNA-, and RNA-binding residues in IDRs (23). In our analysis, both the N and C termini were predicted to have consistently high propensity for protein binding in all species analyzed. In the case of the C terminus, many proteins were also predicted to have high RNA-binding propensity (Fig. 2A).
[Fig. 1 legend, E and F: bars show the relationship between IDR length and frequency, with IDRs grouped into bins of up to five residues; red dots show the ungrouped values. F, average distribution of short (SIDR; 10-29 residues) and long (LIDR; 30 or more residues) IDRs in different regions of the FLZ protein family; details in Table S3. Fig. 2 legend: p values from paired t tests for the difference in PTM-site numbers between the N or C termini and the FLZ domain are given in the graphs; details in Table S4.]

The IDRs are generally enriched with sites for PTMs such as phosphorylation, acetylation, and methylation (24-28). These
modifications alter the target-binding properties of IDRs and increase the repertoire of protein states in the cell (17,29). Phosphorylation, arginine methylation, and lysine acetylation are known to modulate protein-protein interactions (22,27,28,30). Because the N and C termini of FLZ proteins are predicted to have high propensity for protein binding, we analyzed the extent of enrichment of putative phosphorylation-, acetylation-, and methylation-prone residues in these regions compared with the structured FLZ domain. Indeed, we found a significant increase in the number of potential serine and threonine phosphorylation sites in the N terminus, followed by the C terminus, as compared with the FLZ domain (Fig. 2B; Table S4). However, we did not find any major increase in the putative tyrosine phosphorylation sites in the N terminus, and a significant decrease in tyrosine sites was observed in the C terminus as compared with the FLZ domain region (Fig. 2B; Table S4). Furthermore, we observed significant increases in the putative arginine methylation and lysine acetylation sites in the N termini compared with the FLZ domain (Fig. 2, C and D; Table S4).
Prediction of disorder-to-order transition regions and low complexity regions in the FLZ proteins
IDRs may or may not undergo disorder-to-order transition upon binding with targets (17,31). To understand the evolutionary perspective of disorder-to-order transitions in FLZ proteins, we predicted the regions that can undergo disorder-to-order transition upon binding to globular protein partners in the IDRs of FLZ proteins from lower plants and from Amborella trichopoda, a basal species from the sister lineage of angiosperms, using ANCHOR (32,33). The IDRs in the N terminus of FLZ proteins from lower plants were predicted to be highly enriched with such binding regions, whereas a general reduction in their number and a restricted distribution of binding regions were predicted in A. trichopoda (Fig. S3A). Other spermatophytes showed the same scenario (Fig. S3B). Molecular recognition features (MoRFs) are short motifs (typically 10-70 residues long) in IDRs that undergo disorder-to-order transition upon binding with their targets, and they are generally involved in facilitating protein-protein interactions (22,34). The fMoRFpred prediction revealed low enrichment of MoRFs in the FLZ proteins in both lower plants and spermatophytes (Fig. S3, C and D) (35,36).
The low complexity regions (LCRs) in proteins are formed due to limited diversity and repetition of certain amino acid types. They are highly abundant in the IDRs of eukaryotic proteins and often enhance binding promiscuity (37-39). In our analysis using the SEG algorithm (40), LCRs were predominantly predicted in the IDRs of the N terminus in lower plants, whereas a reduction in their number and a restricted distribution were observed in spermatophytes (Fig. S3, E and F).
Arabidopsis FLZ proteins interact with all subunits of SnRK1
The in silico analysis suggested that FLZ proteins possess IDRs that are potentially involved in protein-protein interaction. It is already reported that many FLZ proteins promiscuously interact with the α kinase subunits of SnRK1 in Arabidopsis (8,9). To experimentally validate the role of IDRs in protein-protein interaction, we selected the interaction of FLZ proteins with SnRK1. SnRK1 is an obligate heterotrimeric enzyme, and FLZ proteins are proposed to be adaptors of SnRK1; therefore, we hypothesized that the interaction might not be restricted to the kinase subunits. To test this, we cloned all 18 Arabidopsis FLZs in the BD vector for the Y2H experiment with SnRK1 subunits. Prior to the Y2H experiment, BD-FLZ clones were transformed into yeast, and their auto-activation and toxicity were checked. In this assay, all 18 Arabidopsis FLZ proteins showed no auto-activation or toxicity in yeast (Fig. S4). Next, we cloned the SnRK1α1, -α2, -β1, -β2, -β3, -βγ, and -γ1 subunits in the AD vector. The BD and AD constructs were cotransformed into yeast, and interaction was analyzed. As reported previously, FLZ proteins showed promiscuous interaction with SnRK1α subunits in yeast (Fig. 3). In a previous study, no interaction of SnRK1α subunits with FLZ14 and FLZ17/18 in yeast was observed (9). Interestingly, in our analysis, we found that all 18 Arabidopsis FLZ proteins, including FLZ14 and FLZ17/18, can interact strongly with both α subunits in yeast. The β subunits showed interaction with many FLZ proteins at variable strengths. Among the β subunits, SnRK1β3 interacted with the largest number of FLZ family proteins. SnRK1βγ showed strong interaction with FLZ2 and FLZ13 and weak interaction with FLZ8. Earlier, based on sequence similarity, a cystathionine β-synthase protein was annotated as a γ subunit in Arabidopsis; however, protein-protein interaction and complementation studies found that it can neither complement the yeast snf4 mutant nor interact with the α and β subunits (41,42). Later, this protein was found to have sequence similarity with SDS23, which works as an alternative energy sensor in fungi (5). None of the 18 FLZ proteins showed interaction with this SnRK1γ1/SDS23-LIKE (SDS23L) protein, suggesting that FLZ proteins specifically interact with the SnRK1 subunits that were found to form an enzyme complex in the earlier study (Fig. 3) (42). SNF1-related kinase 1 activating kinases 1 and 2 (SnAK1 and SnAK2) are redundant upstream kinases of SnRK1 that are essential for SnRK1 activity (43). Because these kinases physically interact with the SnRK1 complex, we investigated whether FLZ proteins can also interact with SnAK1 and SnAK2. In the Y2H assay, none of the 18 FLZ proteins showed interaction with SnAK1 or SnAK2, confirming that the interaction of FLZ proteins is restricted to the SnRK1 complex (Fig. S5).

[Fig. 3 legend: The CDS of the SnRK1 and FLZ genes were cloned in AD and BD vectors, respectively. The constructs were cotransformed, and interaction was screened on DDO (upper row in each group) and QDO plates supplemented with X-α-Gal and AbA (lower row in each group). Simultaneously, a negative control experiment with the BD vector and AD construct was carried out to identify false interactions.]
FLZ domain mediates the interaction with SnRK1α subunits
The FLZ domain of FLZ1 is sufficient to mediate its interaction with PFA-DSP3 and STH2, suggesting that the FLZ domain is the canonical protein-protein interaction module in FLZ proteins (6). Consistent with this, the FLZ domain of FLZ12 alone was found to be sufficient to mediate interaction with SnRK1α subunits (9). Our in silico analysis identified that the N and C termini of FLZ proteins are highly enriched with IDRs with protein-binding propensities, and that the FLZ domain is the least disordered region in FLZ proteins. To find out which part of the FLZ protein is responsible for the interaction with SnRK1α subunits, we first cloned the FLZ domain region from five diverse FLZ proteins (Fig. 4A) and checked their ability to interact with SnRK1α1 and -α2. Interestingly, all five FLZ domains showed strong interaction with SnRK1α2, whereas the FLZ
domain of FLZ1, -2, and -8 showed strong interaction with SnRK1α1 (Fig. 4B). However, the FLZ domain of FLZ3 showed a weak interaction, and the FLZ domain of FLZ15 did not show any interaction at all (Fig. 4B). Consistent with the previous report (9), these results suggest that the FLZ domain is majorly responsible for facilitating interaction with SnRK1α subunits. Furthermore, we checked whether the IDR-rich N and C termini of FLZ1 and FLZ2 can establish interaction with SnRK1α subunits. In the Y2H assay, neither the N nor the C termini showed interaction with SnRK1α (Fig. 4C). To confirm the role of the FLZ domain in mediating interaction with SnRK1α subunits, we replaced conserved cysteine residues with serine in the FLZ domain of FLZ8 (Fig. 4D). In the interaction assay with FLZ8Δ1, the strength of interaction with SnRK1α1 was reduced compared with the intact FLZ domain, whereas FLZ8Δ2 and FLZ8Δ1-Δ2 showed a more pronounced reduction in the interaction (Fig. 4E). The interaction property of FLZ8Δ1 was dramatically reduced when we also replaced the adjacent cysteine residues (Cys-225 and Cys-227; FLZ8Δ1-Δ3 construct) with serine residues (Fig. 4E). This result suggests that the ability of the nonconserved cysteine residues (Cys-225 and Cys-227) to partially replace the function of the conserved cysteine residues (Cys-226 and Cys-229) could be the reason for the retention of interaction in the FLZ8Δ1 construct. Taken together, the mapping and site-directed mutagenesis (SDM) analyses confirmed that the FLZ domain is the canonical SnRK1α-interacting module in FLZ proteins.
N terminus mediates the interaction with SnRK1β and βγ subunits
We further investigated the role of different parts of FLZ proteins in mediating interaction with the SnRK1β and βγ subunits. Strikingly, except for a few weak interactions, the FLZ domain failed to establish interaction with the SnRK1β1-β3 and βγ subunits (Fig. 4, F-I). This result suggested that other regions in the FLZ proteins might be responsible for mediating interaction with the SnRK1β and βγ subunits. To test this, we checked the interaction of the N and C termini of FLZ1 with the SnRK1β subunits and of the N and C termini of FLZ2 with the SnRK1β and βγ subunits. We found that the N terminus alone is sufficient to mediate interaction with the SnRK1β and βγ subunits (Fig. 5, A and B). Apart from the α subunits, FLZ8 interacts strongly with all three β subunits in yeast (Fig. 3). Because the N termini of FLZ1 and FLZ2 are responsible for interaction with the SnRK1β subunits, we hypothesized that mutation of the cysteine residues in the FLZ domain should not hamper the interaction of FLZ8 with the β subunits. In our assay, we found that the FLZ8Δ1, FLZ8Δ2, and FLZ8Δ1-Δ2 constructs produced strong interaction with all three β subunits, indicating that the IDR-rich N terminus is responsible for establishing interaction with the SnRK1β and βγ subunits (Fig. 5, C-E).
FLZ-SnRK1 interaction site colocalizes with endoplasmic reticulum
To confirm the Y2H results and to identify the in planta interaction site of FLZ-SnRK1, we performed BiFC assays of
SnRK1 subunits with different FLZ proteins. In the BiFC assay, all FLZ proteins were found to interact with SnRK1α subunits in cytoplasmic foci as sinuous bodies (Fig. 6A; Fig. S6). These bodies were predominantly found in close association with the nucleus. These results prompted us to investigate whether they correspond to ER associated with the nucleus. To investigate this possibility, we employed a widely used ER marker, which was constructed by fusing the ER signal peptide of WALL-ASSOCIATED KINASE 2 and an ER retention peptide to mCherry (44). Indeed, we found a strong colocalization of the BiFC signal with the ER-marker signal (Fig. 6A; Fig. S6). To further confirm this, we used ER-Tracker Red dye. The interaction of FLZ15 with SnRK1α1 was found to colocalize with the signal of the ER-Tracker dye (Fig. 6B). These results collectively confirm the colocalization of the SnRK1-FLZ interaction with the ER. The interaction of FLZ4 with SnRK1β3 was also found to be localized to cytoplasmic sinuous bodies (Fig. S7A). Furthermore, staining with ER-Tracker Red dye identified that this interaction also colocalizes with the ER (Fig. S7B). The BiFC signal of the FLZ-SnRK1 interaction was found to be specific, because removal of the SnRK1 subunits or FLZ proteins resulted in the abolition of the fluorescent signal (Fig. S8). Intriguingly, subcellular localization analysis found that only FLZ9 and FLZ15 are predominantly localized to cytoplasmic bodies, whereas most of the FLZ proteins were found to be localized uniformly in both the nucleus and cytoplasm (Fig. S9).
FLZ proteins undergo homo- and heterodimerization
The in planta BiFC assay identified that FLZ proteins interact with SnRK1 subunits in the same cytoplasmic foci. This result suggested either that different FLZ proteins might be recruited to the SnRK1 complex on the basis of specific cues or that SnRK1 might form a complex with multiple FLZ proteins simultaneously. The promiscuous interaction of FLZ proteins with SnRK1α subunits led us to propose their role as adaptor proteins of SnRK1, which help in the recruitment of SnRK1 targets to the complex (9). In this investigation, we also observed that multiple FLZ proteins interact with not only
SnRK1α but also with the β and βγ subunits. These results led us to speculate that various FLZ proteins might form a complex with SnRK1, and that the FLZ proteins might physically interact among themselves to facilitate the formation of this complex.
To test this hypothesis, we cloned 10 FLZs in the AD vector, and their interaction with 16 FLZ proteins in the BD vector was screened through the Y2H assay. Indeed, we found homo- and heterodimerization of FLZ proteins with varying strengths (Fig. 7). Among the combinations we analyzed, FLZ7, FLZ10, FLZ12, and FLZ15 showed homodimerization. Interestingly, these proteins also showed a high degree of heterodimerization with other proteins, whereas members such as FLZ1, FLZ6, and FLZ11 interacted specifically with only one or two FLZ proteins (Fig. 7).
N terminus is involved in the interaction among FLZ proteins
To identify which part of the protein is responsible for mediating interaction among the members of the FLZ family, we first analyzed which part of FLZ2 can mediate the interaction with the other FLZ proteins. Mapping of the interacting regions identified that, among the different parts of FLZ2, only the N terminus could recapitulate the interaction with the full-length FLZ proteins (Fig. 8A). To confirm the role of the N terminus in mediating the interaction with other FLZ proteins, we mapped the FLZ-interaction region of FLZ1, which interacted with FLZ7 and FLZ15 in the previous assay. In the interaction-site mapping, FLZ7 and FLZ15 were found to interact strongly with the N terminus of FLZ1; FLZ15 showed a weak interaction with the FLZ domain as well (Fig. 8B). The Y2H assay with the N termini of both interacting FLZ proteins identified strong interaction, confirming that the N terminus alone is sufficient for mediating interaction among these FLZ proteins (Fig. 8, C and D).
IDRs in the N terminus are involved in mediating interaction with FLZ proteins and SnRK1 subunits
The in silico analyses predict that the N terminus of FLZ proteins is enriched with protein-binding IDRs. In agreement with this, the protein-protein interaction analysis identified that the IDR-rich N terminus of FLZ proteins is responsible for mediating interaction with other FLZ proteins and with the SnRK1β and βγ subunits. To decipher the specific role of IDRs in mediating these interactions, we first selected FLZ1 and FLZ2, a paralogous pair that showed extensive interaction with other FLZ proteins and SnRK1 subunits in the Y2H assays. FLZ1 is a comparatively large protein due to its longer N terminus harboring a long IDR of 79 residues, which covers most of the N terminus (hereinafter referred to as FLZ1 NIDR1) (Fig. 9A). In FLZ2, the middle region of FLZ1 NIDR1 was lost during evolution, which resulted in the formation of two separate IDR-rich regions (hereafter referred to as FLZ2 NIDR1 and FLZ2 NIDR2). The second IDR-rich region in FLZ2 was found to be only 9 amino acids long under the cutoff parameters we employed; however, because this region shows similarity with the long FLZ1 NIDR1, we considered it a short IDR (Fig. 9A). We first cloned the short IDRs of FLZ2 separately, and their interaction properties were analyzed with FLZ7 and FLZ10, which were found to interact with FLZ2 through the N terminus in the earlier experiment (Fig. 8C). Intriguingly, only FLZ2 NIDR2 could recapitulate the interaction of FLZ2 with FLZ7 and FLZ10 (Fig. 9B). This result suggested that specific IDRs in the N terminus might be involved in mediating interaction among FLZ proteins. Furthermore, we analyzed which of the IDRs is involved in mediating the interaction of FLZ2 with the SnRK1β and βγ subunits. The SnRK1β and βγ subunits were found to interact with both IDRs in yeast (Fig. 9C). Collectively, these results suggest that IDRs, specifically or collectively, are involved in mediating the interactions of FLZ proteins with other proteins. To identify whether this specificity of IDRs is conserved, we checked the interaction capacity of the long IDR of FLZ1. We divided FLZ1 NIDR1 into two parts, of which the first harbors the region that shows similarity with FLZ2 NIDR1, whereas FLZ1 NIDR1(49-81) shows similarity with FLZ2 NIDR2 (Fig. 9A). In the interaction assay with both parts of FLZ1 NIDR1, only FLZ1 NIDR1(49-81), which shows sequence similarity with FLZ2 NIDR2, could recapitulate the interaction of FLZ1 with FLZ7 and FLZ15 (Fig. 9D). FLZ1 interacts with all three SnRK1β subunits through the N terminus. In the Y2H assay with FLZ2 NIDR1 and FLZ2 NIDR2, we found that both of these IDRs are cooperatively involved in facilitating the interaction with the SnRK1β subunits (Fig. 9C). To find out whether this property is conserved in FLZ1 as well, we tested the interaction of FLZ1 NIDR1 and FLZ1 NIDR1(49-81) with the SnRK1β subunits. As observed for FLZ2, both regions were found to interact with the SnRK1β subunits (Fig. 9E). These results suggest that specific IDR regions might be involved in mediating different interactions of FLZ proteins, and this specificity might be conserved among paralogs.

[Fig. 7 legend: The FLZ genes were cloned in AD and BD vectors; the constructs were cotransformed; and interaction was screened on DDO (upper row in each group) and QDO plates supplemented with X-α-Gal and AbA (lower row in each group). Simultaneously, a negative control experiment with the BD vector and AD construct was carried out to identify false interactions.]
To further confirm the role of IDRs in mediating specific interactions, we made a chimeric construct in which FLZ2 NIDR2, which mediates the interaction with FLZ10, was fused with the N terminus of FLZ1. In the Y2H assay, the FLZ1 N terminus did not show interaction with FLZ10, whereas the chimeric construct with FLZ2 NIDR2 showed interaction with FLZ10, confirming the role of IDRs in mediating specific interactions (Fig. 9F).
Different regions of FLZ proteins show different rates of sequence divergence
During evolution, protein structure tends to be more conserved than sequence (45). Consistent with this, many studies suggest enhanced sequence divergence among IDRs (46,47). However, later elaborate studies identified a more nuanced pattern of sequence divergence among IDRs, where some IDRs and the amino acids contributing to IDR formation were found to be highly conserved (17,47-49). The FLZ domain was found to be the most conserved region in FLZ proteins, suggesting the existence of different rates of sequence divergence among the different regions of FLZ proteins (6,7). To determine whether the IDR-rich N and C termini of FLZ proteins show more sequence divergence than the ordered FLZ domain, we estimated the Ka/Ks ratio of the N and C termini and the FLZ domain region of putative orthologous genes from six closely related species each from monocots and eudicots (Fig. 10; Table S5). Interestingly, among the 112 gene pairs we analyzed, 92 pairs showed an increased Ka/Ks ratio for the N terminus compared with the FLZ domain; the remaining 20 pairs showed a higher ratio for the FLZ domain region than for the N terminus (Table S5). This observation suggests that different regions of FLZ proteins are evolving under different evolutionary constraints and that, in general, the N terminus is under more relaxed selection than the FLZ domain. Interestingly, the C-terminal region showed great variation in the Ka/Ks ratio (Fig. 10; Table S5). In many proteins, the C-terminal region showed very high sequence divergence, whereas in some proteins this region was found to be under strong purifying selection. We found high variation in the ratio among orthologous genes from the same species pair. The difference in the age of duplicated genes, which arose through the rampant WGD events in the angiosperms, could be the major contributing factor for this variation in the ratio (50).
Discussion
SnRK1/SNF1/AMPK1 is a conserved obligate heterotrimeric serine/threonine kinase in eukaryotes. Mounting evidence from studies using diverse eukaryotic models identified that AMPK and its homologs work as master regulators of adaptive growth in response to energy deficit (3). To achieve this, SNF1/AMPK1 interacts with a large number of proteins involved in the regulation of primary metabolism, transcription, splicing, translation, protein trafficking, autophagy, protein degradation, etc. (51-56). Many of these proteins are phosphorylation targets of SNF1/AMPK1, and through these phosphorylation events it works as a central regulator of growth in eukaryotes (57,58). Understandably, AMPK signaling is implicated in many human diseases (59). The SnRK1 subunits are also found to interact with diverse proteins and possess functions similar to those of SNF1/AMPK (8,60). These studies suggest that the SnRK1/SNF1/AMPK1 complex works as a convergence point, or hub, of many diverse signaling pathways. Earlier, due to the promiscuous nature of the interaction of FLZ proteins with SnRK1α subunits and their common interacting partners, FLZ proteins were proposed to be scaffolds of the SnRK1 enzyme complex (9,10). In this study, our in silico analysis predicts that FLZ proteins possess evolutionarily conserved IDRs. Together with the FLZ domain, these contribute to the interaction of these proteins with SnRK1 subunits and other FLZ proteins. These IDRs might play an important part in the proposed role of FLZ proteins as scaffolding proteins.
The enhanced propensity for protein binding and the enrichment of PTM sites observed in the in silico analysis prompted us to investigate the role of IDRs in protein-protein interaction. Intriguingly, the FLZ domain region showed a significant enrichment of potential tyrosine phosphorylation sites, which could be due to the presence of two relatively conserved tyrosines in the spacer region across the plant lineage (6,7). The conservation of these residues across the plant lineage suggests their possible roles in regulating the protein-protein interactions mediated by the FLZ domain. In our Y2H assays with all subunits of SnRK1 in Arabidopsis, we could identify previously unknown interactions of FLZ proteins with SnRK1 kinase subunits, which were also verified by the BiFC assay. Furthermore, FLZ proteins also showed extensive interaction with the SnRK1β and βγ subunits. It should be noted that there could be more interactions between SnRK1 subunits and FLZ proteins than those we identified in this study, because Y2H interactions are heavily dependent on the orientation of the construct and the choice of AD and BD vectors (61).
The promiscuous interactions of FLZ proteins with SnRK1 subunits further reinforce their role as scaffold proteins of the SnRK1 complex. The FLZ proteins are relatively small proteins in angiosperms, and we speculated that the scaffold might be
formed due to the interaction of different FLZ proteins. Indeed, we found that FLZ proteins show homo- and heterodimerization properties. Intriguingly, some proteins, such as FLZ7, FLZ12, and FLZ15, showed promiscuous dimerization, whereas proteins such as FLZ1 and FLZ6 showed very specific interactions. This is an interesting observation because, in multiprotein complexes, some core proteins can possess a promiscuous interaction property that helps in recruiting other subunits to the complex. A remarkable example of such core proteins is the MED14 and MED17 subunits of the Mediator complex, which interact with a large number of other subunits (62). Further functional and structural studies can uncover whether FLZ7, FLZ12, and FLZ15 possess similar functions in facilitating the formation of the FLZ protein complex.
Our results suggest that the IDRs in the N terminus are involved in the complex formation of FLZ proteins with the SnRK1β and βγ subunits and with other FLZ proteins. The FLZ domain was found to be solely responsible for mediating interaction with the SnRK1α subunits. However, in some cases the FLZ domain produced weak or no interaction. Previously, we showed that although the FLZ domain is sufficient to mediate interaction with other proteins, the strength of the interaction is significantly diminished when the FLZ domain alone is used, suggesting that other regions in the protein facilitate stronger binding (6). Apart from this, the stringency of the selection medium may also have contributed to the weak interaction, and to the lack of interaction, observed for the FLZ domains of FLZ3 and FLZ15 with SnRK1α1. Although we predicted the enrichment of IDRs in the C terminus, in our interaction analysis with C-terminal constructs we could not recapitulate any interactions. The C termini of FLZ proteins are usually small, and it remains to be seen whether the IDRs in this region possess any specific function or indirectly contribute to interactions facilitated by other regions.
We also found a general reduction in the size of FLZ proteins in most of the angiosperms, with limited novel domain acquisition. This reduction in overall protein size seems to have occurred due to the contraction of the N termini. Although the number of FLZ proteins is limited in algae and bryophytes, these proteins have a relatively long N terminus with more disordered regions. In angiosperms, due to the contraction of the N termini, the number and length of IDRs are generally reduced. The protein-protein interaction analysis in Arabidopsis suggests that FLZ proteins interact among themselves, and these interactions might be important in scaffold formation. Many regulatory pathways that are essential for survival in the terrestrial environment originated as simpler pathways with few proteins and regulatory modules in algae and bryophytes, and their complexity gradually increased with the addition of more genes and regulatory interactions in higher plants (21,63,64). A classic example is the evolution of the auxin perception signaling pathway, where successive additions of different modules resulted in the formation of a multilayered and complex regulatory pathway in higher plants (65). Interestingly, the expansion of the FLZ gene family is also implicated in the evolution of body plan complexity in angiosperms (66). One possibility is that the solitary or limited number of FLZ proteins in Charophyta and Bryophyta, with their long, IDR-enriched N termini, can harbor SnRK1 and its limited number of binding partners in lower plants. The rampant gene duplication events in higher plants and the gradual increase of biological complexity might also have increased the complexity of SnRK1 signaling. The concurrent duplication and divergence of FLZ genes in higher plants might have resulted in functional specialization and facilitated scaffold formation through the interaction of multiple FLZ proteins. Supporting this hypothesis, comparison of the distribution of ANCHOR-based disorder-to-order transition regions and LC regions in lower plants and spermatophytes identified a restricted distribution, where only some proteins in a species showed an enhanced number of such regions in spermatophytes. Taken together, these results suggest a gradual increase in the complexity of the SnRK1-FLZ signaling pathway in higher plants. Interestingly, even with a very low stringency for MoRF prediction (five tandem amino acid residues with a propensity score of ≥0.5), a very low number of MoRFs was predicted in the FLZ proteins, especially in the N-terminal region. In fact, proteome-level analysis identified that only about 21% of the IDRs in eukaryotes contain MoRFs; this percentage increases slightly to 29% in bacteria and archaea (36). These results indicate that a large majority of IDRs do not undergo disorder-to-order transition upon binding to their targets. Such IDRs with increased conformational plasticity are often involved in the formation of fuzzy complexes (17). ANCHOR predicted more disorder-to-order transition regions in FLZ proteins compared with fMoRFpred. This difference could be due to the difference in the prediction methods, where MoRFpred uses a more elaborate prediction strategy and displayed significantly better performance (33,35,67). The reduced frequency of MoRFs in the IDRs of FLZ proteins suggests that they might be involved in the formation of complexes with more conformational freedom, such as fuzzy complexes, and this feature might help in recruiting diverse targets of SnRK1.
The increased sequence divergence observed in the IDR-rich regions of FLZ proteins might have accelerated the adaptability and enhanced the binding repertoire of IDRs.
The in planta analysis identified that FLZ proteins interact with SnRK1 subunits in common cytoplasmic foci, indicating the formation of a complex. The colocalization analyses with a marker protein and a stain in this study and in a previous study identified that the SnRK1-FLZ interaction colocalizes with the ER, which provides clues about the biological significance of this complex (14). In most cases, these interactions were located in the proximity of the nucleus, suggesting that these might be regions of the ER that are studded with ribosomes and where mRNA translation is ongoing. Studies in various eukaryotes, including plants, suggest a pivotal role of SNF1/AMPK/SnRK1 signaling in negatively regulating protein synthesis during energy starvation (3,4,68). Upon activation by starvation signals, AMPK inhibits TOR activity by phosphorylating RAPTOR, which promotes its dissociation from the TOR complex (69). This phosphorylation and the resulting negative regulation are conserved in plants as well (70). The multisubunit eukaryotic initiation factor 3 (eIF3), which is involved in translation initiation, also works as a scaffold for TOR and its immediate target, ribosomal S6 kinases 1 and 2 (S6K1/2). Upon
activation, the mTOR complex phosphorylates S6K1/2, which results in its dissociation from the eIF3 complex and its further phosphorylation and activation by PDK1. The activated S6K1/2 phosphorylates eukaryotic translation initiation factor 4B (eIF4B) and ribosomal protein S6 (RPS6), a subunit of the 40S ribosome (71). mTOR, through the phosphorylation events detailed above and other phosphorylations, promotes translation in response to activation signals (68). In plants, this mechanism is conserved, and auxin was identified as one of the potent activators of TOR (72). The TOR complex can also directly associate with the ribosome (72,73). Taken together, these results indicate that TOR and S6K shuttle from the inactive to the active pool depending on signals and that they are linked with ribosomes. Although the role of AMPK as a potent inhibitor of mTOR-mediated activation of translation is known, how AMPK is recruited to this complex is still elusive. The interaction of cell death-inducing DNA fragmentation factor 45-like effector A (CIDEA) with an AMPK subunit colocalizes with the ER in brown adipose tissue, suggesting that AMPK also exists in close proximity to ribosomes (74). In subcellular localization assays, SnRK1 kinase subunits were found to be predominantly localized in the nucleus and cytoplasm (3,9,75). We found that most of the FLZ proteins also show a similar localization pattern. However, the specific localization of the SnRK1-FLZ interaction to bodies colocalizing with the ER indicates that FLZ proteins might be responsible for the recruitment of SnRK1 to the translation-regulating complex. This hypothesis is supported by the fact that FLZ proteins also show promiscuous interaction with RAPTOR (8). The expression of FLZ genes is highly regulated by cellular energy levels (76). Indeed, functional analysis of individual FLZ genes identified that they not only work as inert scaffolding proteins but also regulate the level of SnRK1α1 and hence SnRK1 and TOR activity (14). Energy status and SnRK1 signaling regulate the transcription of FLZ genes (76). These results indicate that this complex formation might be regulated by energy status and might be involved in the regulation of SnRK1 signaling in plants. Taken together, this study identified that the FLZ domain and the IDRs in the FLZ proteins coordinate in the formation of a complex with SnRK1. Because SnRK1 is a highly structured complex, the surface provided by the interaction of multiple FLZ proteins may help in the recruitment of upstream and downstream factors to the SnRK1 complex.
Identification of FLZ proteins from different plant genomes
In previous studies, we identified FLZ proteins from more than 40 plant genomes through BLASTP- and Pfam/InterPro ID (PF04570 and IPR007650)-based searches (6,7). We used this dataset to create a hidden Markov model (HMM) profile of the FLZ domain. Using this HMM profile, proteins containing the FLZ domain were identified from 33 genomes representing important taxonomical positions in the plant lineage (Fig. S1) using hmmsearch, available in the HMMER web server version 2.15.0, with the given parameters (significance E-value: 0.01 for sequence and 0.03 for hits). Simultaneously, three rounds of iterative BLAST searches were performed in the reference proteomes using aligned FLZ domain sequences with jackhmmer, available in the HMMER web server version 2.15.0, with the given parameters (significance E-value: 0.01 for sequence and 0.03 for hits). The hits obtained from the profile HMM-based search and the iterative BLAST searches were combined with the previous dataset, and outliers and repeats were removed. Finally, the nonredundant dataset was screened with the Batch Web CD-Search Tool (77) for the presence of the FLZ and other protein domains (Table S2). The final set of protein sequences used for the analysis is given in Table S1. The species tree of the 33 genomes of interest was retrieved from the NCBI Taxonomy Common Tree Tool and edited and visualized in FigTree version 1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/). The reported common and lineage-specific WGD events (50,79) in these species were also annotated on the species tree. The average size of FLZ proteins in a species was calculated as the mean size of all FLZ proteins in the given species.
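A minimal sketch of how such a profile search can be scripted is shown below; it shells out to HMMER's hmmsearch with the E-value cutoffs quoted above. The file names are placeholders, and mapping the stated cutoffs to the -E and --domE flags is our assumption, not a detail given in the text.

```python
# Sketch: run an HMM profile search for FLZ-domain proteins with HMMER3.
# flz_domain.hmm and proteome.fasta are hypothetical file names.
import subprocess

def run_hmmsearch(hmm_profile: str, proteome: str, out_table: str) -> None:
    """Run hmmsearch with per-sequence and per-hit E-value cutoffs."""
    subprocess.run(
        ["hmmsearch",
         "-E", "0.01",        # sequence significance E-value (assumed mapping)
         "--domE", "0.03",    # per-hit (domain) significance E-value (assumed mapping)
         "--tblout", out_table,
         hmm_profile, proteome],
        check=True,
    )

def parse_hits(out_table: str) -> list:
    """Collect target sequence IDs from the hmmsearch tabular output."""
    hits = []
    with open(out_table) as fh:
        for line in fh:
            if not line.startswith("#"):
                hits.append(line.split()[0])  # first column is the target name
    return hits

if __name__ == "__main__":
    run_hmmsearch("flz_domain.hmm", "proteome.fasta", "flz_hits.tbl")
    print(parse_hits("flz_hits.tbl"))
```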
Disorder prediction
The disorder of FLZ proteins was predicted by the metapredictor PONDR-FIT (20) using protein sequences as input. The average disorder score and standard deviation of each protein residue were obtained in tabular form, which was used for the subsequent analysis (deposited in Figshare; https://figshare.com/s/0372c02ce2ed1173d93e). We classified IDRs into two groups: amino acid stretches ranging from 10 to 29 residues with an average disorder score of ≥0.5 for each residue are classified as short IDRs, and amino acid stretches of ≥30 residues with an average disorder score of ≥0.5 for each residue are categorized as long IDRs. As in earlier studies (80,81), a maximum of three tandem amino acids with an average disorder score of less than 0.5 was set as the tolerance limit. The boundaries of the N terminus, the FLZ domain, and the C terminus were determined by the Batch Web CD-Search Tool (77), and the average disorder score of each part was calculated from the disorder scores of all amino acid residues in the region. An IDR is categorized as a junction IDR if at least five disordered residues (disorder score ≥0.5 for each residue) are located in the other region. The IDRs identified from the different regions of FLZ proteins are given in Table S3. Statistical sampling and data sorting were performed with in-house C++ scripts, and figures were generated in gnuplot version 5.2.
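The classification rule described in this paragraph can be expressed compactly in code. The sketch below is our illustration (the authors used in-house C++ scripts): it segments a list of per-residue disorder scores into IDRs, tolerating up to three consecutive sub-threshold residues, and then splits them into short and long classes.

```python
# Sketch of the IDR classification rule: residues with disorder score >= 0.5
# form stretches, allowing up to three consecutive sub-threshold residues
# inside a stretch; stretches of 10-29 residues are short IDRs, >= 30 long.
def find_idrs(scores, threshold=0.5, tolerance=3, min_len=10):
    """Return (start, end) index pairs (0-based, inclusive) of IDRs."""
    idrs, start, below = [], None, 0
    for i, s in enumerate(scores):
        if s >= threshold:
            if start is None:
                start = i
            below = 0
        elif start is not None:
            below += 1
            if below > tolerance:              # stretch broken
                end = i - below                # last above-threshold residue
                if end - start + 1 >= min_len:
                    idrs.append((start, end))
                start, below = None, 0
    if start is not None:                      # close a stretch at the sequence end
        end = len(scores) - 1 - below
        if end - start + 1 >= min_len:
            idrs.append((start, end))
    return idrs

def classify_idrs(idrs):
    """Split IDRs into short (10-29 residues) and long (>= 30 residues)."""
    short = [r for r in idrs if (r[1] - r[0] + 1) <= 29]
    long_ = [r for r in idrs if (r[1] - r[0] + 1) >= 30]
    return short, long_
```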
Binding propensity, disorder-to-order transition, and low complexity region predictions
The protein and nucleic acid binding propensities of the N and C termini were calculated by DisoRDPbind (23) using protein sequences as input. The protein-, DNA-, and RNA-binding scores of each protein residue were obtained in tabular form. The average binding propensity of the N and C termini was calculated from the binding scores of all amino acid residues in the particular region (deposited in Figshare; https://figshare.com/s/0372c02ce2ed1173d93e).
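The per-region averaging is straightforward; a toy version is given below, with made-up scores and domain boundaries purely for illustration.

```python
# Minimal sketch of per-region averaging: given per-residue scores (e.g.,
# DisoRDPbind protein-binding propensities) and FLZ-domain boundaries from a
# CD-search, average the scores over each region. Boundary values are invented.
def region_means(scores, domain_start, domain_end):
    """Mean score for the N terminus, FLZ domain, and C terminus."""
    n_term = scores[:domain_start]
    domain = scores[domain_start:domain_end + 1]
    c_term = scores[domain_end + 1:]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {"N": mean(n_term), "FLZ": mean(domain), "C": mean(c_term)}

# Example: a 12-residue protein with the domain at residues 4-8 (0-based).
print(region_means([0.8, 0.7, 0.9, 0.6, 0.2, 0.1, 0.2, 0.3, 0.2, 0.7, 0.8, 0.9], 4, 8))
```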
The protein-binding disordered regions of FLZ proteins, which undergo disorder-to-order transition upon binding with globular proteins, were predicted by ANCHOR (33) using protein sequences as input. The results obtained are presented in tabular form (Table S6). The MoRFs were predicted by fMoRFpred (36); at least five amino acids in tandem with a MoRF propensity score of ≥0.5 were considered a potential MoRF. The final results obtained are presented in tabular form (Table S7).
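The MoRF-calling rule stated here (at least five tandem residues with propensity ≥0.5) is easy to reproduce; the sketch below scans a list of per-residue scores, which stand in for fMoRFpred output.

```python
# Sketch of the MoRF calling rule: any run of at least five consecutive
# residues with propensity >= 0.5 is flagged as a potential MoRF.
def find_morfs(scores, threshold=0.5, min_len=5):
    """Return (start, end) 0-based inclusive spans of candidate MoRFs."""
    spans, start = [], None
    for i, s in enumerate(scores + [float("-inf")]):  # sentinel closes the last run
        if s >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                spans.append((start, i - 1))
            start = None
    return spans

print(find_morfs([0.6, 0.7, 0.8, 0.9, 0.6, 0.1, 0.5, 0.5]))  # -> [(0, 4)]
```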
The LCRs in the FLZ proteins were identified by the SEG algorithm (40), and the results obtained are presented in tabular form (Table S8). The average protein and nucleic acid binding propensities and the frequencies of ANCHOR-based binding regions, MoRFs, and LCRs were converted to heat maps using MultiExperiment Viewer (MeV, version 4.8) (82).
Post-translational modification site prediction
The putative serine, threonine, and tyrosine phosphorylation sites with a prediction score of ≥0.5 in the N and C termini and FLZ domain regions of FLZ proteins were identified by NetPhos 3.1 (26). The putative arginine and lysine methylation sites with a support vector machine score of ≥0.5 in the different regions of FLZ proteins were identified by PMeS (83). The acetylation residues in the N and C termini and the FLZ domain were predicted with PAIL (84) at medium stringency. The PTM site data obtained from this analysis are presented in Table S4. A paired t test was used to identify differences in the distribution of PTM sites in the N and C termini compared with the FLZ domain.
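For readers who want to reproduce the comparison, the following sketch pairs per-protein PTM-site counts for two regions and runs a paired t test with SciPy; the counts are invented for illustration.

```python
# Sketch of the statistical comparison: per-protein counts of predicted PTM
# sites in the N terminus vs. the FLZ domain, compared with a paired t test.
# The counts below are made up; one value per FLZ protein in each list.
from scipy import stats

n_terminus_sites = [12, 9, 15, 7, 11, 14, 8, 10]
flz_domain_sites = [4, 3, 6, 2, 5, 4, 3, 4]

t_stat, p_value = stats.ttest_rel(n_terminus_sites, flz_domain_sites)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```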
Yeast two-hybrid assays
The full-length CDS of the FLZ genes and SnRK1 subunits were amplified with CDS-specific primers (Table S9) and cloned into the pCR8/GW/TOPO vector using the pCR8/GW/TOPO cloning kit (Invitrogen). Subsequently, positive clones were mobilized to the pGBKT7g (BD) and pGADT7g (AD) vectors (61) using the Gateway cloning strategy (Invitrogen). All partial clones of FLZ genes were prepared using specific primers (Table S9) and the same strategy. The chimeric FLZ1 N-FLZ2 NIDR2 construct was assembled in the pJET1.2 vector (Thermo Fisher Scientific) and mobilized to the pCR8/GW/TOPO vector; subsequently, the construct was mobilized to the pGBKT7g vector. All Y2H experiments were conducted in the Y2HGold yeast strain according to the manufacturer's protocol (Clontech). Before the Y2H experiments, all BD constructs were subjected to auto-activation and toxicity tests as per the manufacturer's protocol (Clontech). For Y2H assays, the respective AD and BD constructs were cotransformed into the Y2HGold strain using the EZ-Yeast transformation kit (MP Biomedicals), and transformed colonies were selected on double dropout medium (DDO; Trp-/Leu-). The transformed colonies were cultured, and equal amounts of cells were spotted on interaction-screening quadruple dropout medium (QDO) supplemented with X-α-Gal and aureobasidin A (Trp-/Leu-/His-/Ade-/X-α-Gal+/AbA+). Simultaneously, a negative control experiment with the BD vector and AD construct was carried out to identify false interactions. The experiments were repeated three times.
Site-directed mutagenesis
The SDM of the FLZ domain of FLZ8 (C225S, C226S, C227S, C229S, C249S, and C252S) was performed in the pCR8/GW/TOPO-FLZ8 construct with the QuikChange site-directed mutagenesis kit using specific primers (Table S9) according to the manufacturer's protocol (Agilent). All mutations were verified by sequencing. Subsequently, the mutated constructs were mobilized to pGBKT7g using Gateway cloning (Invitrogen).
Bimolecular fluorescence complementation
The FLZ and SnRK1 subunits cloned in the pCR8/GW/TOPO vector were mobilized to the pSAT4-DEST-N(1-174) EYFP-C1 and pSAT5-DEST-C(175-END) EYFP-C1 vectors (85) using the Gateway cloning strategy (Invitrogen). The BiFC assays were performed in the onion epidermal peel system by bombarding both constructs with a PDS-1000 Helios Gene Gun (Bio-Rad). A negative control experiment was conducted with the vector alone along with the other construct to identify false-positive results. The ER-marker construct (44) was obtained from the ABRC and transformed with the BiFC constructs for colocalization. After bombardment, samples were incubated at 22 °C for at least 16 h in the dark. DAPI staining was performed as described previously (6). For staining with ER-Tracker Red dye (Invitrogen), the samples were washed three times in Hanks' balanced salt solution (HBSS) without phenol red. Subsequently, samples were stained with 1 μM ER-Tracker Red dye prepared in HBSS at 30 °C for 30 min in the dark. The samples were subjected to a quick wash in HBSS before visualization. All visualization and photography were performed with a TCS SP2 (AOBS) laser confocal scanning microscope (Leica Microsystems) or an AxioImager M2 imaging system (Zeiss).
Subcellular localization assays
Subcellular localization assays were performed in onion epidermis and Arabidopsis mesophyll protoplasts. The FLZ genes cloned in the pCR8/GW/TOPO vector were mobilized to the pEG104 vector (86) using the Gateway cloning strategy (Invitrogen). The constructs and the vector alone were transfected individually into onion epidermis with a PDS-1000 Helios Gene Gun (Bio-Rad) and incubated at 22 °C for at least 16 h in the dark. Arabidopsis mesophyll protoplasts were prepared and transfected with the vector and constructs as described previously (87). After transfection, mesophyll protoplasts were incubated at 22 °C for 16 h in the light. All visualization and photography were performed with a TCS SP2 (AOBS) laser confocal scanning microscope (Leica Microsystems).
Analysis of selection pressure on FLZ proteins
The putative orthologs from six closely related species each from monocots and eudicots were recovered through Bayesian phylogenetic reconstruction in TOPALi version 2.5 (88). The protein and CDS sequences were split into N and C termini and the FLZ domain based on the Batch Web CD-Search Tool (77). The nonsynonymous (Ka) and synonymous (Ks) substitution rates and the Ka/Ks ratio of each orthologous pair were estimated with the codeml program in the PAML package (78) (Table S5). Protein regions with a minimum of 15 amino acid residues were considered for Ka and Ks estimation. Data sorting was performed with in-house C++ scripts, and the graph was generated in gnuplot version 5.2.
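As an illustration of the Ka/Ks idea used above, the sketch below implements a crude Nei-Gojobori-style count in Python (with Biopython's codon table). It is a toy approximation, without the multiple-hit corrections and maximum-likelihood machinery of codeml, which is what the authors actually used.

```python
# Toy Ka/Ks sketch: Nei-Gojobori-style counting on two aligned, gap-free CDS.
# Codons differing at more than one position are skipped, and no distance
# correction is applied, so this only approximates what codeml computes.
from Bio.Data import CodonTable

TABLE = CodonTable.unambiguous_dna_by_id[1]  # standard genetic code
BASES = "ACGT"

def translate(codon):
    return "*" if codon in TABLE.stop_codons else TABLE.forward_table[codon]

def syn_fraction(codon, pos):
    """Fraction of single-base changes at `pos` that leave the amino acid unchanged."""
    aa = translate(codon)
    syn = total = 0
    for b in BASES:
        if b == codon[pos]:
            continue
        mutant = codon[:pos] + b + codon[pos + 1:]
        total += 1
        syn += (translate(mutant) == aa)
    return syn / total

def ka_ks(cds1, cds2):
    """Crude Ka/Ks for two aligned, same-length coding sequences."""
    S = N = Sd = Nd = 0.0
    for i in range(0, len(cds1) - 2, 3):
        c1, c2 = cds1[i:i + 3], cds2[i:i + 3]
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) > 1:
            continue  # skip multi-hit codons in this toy version
        for p in range(3):  # expected synonymous/nonsynonymous sites
            f = syn_fraction(c1, p)
            S += f
            N += 1 - f
        if diffs:
            if translate(c1) == translate(c2):
                Sd += 1
            else:
                Nd += 1
    ks = Sd / S if S else float("nan")
    ka = Nd / N if N else float("nan")
    return ka / ks if ks else float("inf")

print(ka_ks("ATGGCTAAA", "ATGGCGAGA"))  # toy pair: one synonymous + one nonsynonymous change
```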
New systemic treatment paradigms in resectable non-small cell lung cancer and variations in patient access across Europe
Summary

The treatment landscape of resectable early-stage non-small cell lung cancer (NSCLC) is set to change significantly due to encouraging results from randomized trials evaluating neoadjuvant and adjuvant immunotherapy, as well as adjuvant targeted therapy. As of January 2024, marketing authorization has been granted for four new indications in Europe, and regulatory approvals for other study regimens are expected. Because cost-effectiveness and reimbursement criteria for novel treatments often differ between European countries, access to emerging developments may lead to inequalities due to variations in recommended and available lung cancer care throughout Europe. This Series paper (i) highlights the clinical studies reshaping the treatment landscape in resectable early-stage NSCLC, (ii) compares and contrasts approaches taken by the European Medicines Agency (EMA) for drug approval to that taken by the United States Food and Drug Administration (FDA), and (iii) evaluates the differences in access to emerging treatments from an availability perspective across European countries.
Introduction
Non-small cell lung cancer (NSCLC) is one of the most frequently diagnosed cancer types and remains a leading cause of cancer-related death.1 Recent systemic treatments, including immune checkpoint inhibitors (ICIs) and targeted therapies, have significantly improved survival and health-related quality of life (HRQOL) in metastatic NSCLC.2 For instance, the Netherlands Cancer Registry showed that 5-year survival rates rose from 12% (1995-2004) to 25% (2015-2021).3 The United States Food and Drug Administration (US FDA) and the EMA are central in regulating medical products in the US and the European Union (EU).11,12 Both engage in processes for drug approval, including pre-authorization, detailed assessment of clinical trial data, and post-marketing surveillance. However, they have distinct regulatory frameworks: the FDA's unified federal system ensures a streamlined process, while the EMA operates within the European Medicines Regulatory Network, a closely coordinated framework of national competent authorities in the Member States of the European Economic Area working together with the EMA and the European Commission (EC).
Substantial differences in healthcare systems and drug reimbursement across Europe may further amplify existing disparities in lung cancer care. This Series paper provides an overview of studies influencing the upcoming changes in resectable early-stage (stages I-IIIA) NSCLC, discusses the approach for drug approval by the EMA compared to that of the FDA, and evaluates access to innovative therapies from an availability perspective across EU countries.13
Background
For decades, early-stage NSCLC treatment remained largely unchanged. Operable stages I and II NSCLC, and selected IIIA cases, typically undergo an anatomical resection (segmentectomy, (bi)lobectomy, or pneumonectomy) with lymph node dissection.14 Stereotactic ablative radiotherapy (SABR) is an effective alternative when patients with stage I disease are unwilling or unable to undergo surgery, as shown in the revised STARS study.15 In case of positive surgical margins, postoperative radiotherapy is considered.14 For patients with resectable stage III NSCLC with PD-L1 expression ≥1% who cannot undergo surgery, the preferred treatment is definitive chemoradiotherapy followed by durvalumab.13 Despite curative-intent treatments, 5-year survival rates for early-stage NSCLC vary from 92% in stage IA to 36% in stage IIIA.16 This modest long-term survival is largely due to distant tumor relapse, which is reported to occur up to three times more frequently than local recurrence.17 This highlights the need for more effective systemic treatments.
Summary of landmark trials
Fig. 1 shows a timeline with relevant landmark trials that have impacted the systemic treatment of resectable NSCLC. Published phase II and III randomized trials and ongoing phase III randomized trials in resectable NSCLC are summarized in Tables 1 and 2, respectively.
Adjuvant targeted therapy
ADAURA is the first phase III trial to demonstrate improved overall survival (OS) in resectable NSCLC using targeted therapy.18,19 In this double-blind, phase III trial, 682 patients with completely resected epidermal growth factor receptor (EGFR exon 19 deletion or exon 21 L858R mutation) mutation-positive stage IB-IIIA NSCLC (according to the 7th edition of the Tumor, Node, Metastasis (TNM) classification system) were randomized to receive up to three years of adjuvant osimertinib (80 mg orally once daily), a third-generation EGFR tyrosine kinase inhibitor, or placebo. Adjuvant platinum-based chemotherapy was allowed. After a median follow-up of 22.1 months in the experimental arm and 14.9 months in the placebo arm, ADAURA met its primary endpoint with improved disease-free survival (DFS) in favor of the experimental arm in both the stage II-IIIA population (median not reached (NR) vs 19.6 months with placebo; hazard ratio (HR) 0.17, 99.06% confidence interval (CI) 0.11-0.26; p < 0.001) and the overall population (median NR vs 27.5 months with placebo; HR 0.20, 99.12% CI 0.14-0.30; p < 0.001). Interestingly, patients showed comparable benefits from osimertinib regardless of prior adjuvant chemotherapy. Moreover, fewer central nervous system (CNS) recurrences (HR 0.18, 95% CI 0.10-0.33) and maintained HRQOL were noted with osimertinib.20 Based on these results without mature OS data, osimertinib was approved by the FDA (December 2020) and EMA (May 2021) for patients with completely resected stage IB-IIIA EGFR mutation-positive NSCLC.7,21 The final OS analysis in mid-2023 reported a significant OS benefit, with 5-year OS rates of 85% vs 73% with placebo in the stage II-IIIA population (HR 0.49, 95.03% CI 0.33-0.73; p < 0.001), and, in the overall population, 88% vs 78% with placebo (HR 0.49, 95.03% CI 0.34-0.70; p < 0.001).19
The open-label, phase III ALINA trial investigated adjuvant alectinib, an anaplastic lymphoma kinase (ALK) inhibitor, in 257 patients with completely resected stage IB(≥4 cm)-IIIA NSCLC (7th edition) with an ALK alteration who were randomized to receive either adjuvant alectinib (600 mg orally twice daily) for up to 24 months, or up to four cycles of platinum-based chemotherapy.22 An improvement in investigator-assessed DFS was demonstrated favoring the alectinib arm in both the stage II-IIIA (median follow-up duration 27.8 months; median NR vs 44.4 months with chemotherapy; HR 0.24, 95% CI 0.13-0.45; p < 0.0001) and overall populations (median NR vs 41.3 months with chemotherapy; HR 0.24, 95% CI 0.13-0.43; p < 0.0001). CNS-DFS advantages in the overall population were also observed (HR 0.22, 95% CI 0.08-0.58). While OS data were immature, no new safety signals were identified. As of January 2024, adjuvant alectinib has not received EMA or FDA approval.
Adjuvant immunotherapy
Immunotherapy has revolutionized NSCLC treatment in both advanced and locally advanced stages and is now being explored in early stages. Immunotherapy offers better outcomes for patients, with fewer side effects than traditional chemotherapy.2 The EMA's 2015 approval of nivolumab, a monoclonal antibody against PD-1, for advanced NSCLC post-chemotherapy was a milestone, followed by various treatment approvals in advanced and locally advanced stages, including the anti-PD-L1 durvalumab post-chemoradiotherapy for unresectable stage III tumors.13,23 These advances were driven by discoveries in cancer immunology, notably PD-1 and CTLA-4, leading to the 2018 Nobel Prize in Medicine, awarded to Tasuku Honjo (PD-1) and James Allison (CTLA-4).24
Adjuvant immunotherapy shows promise in eradicating micrometastatic disease, reversing the immunosuppressive post-surgical microenvironment, and targeting circulating tumor cells.25 It may enhance the elimination of minimal residual disease, particularly when combined with adjuvant chemotherapy, which alters neoantigen exposure patterns and may augment ICI efficacy. Adjuvant atezolizumab, a monoclonal antibody targeting PD-L1, was investigated in IMpower010 and is now the first EU-approved immunotherapy for resectable NSCLC.26 In this open-label, phase III trial, 1005 patients with completely resected stage IB(≥4 cm)-IIIA NSCLC (7th edition) were randomized between atezolizumab 1200 mg every three weeks for 16 cycles or one year, or best supportive care (BSC). To be eligible, patients should have received at least one cycle of platinum-based adjuvant chemotherapy, and postoperative radiotherapy was not permitted. The primary endpoint of investigator-assessed DFS was evaluated first in the stage II-IIIA population with PD-L1 ≥1% tumors, followed by all patients with stage II-IIIA disease, and finally in the overall population. After a median follow-up of 32.2 months, the first DFS analysis showed improved DFS with atezolizumab in patients with stage II-IIIA tumors and PD-L1 expression ≥1% (median NR vs 35.3 months with BSC; HR 0.66, 95% CI 0.50-0.88; p = 0.0039) as well as in the stage II-IIIA population (median 42.3 vs 35.3 months with BSC; HR 0.79, 95% CI 0.64-0.96; p = 0.020) but not in the overall population. The most pronounced DFS benefit was observed in stage II-IIIA patients with PD-L1 expression ≥50% (HR 0.43, 95% CI 0.27-0.68). No DFS benefit was observed in the stage II-IIIA population with PD-L1 negative tumors, never-smokers, and those with EGFR or ALK alterations. Based on these DFS results, the FDA approved adjuvant
atezolizumab in October 2021 for completely resected stage II-IIIA EGFR wild-type and ALK-negative NSCLC with PD-L1 expression ≥1%, following completion of adjuvant platinum-based chemotherapy.27 In June 2022, the EMA approved atezolizumab only for those patients with PD-L1 ≥50% tumors, after a blinded independent central review (BICR).8,28 This step was crucial as the EMA did not accept the open-label design and investigator-assessed DFS of the study. The BICR specifically confirmed the DFS benefit in this group. Also, the first prespecified interim OS analysis showed OS benefits with atezolizumab in patients with PD-L1 ≥50% tumors and without EGFR or ALK alterations (5-year OS 85% vs 68% with BSC; HR 0.42, 95% CI 0.23-0.78).29
The triple-blind, phase III PEARLS/KEYNOTE-091 trial randomly assigned 1177 patients with stage IB(≥4 cm)-IIIA NSCLC (7th edition) to receive either the anti-PD-1 pembrolizumab 200 mg or placebo after complete surgical resection, both administered every three weeks for up to 18 cycles.30 Unlike IMpower010, adjuvant chemotherapy was optional but encouraged for stage II-IIIA patients. Study co-primary endpoints were DFS in the overall population and in those with PD-L1 ≥50% tumors. The planned second interim analysis was driven by the DFS events that occurred in the latter group. After a median follow-up of 35.6 months, improved DFS was observed in the overall population with pembrolizumab (median 53.6 vs 42.0 months with placebo; HR 0.76, 95% CI 0.63-0.91; p = 0.0014) but, interestingly, not in the PD-L1 ≥50% population (both groups, median NR; HR 0.82, 95% CI 0.57-1.18). Notably, patients with PD-L1 1%-49% tumors showed improved DFS (HR 0.67, 95% CI 0.48-0.92). Contrary to IMpower010, never-smokers and patients with EGFR-mutation positive tumors did seem to benefit.31 However, these subgroup analyses should be interpreted with caution due to their exploratory nature and differences in trial design. Other factors that may have contributed to the differences in efficacy results are the overperformance of the placebo group with PD-L1 ≥50% tumors in PEARLS/KEYNOTE-091 and the differences in enrolled populations and PD-L1 assays used. Both trials reported comparable safety profiles, with around 20% of patients experiencing grade ≥3 treatment-related adverse events.26,30 Adjuvant pembrolizumab treatment was completed by 52% of patients, in contrast to a completion rate of 65% for adjuvant atezolizumab. As of January 2024, OS data are pending. The FDA approved adjuvant pembrolizumab in January 2023 for stage IB(≥4 cm)-IIIA NSCLC regardless of PD-L1 expression following complete surgical resection and adjuvant platinum-based chemotherapy.32 In October 2023, the EMA granted a similar approval.10
Neoadjuvant immunotherapy
Neoadjuvant therapy has several advantages over adjuvant treatment, such as more reliable treatment delivery and early eradication of micrometastatic disease. This approach also allows for direct assessment of treatment effects and the identification of potential biomarkers of efficacy in resection specimens, supporting the development of predictive models for ICI efficacy. The open-label, phase III CheckMate 816 trial randomized 358 patients with resectable stage IB-IIIA NSCLC (7th edition) without known EGFR or ALK alterations to receive either neoadjuvant nivolumab 360 mg every three weeks for three cycles with platinum-based chemotherapy or chemotherapy alone.39 No adjuvant immunotherapy was planned. In the unplanned interim analysis and after a median follow-up of 29.5 months, an improvement in EFS favoring the combination arm was observed (median 31.6 vs 20.8 months with chemotherapy alone; HR 0.63, 97.38% CI 0.43-0.91; p = 0.005). EFS was better across most subgroups, especially in patients with stage IIIA disease, non-squamous histology, and those with PD-L1 ≥1% tumors (HR 0.41, 95% CI 0.24-0.70). No benefit was observed in the PD-L1 negative group (HR 0.85, 95% CI 0.54-1.32). The other co-primary endpoint, pathological complete response (pCR: defined as 0% residual viable tumor cells in either the primary tumor or the sampled lymph nodes), also favored the combination arm (24% vs 2.2%; p < 0.001). The major pathological response rate (MPR: defined as ≤10% residual viable tumor cells in the resection specimen) was higher as well (36.9% vs 8.9%). These pathological responses are generally in line with the pCR and MPR rates of 20-25% and 30-40%, respectively, observed in the perioperative immunotherapy trials (Table 1). The feasibility of surgery was not compromised, with a comparable safety profile and no detrimental impact on HRQOL.40 A 3-year trial update indicated maintained EFS benefits and a promising OS trend with nivolumab (OS HR 0.62, 99.34% CI 0.36-1.05).41,42 At three years, 78% of patients were alive in the combination arm, compared to 64% in the chemotherapy-alone arm. Based on the first EFS results, the FDA approved neoadjuvant nivolumab with platinum-doublet chemotherapy for resectable stage IB(≥4 cm)-IIIA NSCLC regardless of PD-L1 expression in March 2022, a first in the neoadjuvant ICI setting.43 In June 2023, the EMA approved this regimen for resectable stage II-IIIA NSCLC with PD-L1 expression ≥1%.9,44
The perioperative approach
Phase III trials have explored the addition of immunotherapy in both the neoadjuvant and adjuvant treatment phases, a so-called perioperative strategy. This approach combines ICIs with chemotherapy pre-surgery to maximize tumor reduction and systemic control, followed by ICI monotherapy post-surgery to maintain surgical outcomes and target residual micrometastatic disease. Four phase III trials have reported results with this strategy: AEGEAN, Neotorch, CheckMate 77T, and KEYNOTE-671.45,46,47,48 Notably, KEYNOTE-671 is the only one among these to have investigated OS as a primary endpoint.
In AEGEAN, 802 patients with resectable stage II-IIIB (N2 node stage) NSCLC (8th edition) were randomized to receive either neoadjuvant durvalumab or placebo with platinum-based chemotherapy for four cycles every three weeks, followed by adjuvant durvalumab or placebo for 12 cycles.45 Patients planned for a pneumonectomy were excluded, as were patients staged with T4 tumors for any reason other than size (>7 cm). The trial met both of its co-primary endpoints with improved EFS (median follow-up duration 11.7 months among patients without an event; median NR vs 25.9 months with placebo; HR 0.68, 95% CI 0.53-0.88; p = 0.004) and pCR favoring the durvalumab arm. The improvement in EFS was seen across disease stages, PD-L1 expression levels, and types of platinum agents used.
Neotorch aimed to randomize 500 patients with resectable stage II-III NSCLC (8th edition) to receive either neoadjuvant anti-PD-1 toripalimab or placebo with platinum-based chemotherapy for three cycles and one cycle postoperatively, followed by adjuvant toripalimab or placebo monotherapy for 13 cycles.46 In the first planned interim analysis of stage III patients, 404 patients were included. With a median follow-up of 18.3 months, an improvement in EFS with toripalimab was observed (median NR vs 15.1 months with placebo; HR 0.40, 95% CI 0.28-0.57; p < 0.0001), with a consistent effect across subgroups. Both MPR, the co-primary endpoint, and pCR rates were higher with toripalimab. Immature OS results also indicated a trend favoring toripalimab. Since this study was conducted in a Chinese population, it could impact potential approval by the EMA or FDA. For example, a typical rule of thumb cited by experts for a treatment to be considered for FDA approval is that at least 20% of the supporting clinical data should be from US-based patients.49 However, the FDA has permitted acceptance of clinical studies based solely on high-quality foreign data before and has regulations under which marketing approval may be granted.
CheckMate 77T enrolled 461 patients with resectable stage II-IIIB (N2) NSCLC (8th edition) who received either neoadjuvant nivolumab plus platinum-based chemotherapy followed by adjuvant nivolumab, or chemotherapy plus placebo followed by adjuvant placebo, for one year.47 In the first prespecified interim analysis, better EFS was demonstrated in favor of the nivolumab arm (median follow-up duration 25.4 months; median NR vs 18.4 months with placebo; HR 0.58, 97.36% CI 0.42-0.81; p = 0.00025). Additionally, pCR and MPR rates were higher with nivolumab.
KEYNOTE-671 compared four cycles of neoadjuvant pembrolizumab with placebo, both administered with platinum-based chemotherapy, followed by postoperative pembrolizumab or placebo every three weeks for up to 13 cycles in 797 patients with resectable stage II-IIIB (N2) NSCLC (8th edition).48 At the second planned interim analysis, the pembrolizumab group showed a maintained EFS benefit (median follow-up duration 36.6 months; median 47.2 vs 18.3 months with placebo; HR 0.59, 95% CI 0.48-0.72) and a significant OS improvement (median NR vs 52.4 months with placebo; HR 0.72, 95% CI 0.56-0.93; p = 0.00517). These results, consistent across most subgroups and without new safety concerns, led to FDA approval in October 2023 of neoadjuvant pembrolizumab with platinum-based chemotherapy followed by adjuvant pembrolizumab for resectable NSCLC (tumors ≥4 cm or node positive).50 As of January 2024, KEYNOTE-671's regimen is the only perioperative treatment with FDA approval, with EMA approval pending.
Immunotherapy and SABR
The randomized phase II I-SABR trial demonstrated promising EFS outcomes and manageable toxicity when four cycles of nivolumab were added to SABR, suggesting a new combined treatment strategy for medically inoperable patients with stage I or II NSCLC (8th edition).51 Interestingly, 20% of the study population were potentially operable, indicating a need for further exploration in future trials.
Identifying study endpoints
In adjuvant and neoadjuvant studies, OS is widely recognized as the most reliable and valuable parameter for drug approvals and guideline recommendations.52,53 However, demonstrating an OS benefit typically requires prolonged follow-up and large patient numbers. Consequently, many recent studies use surrogate endpoints, often lacking statistical power to show significant OS differences. Based on surrogate endpoints, accelerated approval for oncology drugs can be granted, although these approvals may be temporary while awaiting mature OS data.52,62 DFS and EFS are often used as such endpoints. DFS is defined as the time from randomization until disease recurrence or death from any cause and is typically used in adjuvant trials. This measure is usually applied to patients who have undergone surgery and are considered fit postoperatively. By contrast, EFS is used in neoadjuvant trials and is defined as the time from randomization to progression of disease that precludes surgery, disease recurrence after surgery, or death from any cause. The EFS population mainly consists of preoperative patients who are fit but still require surgery. These patients have a higher chance of experiencing disease worsening before or because of surgery. Therefore, it is important to note that the thresholds for determining significant benefits in EFS should not be directly compared with those used for DFS, due to the different patient populations and circumstances in which these measures are used. Given these differences, it may even be argued that EFS cutoffs should be less stringent compared to those for DFS, as the EFS population faces a higher likelihood of adverse events during the preoperative and surgical periods. Table 3 summarizes the merits and limitations of selected endpoints.52,63,64
pCR and MPR can be potential surrogate endpoints in neoadjuvant trials.65 The FDA and EMA take a cautious, context-specific stance on using pCR and MPR as endpoints in oncology trials. For instance, the EMA accepts approval based on pCR in high-risk early-stage breast cancer when part of a well-established regimen with a significant pCR increase and minimal toxicity.66 This is due to the longer time needed for DFS data to mature in breast cancer. However, the shorter DFS in lung cancer reduces the urgency to use pCR as a surrogate endpoint. Neither the FDA nor the EMA has fully endorsed pCR and MPR as definitive endpoints for drug approval in NSCLC. The agencies typically require more data demonstrating a direct correlation between these endpoints and long-term patient outcomes before considering them for approval.52 Therefore, while pCR and MPR are valuable for assessing immediate treatment response, their role in predicting long-term benefits in early-stage NSCLC is still being evaluated.
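To make the endpoint definitions above concrete, the sketch below computes DFS and EFS times from hypothetical per-patient event records; the dates, field layout and helper function are illustrative assumptions rather than data or code from any of the trials discussed.

```python
from datetime import date

def time_to_event(randomization, events):
    """Return (months, event_observed) for the earliest qualifying event.

    events: list of dates (or None where the event never occurred).
    """
    observed = [d for d in events if d is not None]
    if not observed:
        return None, False  # censored: no qualifying event observed
    first = min(observed)
    return (first - randomization).days / 30.44, True  # mean month length

# Hypothetical adjuvant-trial patient: DFS counts recurrence or death.
dfs, _ = time_to_event(date(2020, 1, 1),
                       [date(2021, 6, 1),   # disease recurrence
                        None])              # death (not observed)

# Hypothetical neoadjuvant-trial patient: EFS additionally counts
# progression that precludes surgery.
efs, _ = time_to_event(date(2020, 1, 1),
                       [date(2020, 3, 15),  # progression precluding surgery
                        None,               # recurrence after surgery
                        None])              # death

print(f"DFS: {dfs:.1f} months, EFS: {efs:.1f} months")
```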
To determine the clinical significance of a surrogate endpoint, it is essential to establish appropriate thresholds. In the absence of conclusive endpoints such as OS, or when awaiting OS data to mature, surrogate endpoints could be considered using thresholds that are expected to align with the magnitude of benefit expected from the conclusive endpoints. Such thresholds often come about through consensus. Tools like the European Society for Medical Oncology-Magnitude of Clinical Benefit Scale (ESMO-MCBS) provide a more objective approach for evaluating clinical benefit, albeit using arbitrary rules.53 For instance, on the ESMO-MCBS scale, a 95% CI lower limit of the DFS hazard ratio below 0.65 is scored as grade A, and therefore deemed most beneficial, as was the case in ADAURA, IMpower010, PEARLS/KEYNOTE-091, and CheckMate 816.
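As an illustration, the fragment below encodes only the single ESMO-MCBS rule quoted above (a CI lower limit of the DFS hazard ratio below 0.65 scores grade A); the full scale involves many further criteria, so this should be read as a sketch of how such threshold rules operate, not as an implementation of the complete instrument.

```python
def dfs_esmo_grade(hr_ci_lower_limit):
    """Apply the single grade-A rule for DFS benefit quoted in the text.

    The complete ESMO-MCBS algorithm has additional branches and
    annotations that are deliberately omitted here.
    """
    return "A" if hr_ci_lower_limit < 0.65 else "below A (further criteria apply)"

# Lower CI limits of the hazard ratios reported in the text; note the
# reported intervals are not all 95% CIs (ADAURA used 99.06%,
# CheckMate 816 used 97.38%).
for trial, lower in [("ADAURA (stage II-IIIA)", 0.11),
                     ("IMpower010 (PD-L1 >=1%)", 0.50),
                     ("PEARLS/KEYNOTE-091 (overall)", 0.63),
                     ("CheckMate 816", 0.43)]:
    print(f"{trial}: grade {dfs_esmo_grade(lower)}")
```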
The availability and accessibility of new therapies in Europe
Three milestones must be achieved before patients can gain access to novel adjuvant and neoadjuvant immunotherapies and targeted therapies: (1) marketing authorization, (2) national reimbursement, and (3) post-reimbursement access (Fig. 2a).67
Marketing authorization
After the marketing authorization application to the EMA, the EMA performs a single EU-wide assessment to evaluate the safety, efficacy, and quality of a product and provides a recommendation to the EC on whether to grant a marketing authorization.12 Once granted by the EC, the marketing authorization is automatically valid in all EU Member States.
Discrepancies between EMA and FDA
Studies have highlighted discrepancies between the EMA and FDA in oncology drug approvals. In general, the FDA tends to grant approvals earlier than the EMA. Uyl-de Groot and colleagues found that drugs take an average of 403 days (range 17-1187 days) to reach the EU market, 242 days later than the US.68 Also, while half of the drug approvals and label wordings are similar between the agencies, about 20% are approved by only one, and 28% have different labeling.69 Often, the second agency to review a drug chooses a more restrictive indication. Furthermore, when comparing the special regulatory pathways of both agencies, such as the FDA's Accelerated Approval and the EMA's Conditional Marketing Authorization, there are frequent discrepancies in decision-making and pathway usage, despite using the same pivotal trials.70 Both agencies often approve drugs amidst significant uncertainty, underscoring the need for further post-marketing studies. The delay in fulfilling post-marketing obligations raises concerns about approval standards.
Recent experiences in the field of resectable NSCLC have also shown delays in drug approval by the EMA. For example, adjuvant atezolizumab received EMA approval 235 days after the FDA, totaling 344 days from application.8,27,28 Adjuvant osimertinib was approved 154 days later in Europe, taking 268 days.7,21,71 The FDA approved neoadjuvant nivolumab three days before application in the EU, with the EMA taking 476 days to approve.9,43,72 The EMA recently approved adjuvant pembrolizumab 259 days after the FDA, 554 days post-application.10,32,73 Additionally, there are notable differences in labeling. For instance, the FDA approved adjuvant atezolizumab for stage II-IIIA NSCLC with PD-L1 expression ≥1%, whereas the EMA approved it only for patients with PD-L1 ≥50% tumors. For neoadjuvant chemo-nivolumab, the EMA's approval is for stage II-IIIA NSCLC with PD-L1 expression ≥1%, contrasting with the FDA's approval for stage IB-IIIA regardless of PD-L1 expression.
Speeding up review times
Excluding clock stops (pauses in the review process), both the FDA and EMA have similar review durations.74 However, when these pauses are included, the FDA's process is notably faster. Over the last decade, both agencies have slightly reduced their mean review times, with the FDA often receiving new drug applications before the EMA. This trend is due to factors like international cooperation and initiatives like Project Orbis, aimed at expediting drug approvals. In April 2023, the EC proposed policy changes to reduce the EMA's review times, including pre-submission scientific support to applicants, a practice already implemented by the FDA.75 This policy change could reduce the review time gap between the two agencies.
Following Brexit, the Medicines and Healthcare products Regulatory Agency (MHRA) in the United Kingdom (UK) approved adjuvant osimertinib for EGFR-mutated resectable NSCLC in May 2021, the first authorization issued by the MHRA under Project Orbis.76 This project is a global collaborative review program of seven regulatory partners, including the UK and Switzerland. The initiative, led by the FDA, aims to accelerate patient access to new cancer drugs internationally through parallel submissions and reviews, while maintaining independent regulatory decisions by each partner.77,78 Under Project Orbis, the UK often approves drugs faster than the EMA, but typically after FDA approval.79 For instance, adjuvant atezolizumab and neoadjuvant nivolumab were approved 131 and 314 days earlier, respectively, than by the EMA.8,9,76 While the FDA's operations are partly supported by industry sources, this does not necessarily influence Orbis' operations.80 Each country independently reviews the FDA's dossiers, as exemplified by the different PD-L1 expression cutoffs for adjuvant atezolizumab approval: while the FDA set a cutoff at 1%, the UK MHRA and Swissmedic chose the higher cutoff of 50%. Aligning with the FDA's processes, Project Orbis facilitates rapid approval of innovative medicines, is particularly beneficial for smaller regulatory agencies, and represents a concerted effort to reduce global inequalities in cancer treatment access, although its primary impact is currently seen in high-income countries alone.81
National reimbursement differences per country
In the EU, post-marketing authorization involves several steps before patients can access novel drugs, including regulatory procedures, price regulations, and health technology assessments (HTA) to decide on reimbursement through national health services or via insurance schemes. While the EMA grants approval at the EU level, individual Member States control the coverage and reimbursement of EMA-approved drugs. In contrast, the US uses a more centralized system through the Centers for Medicare & Medicaid Services, which incorporates reimbursement for FDA-approved therapies into the National and Local Coverage Determinations.82 Although this process can be lengthy, it ensures consistent reimbursement policies across the US.
Reimbursement delays
In Europe, national reimbursement decisions involve a multiple-stage decision-making process, and authorities at various levels, including national, regional, and local hospital settings, may employ different processes and requirements, leading to delays and inequalities in patient access.67 The 2021 'Patient W.A.I.T. indicator' survey revealed an average reimbursement time of 545 days for novel oncology therapies in Europe, ranging from 100 days in Germany to over 964 days in Romania (Fig. 2b).83 These delays are attributed to factors like late submissions, non-adherence to maximum timelines, and complex decision-making layers.67 Additionally, varying reimbursement criteria, unclear national requirements, and differences in value and price assessments contribute to these delays.
Health technology assessment
EU HTA bodies evaluate clinical trial evidence to determine the acceptability of new treatments, but their criteria may vary greatly (Fig. 2c).67 Notably, there is a lack of consensus on surrogate endpoints, which are crucial in most adjuvant and neoadjuvant studies. For instance, surrogate endpoints are accepted in Poland, often in Sweden, but infrequently in Portugal. England and Italy make decisions on a case-by-case basis. The absence of mature OS data complicates predicting long-term survival benefits, crucial for assessing cost-effectiveness. This leads to hesitancy in adopting therapies based on surrogate endpoints, causing delays in access times and regional inequalities. For example, France did not reimburse adjuvant atezolizumab, while Germany accepted it for all patients meeting the EMA label, and the Netherlands partially, for only a subgroup of patients with non-N2 stage III disease or those with an unforeseen postoperative N2 (Dutch Medicines Z-index, G-standaard February 2023).84,85 To address these discrepancies, the new Regulation (EU) 2021/2282 on Health Technology Assessment (HTAR), effective from January 2025, aims to harmonize HTA processes by performing an EU clinical assessment within a permanent framework for joint work among Member States, to remove the fragmentation of the internal market, reduce redundant assessments, and enhance transparency in evaluations, potentially speeding up patient access to new treatments.86
Post-reimbursement access
Post-reimbursement patient access to new therapies varies considerably across EU countries.67 Despite a relatively short time to reimbursement (234 days on average), only 20% of patients in the Netherlands receive a novel cancer therapy within 12 months after national reimbursement. Poland has a longer delay (891 days) before reimbursement, and 24% of patients have access to novel cancer therapies within 12 months following a definitive decision. France, with a delay of 579 days for reimbursement after EMA approval, achieves an 80% access rate within the first year. Germany stands out for its short delay to market access (134 days), facilitated by a temporary period of free pricing for EMA-authorized therapies, with a patient access rate of 50%.
These disparities arise from different healthcare decision-making approaches in Europe.87 Some countries, like Iceland and Croatia, centralize pricing processes and budget allocation at the national level. Others, like Italy, have mixed national and regional systems with budget allocations managed by healthcare insurers or at the hospital level, leading to significant variability in treatment accessibility and timeliness. In Italy, the time between national authorization and regional availability of a drug can range from 29 to 293 days due to the need for the drug to pass through 20 distinct local processes across Italy's regions, from Lombardy in the north to Sicily in the south, even after a national price is set. Delays are also common between the reimbursement decision and its official publication in national gazettes, as seen in countries like Belgium, Italy, and Hungary. In Bulgaria, the reimbursement list is updated annually, potentially delaying access by up to a year. Additionally, outdated clinical guidelines can lead to delays in incorporating new therapies into treatment pathways and hinder the adoption of new treatments by prescribers.
Budgetary impact on healthcare systems
EU healthcare spending has risen notably over the past decade.88 For example, the Netherlands saw a rise from €56 billion to €79.1 billion in 2020.89 Despite a slight decrease in the total care budget from 8.9% to 8.3%, medicine spending increased to €6.6 billion (excluding pharmacy fees). The projected direct mean costs of the new adjuvant and neoadjuvant treatments for Dutch patients diagnosed with stage IB-IIIA resectable NSCLC over one year could range from €39.9 million to €57.1 million (Table 4; Supplementary Tables S1-S3). These costs come on top of the current direct costs. For example, in Italy, the current average direct costs per NSCLC patient in the first year post-diagnosis in stages I, II, and III were €16,291, €19,530, and €21,938, respectively.90 Surgery seems to be the primary driver of costs in stage I (58.9%), decreasing to 45.9% and 15.0% in stage II and stage III, respectively.91 In France, the average costs of surgery are €9474 for video-assisted thoracoscopic surgery and €10,418 for thoracotomy.92 These new adjuvant and neoadjuvant therapies, despite their potential DFS or EFS benefits and savings from reducing relapses, will significantly raise healthcare costs, considering not only their direct medicine costs but also indirect costs such as molecular testing, day treatment units, staff, and general healthcare expenses.
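A minimal sketch of how such budget-impact projections can be assembled is shown below; every number in it is a placeholder, not a list price, incidence figure or uptake assumption from the paper or its Table 4, and scaling the maximum cycle count by the completion rate is a simplification of the trial-based completion scenario described above.

```python
def annual_drug_cost(eligible_patients, uptake, price_per_cycle_eur,
                     max_cycles, completion_rate):
    """Direct medicine cost of one regimen over one year.

    completion_rate approximates the mean fraction of the maximum
    cycles actually delivered (a simplification).
    """
    mean_cycles = max_cycles * completion_rate
    return eligible_patients * uptake * price_per_cycle_eur * mean_cycles

# Hypothetical adjuvant ICI scenario with placeholder inputs.
cost = annual_drug_cost(eligible_patients=1200, uptake=0.8,
                        price_per_cycle_eur=5000, max_cycles=16,
                        completion_rate=0.65)
print(f"Projected direct cost: EUR {cost / 1e6:.1f} million")
```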
The financial impact of incorporating these treatments varies across countries, reflecting their GDP allocations and healthcare policies. In resectable NSCLC, for example, where the conclusive benefits of adjuvant immunotherapy following neoadjuvant chemoimmunotherapy are not fully established, particularly without mature survival data, countries must balance innovative care with budgetary constraints. Ultimately, the integration of these therapies depends on each country's financial capacity and healthcare approach, especially when the clinical value of these treatments is yet to be fully recognized.
Discussion
This Series paper examines the evolving neoadjuvant and adjuvant treatment approaches for resectable early-stage NSCLC, focusing on key phase III study findings and the journey of a treatment from its initial phase III results to its availability for patients. Our paper identifies disparities in patient care across EU countries. The accompanying Viewpoint on resectable NSCLC delves deeper into the challenges and unanswered questions presented by current studies and discusses the necessary measures to tackle issues of access inequalities from a clinician's perspective.93 The path of novel therapies from development to patient access involves several phases, each with unique challenges that need to be addressed.
Marketing authorization
During the marketing authorization phase, lengthy delays in the regulatory review and approval processes are a significant issue.87 Improving processes and reducing current timelines during this phase are crucial. Additionally, in certain situations, early access to life-saving medicines prior to formal approval is critical. Therefore, expanding compassionate use programs and early access schemes is vital, enabling patients to access essential treatments before they receive official approval.
Value assessment procedures
Value assessment procedures during HTA may suffer from variable and misaligned evidence requirements, leading to inefficiencies and delays in accessing new treatments.87 Harmonizing value assessment frameworks through initiatives such as the upcoming HTAR will establish uniform standards, vital to streamlining drug evaluations and approvals. Additionally, it is essential to robustly acknowledge drug differentiation in value assessments, ensuring that the unique benefits and distinct advantages of new, innovative therapies are fully recognized and factored into healthcare decisions.
Pricing and reimbursement procedures
In the context of initiating pricing and reimbursement procedures after marketing authorization, substantial delays often occur in starting price negotiations, hindering timely access to medications.87 Beginning these negotiations immediately post-approval and streamlining national decision-making processes for pricing and reimbursement are essential to improve timely patient access to new treatments.
Conclusions
Treatment outcomes for patients with resectable early-stage NSCLC may be expected to improve in the near future due to the approval of new neoadjuvant and adjuvant systemic options, as recent trials have demonstrated encouraging results. Patient access to these innovative therapies varies among countries, and this disparity is expected to worsen. Steps to reduce inequalities in patient access should have higher priority.
Declaration of interests
I.H., C.D., N.R., and I.B. declare no competing interests. C.A.G. has received grants or contracts from Boehringer Ingelheim, Astellas, Celgene, Sanofi, Janssen-Cilag, Bayer, Amgen, Genzyme, Merck, Gilead, Novartis, AstraZeneca, Roche, NIH, and ASCERTAIN; all payments were made to the institute, outside the submitted work. M.P. has received research funding from MSD, AstraZeneca, Roche, Boehringer Ingelheim, and Takeda, outside the submitted work; consulting fees from Bristol-Myers, Roche, MSD, AstraZeneca, Takeda, Eli Lilly, F Hoffmann-La Roche, Janssen, Pfizer, and Takeda, outside the submitted work; honoraria for lectures, presentations, speakers bureaus, manuscript writing or educational events from Bristol-Myers, Roche, MSD, AstraZeneca, Takeda, Eli Lilly, F Hoffmann-La Roche, Janssen, and Pfizer, outside the submitted work; and support for attending meetings and/or travel from AstraZeneca, Boehringer Ingelheim, Bristol-Myers, Eli Lilly, F Hoffmann-La Roche, Pierre Fabre Pharmaceuticals, and Takeda, outside the submitted work. A.L. has received grants for academic research from PharmaMar, Beigene, Roche, AstraZeneca, and Amgen, outside the submitted work. R.D. has received payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing or educational events from Roche, AstraZeneca, Takeda, Novartis, BMS, MSD, Pfizer, and Amgen, outside the submitted work; support for attending meetings and/or travel from Pfizer, outside the submitted work; drug samples from Novartis, outside the submitted work; and participated on a Data Safety Monitoring Board or Advisory Board of GlaxoSmithKline, outside the submitted work. C.P. has received payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing or educational events from AstraZeneca, outside the submitted work. M.D.M. has received institutional research funding from Tesaro/GlaxoSmithKline and institutional funding for work in clinical trials/contracted research from Beigene, Exelixis, MSD, Pfizer and Roche, outside the submitted work; and personal fees for consultancy or participation in advisory boards from AstraZeneca, Boehringer Ingelheim, Janssen, Merck Sharp & Dohme (MSD), Novartis, Pfizer, Roche, GlaxoSmithKline, Amgen, Merck, and Takeda, outside the submitted work. M.T. has received institutional research funding from AstraZeneca, BMS, MSD, Roche, and Takeda, outside the submitted work; payment or honoraria (personal) for speakers bureaus from Amgen, AstraZeneca, Beigene, BMS, Boehringer Ingelheim, Celgene, Chugai, Daiichi Sankyo, GlaxoSmithKline, Janssen Oncology, Lilly, MSD, Novartis, Pfizer, Roche, Sanofi, and Takeda, outside the submitted work; and support for attending meetings and/or travel from AstraZeneca, BMS, Boehringer Ingelheim, Daiichi Sankyo, Janssen Oncology, Lilly, Merck, MSD, Novartis, Pfizer, Roche, Sanofi, and Takeda, outside the submitted work. A.B.
has received consulting fees (personal) from AstraZeneca, BMS, MSD, and Roche, outside the submitted work; and payment or honoraria (personal) for lectures, presentations, speakers bureaus, manuscript writing or educational events from AstraZeneca, BMS, MSD, and Roche, outside the submitted work. S.P. has received consulting fees (personal) from Amgen, AstraZeneca, Bayer, Blueprint, BMS, Boehringer Ingelheim, Daiichi Sankyo, GSK, Guardant Health, Incyte, Janssen, Lilly, Merck Serono, MSD, Novartis, Roche, Takeda, Pfizer, Seattle Genetics, Turning Point Therapeutics, and EQRx, outside the submitted work; payment or honoraria (personal) for lectures, presentations, speakers bureaus, manuscript writing or educational events from AstraZeneca, Bayer, Guardant Health, Janssen, Merck Serono, Roche and Takeda, outside the submitted work; payment for expert testimony from Roche and Merck Serono, outside the submitted work; support for travel from Janssen and Roche, outside the submitted work; consulting fees for participation on an Advisory Board, outside the submitted work; and unpaid leadership roles in the British Thoracic Oncology Group, ALK Positive UK, Lung Cancer Europe, Ruth Strauss Foundation, Mesothelioma Applied Research Foundation, and ETOP-IBCSG Partners.
Search strategy and selection criteria
We searched PubMed and Embase up to August 17, 2023, using the search terms "non-small cell lung cancer", "adjuvant", "postoperative", "neoadjuvant", "preoperative", and "perioperative", with no restrictions by language. Only papers reporting on randomized phase II and III clinical trials with results were included. A search with similar terms was conducted on ClinicalTrials.gov and the WHO ICTRP up to August 17, 2023. Only ongoing randomized phase III clinical trials were included. Articles were also identified through searches of the authors' own files. The final reference list was generated on the basis of relevance to the scope of this Series paper.
Fig. 1: Timeline of key studies influencing the treatment of resectable NSCLC. Dashed red and blue lines indicate that marketing approval is pending. (#) Neoadjuvant treatment only; (*) perioperative treatment. Abbreviation: NSCLC, non-small cell lung cancer.
Fig. 2: Inequalities in patient access in Europe: (a) the path for novel therapies, (b) assessing delays, and (c) evidence requirements for patient access. (a) Three milestones must be achieved before patients have access to new therapies. (b) The median time to availability in days in European countries (2017-2020), assessed from the date of marketing authorization to, for most countries, the date of acceptance on the reimbursement list. (c) Different evidence requirements are used across Europe, delaying the time to medicine access (Figure 2a adapted and modified from the European Federation of Pharmaceutical Industries and Associations (EFPIA), figure 2b adapted and modified from IQVIA, and figure 2c adapted from EFPIA67,83). Abbreviations: EU, European Union; EMA, European Medicines Agency; HTA, health technology assessment; P&R, pricing and reimbursement; UK-ENG, United Kingdom-England; IT, Italy; NL, the Netherlands; PL, Poland; PT, Portugal; SE, Sweden; PFS, progression-free survival; QoL, quality of life.
Table 1: Randomized phase II and III clinical trials with reported results in resectable NSCLC.
Table 3: Summary of advantages and disadvantages of key clinical endpoints used in adjuvant and neoadjuvant trials.
Patients receive only one treatment. TNM staging is according to the seventh edition of the TNM classification system. Abbreviations: EMA, European Medicines Agency; NSCLC, non-small cell lung cancer. (a) Based on the list prices in the Netherlands including VAT. (b) Based on the proportion of patients who completed treatment in each trial. (c) Based on a 100% treatment completion rate. (d) Atezolizumab is not included in this scenario because, theoretically, nivolumab may also be indicated in the same population but is less expensive. (e) Atezolizumab is not included in this scenario because, theoretically, pembrolizumab may also be indicated in the same population but is more expensive.
Table 4: An overview of cost estimates of the novel adjuvant and neoadjuvant treatments based on the NSCLC incidence in the Netherlands.
Machine learning predicts cancer subtypes and progression from blood immune signatures
Clinical adoption of immune checkpoint inhibitors in cancer management has highlighted the interconnection between carcinogenesis and the immune system. Immune cells are integral to the tumour microenvironment and can influence the outcome of therapies. Better understanding of an individual's immune landscape may play an important role in treatment personalisation. Peripheral blood is a readily accessible source of information to study an individual's immune landscape compared to more complex and invasive tumour biopsies, and may hold immense diagnostic and prognostic potential. Identifying the critical components of these immune signatures in peripheral blood presents an attractive alternative to tumour biopsy-based immune phenotyping strategies. We used two syngeneic solid tumour models, a 4T1 breast cancer model and a CT26 colorectal cancer model, in a longitudinal study of the peripheral blood immune landscape. Our strategy combined two highly accessible approaches, blood leukocyte immune phenotyping and plasma soluble immune factor characterisation, to identify distinguishing immune signatures of the CT26 and 4T1 tumour models using machine learning. Myeloid cells, specifically neutrophils and PD-L1-expressing myeloid cells, were found to correlate with tumour size in both models. Elevated levels of G-CSF, IL-6 and CXCL13, and B cell counts were associated with 4T1 growth, whereas CCL17, CXCL10, total myeloid cells, CCL2, IL-10, CXCL1, and Ly6C-intermediate monocytes were associated with CT26 tumour development. Peripheral blood appears to be an accessible means to interrogate tumour-dependent changes to the host immune landscape, and to identify blood immune phenotypes for future treatment stratification.
Introduction
Carcinogenesis is a complex and multi-layered process involving various cellular and tissue networks. Although tumours can be recognised by the immune system, resulting in their elimination, tumours can nonetheless evade immune control and progress. This study aims to assess the utility of blood cellular and soluble immune signatures coupled with machine learning (ML) to predict cancer subtype and tumour progression in a tightly controlled preclinical environment. It provides evidence of potential clinical application of immune signature-based systemic immune phenotyping to improve overall cancer diagnosis and surveillance. The study also identifies key immune features for predictive modelling and possible candidate parameters for therapeutic intervention based on those models.
Methods
To monitor changes to the systemic cellular and soluble immune signatures of tumour-bearing animals, a small volume of blood was obtained from the animals' tail veins in a minimally invasive, feature-rich and high-throughput strategy suited to clinical translation. Multiparameter flow cytometry was used to generate cell-surface immune signatures, while soluble immune profiles were obtained from the plasma using a bead-based immunoassay established on the same basic principles as sandwich immunoassays. This approach allows relatively high-throughput generation of data and was coupled with statistical modelling to make predictions and inferences about tumour outcomes and biology. Predictive modelling and feature ranking were performed using Random Forest models, in conjunction with SHapley Additive exPlanations and correlation matrices, to make inferences about the underlying immune biology of the tumour models. This relatively simple strategy successfully generated reasonably accurate models that are able to (i) confirm the presence of a tumour, (ii) differentiate between tumour subtypes and (iii) predict current and future tumour burden, and highlighted that both tumour models generate unique blood immune signatures.
Animals
Female BALB/c mice aged between 6-10 weeks sourced from the Australian Phenomics Facility (ANU) were used throughout the study. Animals were fed ad libitum, housed in a specific-pathogen-free environment and used under strict adherence to protocols approved by the institutional Animal Experimentation Ethics Committee (AEEC), ANU, under protocols A2017/43 and A2020/39. At experimental end points, animals were euthanised by cervical dislocation according to AEEC-approved procedures.
Cell culture
4T1 and CT26 cells were detached using 0.05% (v/v) EDTA solution (15400054, ThermoFisher Scientific), then passaged and maintained at up to 70-80% confluency.
Tumour establishment
Tumour cells were injected at 1 x 10^5 cells in 50 μL of sterile normal saline solution subcutaneously in the right-hind flank of mice randomised across several housing cages. Fur around the injection site was removed by clippers prior to tumour inoculation. Tumours were left to grow for up to 14 days, and monitored daily to ensure wellbeing was maintained. In 21 of the 98 4T1-bearing mice, a single dose of the Src inhibitor eCF506 (1914078-41-3, Sunshine Chemical) at 0.1 (eC100), 1 (eC1000), or 10 (eC10000) mg/kg was administered i.p. 7 days post-tumour establishment; this appeared to have little, if any, impact on the parameters assessed in the study, so these mice were included to increase sample size (S1 File). At end-point, the mice were humanely sacrificed by cervical dislocation, and their tumours excised and weighed.
Blood collection
At 7/8 (referred to as day 7) and 14 days post-tumour establishment, mice were briefly heated (~4 minutes) under a lamp to promote vasodilation and placed in a restraint; their tail vein was punctured with a 29G needle, and a 20 μL sample of blood collected into 4 μL of citrate-dextrose solution (ACD, Sigma) anticoagulant. A 5 μL sample of this blood was immediately used for antibody labelling and flow cytometry. The remaining blood was centrifuged at 16,000 x g for 10 minutes and 7 μL of plasma collected and stored in a sealed 96-well polypropylene microplate (249943, ThermoFisher Scientific) at -20˚C for future cytokine and chemokine measurements using the LEGENDplex assays.
Immunophenotyping of blood leukocytes by flow cytometry
The 5 μL blood samples for cellular analysis were initially incubated for 10 minutes on ice in wells of a v-bottom 96-well microplate with 25 μL of 5 mg/mL TruStain FcX™ (anti-mouse CD16/32) antibody (101320, Biolegend) diluted in 1x RBC BD Pharm Lyse lysing buffer (555899, BD Bioscience). Samples were then incubated with 25 μL of 1x RBC BD Pharm Lyse containing the fluorescent antibodies listed in S1 Table for 30 minutes on ice in the dark. In addition, 5000 Flow-Count Fluorospheres (7547053, Beckman Coulter) were spiked into each sample with the fluorescent antibodies to allow enumeration of total cells per sample. Cells were then washed twice by resuspension in a total of 200 μL of PBS containing 5 mM EDTA, sedimentation by centrifugation at 300 x g for 5 minutes, and flicking off the supernatant. Samples were then resuspended in 50 μL of PBS containing 5 mM EDTA, 1% BSA (w/v) and 1 μg/mL of the dead-cell dye Hoechst 33258, ready for flow cytometry.
LEGENDplex assay
Frozen plasma samples were thawed on ice, then assayed using the Macrophage/microglial (Mac/Mic) 13-plex LEGENDplex kit and the Proinflammatory (Proinflam) 13-plex LEGENDplex kit (740451 and 780846, Biolegend). Assay methods were as described by the manufacturer, except that the assay was scaled down to use 6 μL of sample/standards for each kit, as follows. Seven μL of each plasma sample was diluted in 7 μL of kit assay buffer, and 6 μL of this mix (or 6 μL of pre-titrated kit standard) was added to 12 μL of kit capture beads (pre-diluted 1:1 (v/v) with assay buffer) in a v-bottom 96-well microplate and incubated with shaking for 2 hours. Beads were then pelleted at 250 x g for 5 minutes and the supernatant flicked off. Beads were washed with 50 μL of kit wash buffer, then pelleted and the supernatant removed as above. Twelve μL of kit biotinylated detection antibodies (pre-diluted 1:1 (v/v) in assay buffer) were then added to the beads, the beads resuspended by pipetting, and the mixture incubated with shaking for 1 hour at room temperature. Six μL of kit streptavidin-PE was then added to the mixture, which was incubated with shaking for a further 30 minutes. Beads were then pelleted and washed as described above and resuspended in 40 μL of 1x kit wash buffer, ready for flow cytometry.
Flow cytometry
Flow cytometry was performed on a BD LSRII (BD Bioscience) flow cytometer with FACSDiva, with quality assurance performed before each experimental run using BD FACSDiva Cytometer Setup and Tracking (CS&T) beads (655051, BD Bioscience). Application Settings were applied to standardise fluorescence intensity readings between experiments, and these were monitored using Sphero™ 8-peak Rainbow Fluorescence beads (110620, BD Bioscience). Voltages were initially set up using unlabelled RBC-lysed blood leukocytes for cellular analysis and LEGENDplex Raw Setup beads (as described by the manufacturer). BD CompBeads (552843, BD Bioscience) were labelled with selected antibodies (S1 Table) as described by the manufacturer and used as compensation controls for cellular analysis. Cell samples were acquired until a total of 2000 Flow-Count Fluorosphere beads were collected, based on side scatter (log) and forward scatter (linear) plot gating. LEGENDplex beads were acquired to a total of 4000 beads. Raw Flow Cytometry Standard (FCS) files of the data are available upon request at the ANU DATA COMMONS repository (https://dx.doi.org/10.25911/6153a8ab5747c).
Flow cytometry analysis
Blood cells and LEGENDplex beads were analysed using FlowJo v10 software (BD Bioscience). A combination of manual gating and unsupervised Fast Interpolation-based t-distributed Stochastic Neighbour Embedding (FIt-SNE) analysis was used to delineate leukocyte populations, which were then named based on this analysis (S1 Fig). LEGENDplex beads were gated for each analyte as described in S2 Fig and the median PE fluorescence intensity generated for each bead analyte. Data were then normalised as described below for analysis.
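As a rough stand-in for this embedding step, the sketch below applies scikit-learn's t-SNE to a synthetic events-by-markers matrix; the study itself used FIt-SNE within FlowJo, and the event count, marker number and parameters here are assumptions for illustration only.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical matrix of compensated, transformed fluorescence intensities:
# 5000 cells x 17 surface markers (matching the 17 mAbs used in the study).
rng = np.random.default_rng(0)
events = rng.normal(size=(5000, 17))

embedding = TSNE(n_components=2, perplexity=30,
                 init="pca", random_state=0).fit_transform(events)

# The resulting 2-D map can be overlaid with manual gates to cross-check
# population boundaries, analogous to the gating cross-check described above.
print(embedding.shape)  # (5000, 2)
```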
Data normalisation and processing
To reduce the influence of inter-experimental variability on conclusions, data were normalised at several levels. Firstly, cell numbers in each flow cytometry acquisition were normalised to the counting beads spiked into the sample, with each sample normalised to 2000 counting beads (a fifth of the spike load), to give the number of cells in ~2 μL of blood ("counting bead normalised" values). Secondly, these normalised counts were normalised to the mean counts from the blood of non-tumour-bearing control animals within each experiment. These "nil normalised" values were used in machine learning pipelines. To get "normalised cell counts" per 2 μL of blood, as an estimate of the overall cells across the groups, the "nil normalised" values were multiplied by the overall mean of the "bead normalised cell count" from all non-tumour-bearing animals for each feature.
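The two normalisation steps can be expressed compactly in code. The sketch below, in Python with pandas, assumes a hypothetical per-experiment table with one row per sample; it is not the pipeline actually used (the study worked with FlowJo exports and Orange 3).

```python
import pandas as pd

BEADS_TARGET = 2000  # acquisition stopped at 2000 counting beads

def bead_normalise(cell_events, bead_events):
    """Scale raw cell events to a constant 2000-bead denominator, giving
    counts per ~2 uL of blood (a fifth of the 5000-bead spike)."""
    return cell_events * BEADS_TARGET / bead_events

def nil_normalise(df, group_col="group", nil_label="Nil"):
    """Divide each numeric feature by the mean of tumour-free (Nil)
    controls; applied within each experiment, this removes
    inter-experimental shifts as described above."""
    nil_means = df.loc[df[group_col] == nil_label].mean(numeric_only=True)
    out = df.copy()
    cols = out.select_dtypes("number").columns
    out[cols] = out[cols] / nil_means[cols]
    return out

# Hypothetical single-experiment table: two samples, one feature.
demo = pd.DataFrame({"group": ["Nil", "4T1"],
                     "neutrophils": [bead_normalise(800, 2000),
                                     bead_normalise(3200, 2000)]})
print(nil_normalise(demo))  # 4T1 neutrophils -> 4.0x the Nil mean
```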
For the LEGENDplex assays, the raw PE median analyte values were normalised as a ratio to the mean PE median analyte values from the blood of non-tumour-bearing control animals within each experiment. These "nil normalised" values were used in machine learning pipelines. To get "normalised plasma concentrations", the "nil normalised" values were multiplied by the overall mean PE median from the blood of non-tumour-bearing control animals, and concentrations interpolated using mean standard curves pooled from all experiments, with hyperbola, 5-parameter logistic regression (5PL) and Random Forest models evaluated. Since 5PL models failed for many data points and Random Forest resulted in non-Gaussian multi-cluster distributions, hyperbola models were used, as they overcame these issues. t-distributed stochastic neighbour embedding (t-SNE) unsupervised clustering was used to monitor experimental clusters within the pooled data and helped to confirm minimisation of experimental clusters using this normalisation approach. All raw and calculated data are in S1 File.
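The hyperbola fit and its inversion for interpolating sample concentrations can be sketched with SciPy as below; the parameterisation and the standard-curve numbers are assumptions for illustration, not the exact model or data used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(conc, bmax, kd, bg):
    """One-site binding hyperbola with a background term; the exact
    parameterisation used in the study is an assumption here."""
    return bg + bmax * conc / (kd + conc)

# Hypothetical pooled standard-curve data: concentration (pg/mL) vs PE median.
std_conc = np.array([0, 4.9, 19.5, 78.1, 312.5, 1250, 5000, 20000.0])
std_mfi = np.array([55, 80, 160, 480, 1500, 4200, 8300, 11800.0])

popt, _ = curve_fit(hyperbola, std_conc, std_mfi,
                    p0=(12000, 2000, 50), maxfev=10000)

def interpolate(mfi, bmax, kd, bg):
    """Invert the hyperbola to recover concentration from a sample MFI,
    clipping to the fitted dynamic range for numerical safety."""
    y = np.clip(mfi - bg, 1e-9, bmax - 1e-9)
    return kd * y / (bmax - y)

print(interpolate(3000, *popt))  # estimated concentration in pg/mL
```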
Supervised machine learning
Supervised machine learning was performed using Orange 3 software. Random Forest modelling used 500 trees, a maximum tree depth of 3, and a maximum of 4 features considered at each node (except when considering smaller feature numbers, in which case this hyperparameter changed accordingly); subsets smaller than 5 were not split, and balanced class distribution was enabled for classification learning since data groups were unbalanced. Missing data (which included a single sample without the 13-plex Proinflammatory LEGENDplex panel) were imputed using the "hot deck" 1-NN learner, which replaces the missing values with the values from the most similar example (as implemented in Orange 3 software). Initially, a learning curve was generated by plotting progressively smaller data set sizes (randomly generated from the entire data set) against modelling skill (assessing classification of the tumour subtype: 4T1, CT26 and Nil) to evaluate whether the data set size was sufficient for the outcomes targeted (S3 Fig). This revealed that modelling skill appeared to plateau from around 20% of the data set size, suggesting the data size was sufficient for the outcomes targeted. For the rest of the study, Random Forest model training was performed and cross-validated on 100%, 80% and/or 60% of randomised sample data and tested on any remaining data. The training data was validated using leave-one-out cross-validation. Feature ranking was done using Random Forest (built into the Random Forest model in Orange 3 software) and the explain model function in Orange 3, which uses SHapley Additive exPlanations (SHAP) to explain feature importance. Feature number and model fitting were optimised for classification predictions based on area under the curve of the receiver operating characteristic (AUC; to assess separability of the classes), classification accuracy (CA; proportion of correct classifications), precision (ratio of correct positive predictions to all predicted positives), recall (ratio of correct positive predictions to actual positives), and F1 score (weighted average of precision and recall) classification scores, and for regression using Mean Squared Error (MSE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) scores. Each train-and-test modelling was done a minimum of 3 times to assess variability. Once optimal features were assigned based on the above, the final predictions were modelled on all the data and results displayed using leave-one-out cross-validation on the entire data set, either as a confusion matrix for classification analysis, or a bivariate plot of predicted against actual values for regression analysis (with Pearson correlation coefficient reported and associated p values calculated using Prism software). Orange 3 workflows are provided in S2 File (classification workflow) and S3 File (regression workflow).
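A scikit-learn approximation of this setup is sketched below; the hyperparameters mirror those listed above for Orange 3's Random Forest, but the feature matrix, labels and the exact mapping of Orange's options onto scikit-learn arguments are assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical nil-normalised feature matrix (180 animals x 52 features)
# with tumour-subtype labels; the real data are in S1 File.
rng = np.random.default_rng(1)
X = rng.normal(size=(180, 52))
y = rng.choice(["Nil", "CT26", "4T1"], size=180)

# 500 trees, depth 3, up to 4 features per split, subsets < 5 not split,
# balanced classes, as described for the Orange 3 model.
rf = RandomForestClassifier(n_estimators=500, max_depth=3,
                            max_features=4, min_samples_split=5,
                            class_weight="balanced", random_state=0)

# Leave-one-out cross-validated class predictions, as used for the
# confusion matrices in the study.
pred = cross_val_predict(rf, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", np.mean(pred == y))

# SHAP feature attributions on the fitted model (per-class arrays for a
# multiclass forest), analogous to Orange 3's explain-model function.
explainer = shap.TreeExplainer(rf.fit(X, y))
shap_values = explainer.shap_values(X)
```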
Statistical analysis
For means comparisons between the Nil, CT26 and 4T1 cohorts, data were transformed using the formula Y = Log(Y + 0.0001) to help normalise distributions and equalise variance, and then assessed by 2-way ANOVA using GraphPad Prism software. Multiple comparisons were performed between the cohorts for each feature using Tukey correction, and p values are reported to test the null hypothesis that the means are equal. For analysis of features important for tumour size, a bivariate correlation matrix was built from the top-assigned features of the machine learning pipeline described above, and Spearman's correlation coefficients with associated p values were determined using R. To examine interactions among the top-assigned features, a distance matrix was constructed from the absolute Spearman's coefficients, and the global absolute Spearman distances were summarised using multidimensional scaling with network lines (at maximum levels) in Orange 3 (see S4 File for the Orange 3 workflow).
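The correlation-to-distance-to-MDS summary can be expressed compactly. The sketch below assumes a feature matrix `X` (animals × features) and a matching list `names`, and uses 1 − |rs| as the distance so that strongly correlated features sit close together; the exact distance transform used in Orange 3 is not stated, so that choice is an assumption.

```python
# Sketch: Spearman correlation matrix -> absolute-coefficient distance -> 2D MDS.
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS

def mds_of_features(X, names):
    rho, _ = spearmanr(X)             # pairwise Spearman rs across columns
    dist = 1.0 - np.abs(rho)          # assumed transform: |rs| near 1 -> close
    np.fill_diagonal(dist, 0.0)
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    return dict(zip(names, coords))   # 2D position per feature, as in Fig 3J/4J
```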
Composition of blood immune features in cancer models reveals unique tumour immune phenotypes
To characterise the blood immune profiles of tumour-bearing animals, a 4T1 breast cancer cell line tumour or a CT26 colorectal cancer cell line tumour was established subcutaneously in the right-hind flank of BALB/c mice. Animals with no tumours were used as controls (Nil), benchmarking the 'normal' immune landscape. Tumours were left to establish and grow for 14 days, and immune features were assessed from a single drop of blood taken at 7 (D7) and 14 (D14) days post tumour establishment (Fig 1A). A total of 180 animals were included in the study, with animal cohorts described in Fig 1B. Absolute leukocyte counts per unit volume of blood were assessed using flow cytometry (S1 Fig). The cell populations were delineated using 17 leukocyte-reactive mAbs and identified using manual gating cross-checked with unsupervised dimensional reduction. The strategy also included simple light-scattering profiles to delineate lymphocytes and myeloid cells, to determine whether this simple approach would be beneficial for the study aims. Blood plasma cytokine and chemokine concentrations were also assessed using two 13-plex LEGENDplex kits (S2 Fig). At the end-point (D14), solid tumours were extracted and weighed, revealing highly variable tumour mass in the two tumour models, ranging from 10 mg to >800 mg (Fig 1C).
To gain an overall impression of the blood immune landscape, the means of blood leukocyte and plasma factor composition were quantified across the 3 groups from the 180 animals at both the D7 and D14 time points (Fig 1D), and differences were further highlighted by normalising the underlying data to the mean values from Nil animals to give fold-change above normal levels (S4 Fig). This revealed a large increase in leukocytes in the blood of 4T1-bearing mice compared to the Nil mice, a difference that increased further over time (Fig 1D). This was largely driven by expansion of myeloid cells but also a subtler trend of lymphocyte increase. In contrast, there was only a slight trend of myeloid cell increase and a concomitant trend of lymphocyte decrease in CT26-bearing animals, which became more exaggerated over time. The changes in myeloid cells in both models were largely attributed to an expansion of neutrophils and monocytes. Expansion of other minor myeloid cell populations was also apparent (Fig 1D). The initial increase in lymphocytes in 4T1-bearing mice at D7 was mostly due to an increase in B cell count, which reversed to a decrease from normal at D14 and was compensated for by slight increases in CD4 T cells, CD8 T cells and NK cells at this later time point. The decrease in blood lymphocytes in CT26-bearing mice was mainly attributed to diminishing circulating B cells. There were also changes in minor subpopulations of leukocytes in tumour-bearing animals not obvious in the compositional data due to their small numbers; these included changes to CD4 T regulatory cells, DC, macrophages and PD-L1-expressing myeloid cell populations (S4 Fig). In terms of plasma factor composition, there was a notable increase in macrophage/microglial factors in 4T1-bearing mice at D7, mainly ascribed to a large increase in G-CSF, which decreased at D14, although it was still several-fold above normal levels (Fig 1D). Mice with 4T1 tumours also had a subtler increase in CXCL13 relative to normal levels, a subtle increase in IL-6 and a subtle decrease in CXCL1 compared to CT26-bearing animals (Fig 1D and S4 Fig). In CT26-bearing mice, there was a slight rise in proinflammatory factors in plasma, which increased marginally over time and appeared to be due to subtle changes in a number of factors such as CCL11, CXCL1, CXCL9 and CXCL10 (Fig 1D). These changes, however, were not statistically significantly different from control animals (S4 Fig).

Fig 1. Animals with no tumours (Nil) were used as controls to provide the normal blood immune phenotype (a). A total of 180 animals were included in the study, and animals were randomly allocated to groups at D0, as indicated in (b). End-point (D14) CT26 and 4T1 tumour masses are shown in (c), with mean and SEM overlaid in black. A 20 μl blood sample from each animal was phenotyped for immune cell populations (using cell surface marker labelling and reported as normalised cells per ~2 μl of blood) and plasma analytes (using two LEGENDplex screening kits and reported as approximations of blood concentrations) at D7 and D14 by flow cytometry. Blood cell compositions at D7 (top, left panel) and D14 (bottom, left panel), and plasma analyte compositions at D7 (top, right panel) and D14 (bottom, right panel) in Nil, CT26- and 4T1-tumour-bearing animals, respectively, are shown in (d). Cell data in (d) are reported as total absolute mean cell counts, each population being a subset of upstream lineages. Plasma analytes are reported as a subset of the total mean of analytes in the two LEGENDplex screening kits, which included the macrophage/microglial 13-plex kit (Mac/Mic) and the proinflammatory 13-plex kit (Proinflam). Three analytes overlapped in the kits, namely CCL22, CXCL1 and CCL17, and are labelled with a (1) if from the Mac/Mic panel or (2) if from the Proinflam panel. https://doi.org/10.1371/journal.pone.0264631.g001
Classification of cancer models using blood immune signatures
From these initial results, it was clear that 4T1 and CT26 tumour growth results in aberrant blood immune parameters in mice, with some common changes (such as neutrophil and monocyte expansion) but also tumour-specific changes (such as the plasma factor changes), while overall changes appeared subtler in CT26-bearing mice. To investigate how these changes might be used to predict tumour outcomes, supervised ML was applied to the normalised data (S4 Fig). Random Forest was chosen as our learner since it could be used for both our classification (tumour subtype) and regression (mass of current and future tumours) questions, and it has in-built ranking of feature importance, allowing feature reduction and biological inference [13].
After hyperparameter tuning, Random Forest was initially used to investigate whether blood immune phenotypes were unique enough to classify whether animals had no tumour (Nil), a CT26 tumour or a 4T1 tumour. Our approach was to train and test the model using progressively reduced numbers of blood immune features, sorted by importance rank. We scored the model using several classification indicators (S5 Fig and Fig 2A). To train and test the model, we used data from both the D7 and D14 time points, to see if there were features that could be used across time to classify a tumour subtype. From this we found the modelling was stable and had congruent scores in both the training and test data sets across a range of features fed into the model. However, the minimum feature number needed to maintain this was 5, suggesting 5 key features could yield accurate predictions (Fig 2A). Looking at the top 21 Random Forest-ranked features, several were highly ranked at both the D7 and D14 time points (Fig 2B). Overall, the 5 highest-ranked immune features, in descending order, were G-CSF, neutrophils, total myeloid cells, monocytes and total leukocytes. To look at how these features contributed to the model in more detail, we used the SHapley Additive exPlanations (SHAP) algorithm [8] (S5 Fig). SHAP highlighted the contribution of these 5 features: generally, as they increased, they tended to suggest a 4T1 phenotype, while there was a more complex relationship in distinguishing Nil from CT26-bearing animals. We performed dimensional reduction using t-distributed Stochastic Neighbour Embedding (t-SNE) to see if these 5 features could cluster tumour classes better than all features combined (Fig 2C). This unsupervised approach showed that the 5 top-ranked features appeared to separate the tumour classes better than all features combined, particularly the 4T1 class. Therefore, we generated the final model incorporating these features from both time points (Fig 2D). This resulted in successful classification of all 4T1-bearing animals and most of the CT26-bearing (CA ~80%) and Nil (CA ~75%) classes (2 of each being misclassified as the other out of 72 individuals in these classes). The 5 features showed the capacity to predict class at each time point alone, but generally predicted and separated classes best at the later time point (Fig 2E and 2F). Finally, looking at their quantity in the blood of all animals showed that, while these features were all significantly higher in 4T1-bearing animals compared to CT26-bearing and Nil animals, only neutrophils and monocytes showed a significant increase in CT26-bearing mice compared to Nil (while still being lower than in 4T1-bearing mice) (Fig 2G). This highlights the association of myeloid factors with tumour presence and their potential use in tumour classification, and may also suggest an underlying association of G-CSF, neutrophils and monocytes in the development of some tumours.

Fig 2. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict presence of tumour and tumour subtype (target classes being Nil, 4T1 and CT26). The model was trained initially on 80% (S5 Fig) and 60% of data, cross-validated using leave-one-out and tested using the remaining data. Modelling was done on a progressively smaller number of features, from lowest to highest ranked based on in-built Random Forest importance for class determination, and the process repeated 3 times. Model performance was assessed by several classification indicators, including area under the curve of the receiver operating characteristics (AUC; to assess separability of the classes), classification accuracy (CA; proportion of correct classifications), precision (ratio of correct positive predictions to all predicted positives), recall (ratio of correct positive predictions to actual positives) and F1 score (weighted average of precision and recall), with values ranging from 0 to 1 (the latter being best) (a). The Random Forest feature importance scores for classification of the top 21 features (ranked from lowest to highest) from the modelling are shown in (b) from n = 6 modelling repeats. Based on peak modelling performance (S5 Fig), the top 5 features from both time points were compared with all features in t-distributed stochastic neighbour embedding (t-SNE) unsupervised clustering to highlight the capacity of reduced features to maximise class distinction based on the overlap of groups, with dot sizing representing relative end-point tumour mass to assess how this relates to clusters (c). These top 5 features from both time points were used to generate the final classification modelling, which was performed on the entire data set and assessed using leave-one-out cross-validation, with results shown as a confusion matrix of all animals (d). The top 5 features from D7 (e) or D14 (f) samples were also used in modelling (presented as confusion matrices) and t-SNE analysis to highlight time differences. The top 5 features were plotted as estimated quantities in blood for all animals at each time point (Fig 1B), with their means and SEM displayed in yellow; means equality was tested using 2-way ANOVA on Log(y + 0.0001)-transformed data, with multiple comparisons with Tukey's correction shown (g). https://doi.org/10.1371/journal.pone.0264631.g002
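The progressive feature-reduction procedure used in this and the following sections can be sketched as a simple loop: rank features once with a full model, then re-train on shrinking top-k subsets and record cross-validated accuracy. This is an illustrative reconstruction under assumed data shapes, not the Orange 3 workflow itself.

```python
# Sketch of progressive feature reduction by Random Forest importance rank.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def reduction_curve(X, y, ks=(21, 15, 10, 5, 4, 3, 2, 1)):
    full = RandomForestClassifier(n_estimators=500, max_depth=3,
                                  random_state=0).fit(X, y)
    order = np.argsort(full.feature_importances_)[::-1]  # most important first
    scores = {}
    for k in ks:
        rf = RandomForestClassifier(n_estimators=500, max_depth=3, random_state=0)
        scores[k] = cross_val_score(rf, X[:, order[:k]], y,
                                    cv=LeaveOneOut()).mean()  # CA at k features
    return scores  # a plateau down to k = 5 would match the result reported above
```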
Model fitting of CT26 tumour size using blood immune signatures resulted in moderate predictability
We next wanted to see if underlying blood immune signatures could be used to predict tumour size and growth, which are often fundamental to prognosis. To do this we used the D14 end-point tumour mass as the target outcome. We first assessed whether blood immune signatures could predict current and future CT26 tumour mass using D14 and D7 blood data, respectively. As with the classification approach, we trained and tested the model using progressively reduced numbers of blood immune features sorted by importance rank, but scored the model using several regression prediction indicators (S6 Fig and Fig 3). Testing whether D14 blood could predict current tumour mass, we found Random Forest modelling was stable and had similar scores in both the training and test data sets across a range of features fed into the model; however, the minimum feature number to maintain this was 3, suggesting 3 key features could yield optimal current tumour mass predictions (S6 Fig and Fig 3A). Myeloid cell populations ranked high in modelling (Fig 3B), with Ly6C-intermediate monocytes, total myeloid cells and PD-L1-expressing Ly6C−Ly6G− (PD-L1+) myeloid cells contributing prominently to the model based on SHAP values (Fig 3C). Mice with higher numbers of these cells in the circulation typically had bigger tumours. We therefore generated the final Random Forest model with these 3 features to predict the current mass of CT26 tumours, which resulted in a significant, moderate linear correlation with the actual mass (Fig 3D).
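The regression arm follows the same pattern with a Random Forest regressor. A minimal sketch, assuming `X_top` holds the top-ranked features and `mass` the end-point tumour masses, is given below; the scikit-learn regressor stands in for Orange 3's Random Forest.

```python
# Sketch: leave-one-out tumour-mass prediction and predicted-vs-actual correlation.
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_mass_prediction(X_top, mass):
    rf = RandomForestRegressor(n_estimators=500, max_depth=3, random_state=0)
    preds = cross_val_predict(rf, X_top, mass, cv=LeaveOneOut())
    r, p = pearsonr(preds, mass)  # linear agreement, as reported for Fig 3D/3H
    return preds, r, p
```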
Testing whether D7 blood immune features could predict future D14 CT26 tumour mass, we found that modelling performance peaked at 10 features (S6 Fig and Fig 3E). While myeloid cells were an important feature, there were also several plasma immune factors, notably CCL17, CXCL10, CXCL1 and CXCL13, with high importance (Fig 3F and 3G). However, from the SHAP explanations it was apparent that extreme values of many of these features in only a few animals impacted the model, suggesting poor general association with tumour size (Fig 3G). Generating the final Random Forest model with these 10 features to predict the future mass of CT26 tumours resulted in a significant, moderate linear correlation with the actual mass (Fig 3H).
CT26 tumour mass prediction modelling suggests several key blood immune features associate with tumour development
SHAP values of immune features predicting CT26 tumour mass suggest that several features have a relationship with tumour size that together allow moderately strong tumour mass predictions to be made. To investigate this in more detail, and possibly infer some immune mechanisms supporting tumour growth, a correlation matrix was plotted of the 5 key predictive features from both the D7 and D14 blood samples, and their monotonic relationships reported via Spearman's correlation coefficient (Fig 3I). While there appeared to be significant weak-to-strong relationships among the 5 key D7 features, only CXCL10 had a significant, but weak, relationship with end-point CT26 tumour mass. In contrast, 4 of the 5 key D14 features of tumour growth had significant direct associations with tumour mass and with one another. The relationship between the key D14 features and the key D7 features was complex, with both negative and positive significant relationships (Fig 3I). Generally, CCL17 weakly and positively correlated with early factors of tumour growth (CXCL10), but then weakly and negatively correlated with late factors of tumour growth (myeloid cell populations); CXCL1 and CCL2 acted like CCL17 in this respect. To summarise all these interactions, the distances of the absolute values of the Spearman's correlations were plotted using multidimensional scaling, which shows the global relationships of the features and tumour size in 2 dimensions (Fig 3J). This emphasised the key association of D14 neutrophils, PD-L1+ myeloid cells and Ly6C-intermediate monocytes with CT26 tumour size, and a more distant relationship with D7 myeloid cells, CCL17, CXCL1, CCL2 and CXCL10 levels, and the D14 IL-10 level. From this we could postulate that CCL17, CXCL1 and CCL2 act early and in a similar way to indirectly help tumour growth, possibly by upregulating CXCL10 production and myeloid cell expansion, which act more directly on tumour growth. These correlations change with time, with early low expression of CCL17, CXCL1 and CCL2 eventually promoting myeloid cell development that maintains/promotes larger tumours. A possible model of blood immune features associated with CT26 tumour growth is depicted in Fig 3K.
Model fitting of 4T1 tumour size using blood immune signatures resulted in strong predictability
To determine whether 4T1 tumour growth could also be predicted by blood immune phenotype, a similar workflow to the above was employed. First, we tested whether D14 blood features could predict current 4T1 tumour mass. Random Forest modelling was stable and had similar scores in both the training and test data sets across a range of features fed into the model, with scores peaking at 3-5 features (S7 Fig and Fig 4A). Myeloid cells and neutrophils ranked highest in modelling, and high values of these associated with larger tumours (Fig 4B and 4C). B cell count was also among the 3 top-ranked features and, generally, lower B cell numbers correlated with a higher 4T1 tumour mass (Fig 4B and 4C). There was a more complex relationship between the next highest-ranked feature, PD-L1+ myeloid cells, and the model, with lower numbers of these cells associating with both high and lower tumour size. Using the top 3 key features in the final model resulted in predictions with a significant, strong linear relationship with actual current 4T1 tumour mass (Fig 4D).

Fig 3. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict CT26 tumour size at D14. The model was trained initially on 100%, 80% and 60% of data (S6 Fig), cross-validated using leave-one-out and tested using the remaining data. Modelling was done on a progressively smaller number of features, from lowest to highest ranked based on in-built Random Forest importance, and the process repeated 3 times (mean and standard error of the mean shown). Model performance was assessed by several regression indicators, including the error scores Mean Squared Error (MSE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) (which we hoped to minimise) and the coefficient of determination R2 (which we hoped to maximise). Initially, D14 tumour size was used as the target using D14 blood samples to assess whether blood immune features could predict current tumour size (a, b, c and d). Then D14 tumour size was used as the target using D7 blood samples to assess whether blood immune features could predict future tumour size (e, f, g and h). Model performance was summarised showing the 60%:40% training:testing split, with equality of test and train performance score means (using the top-assigned features) assessed using ANOVA (a and e). The Random Forest feature importance scores for regression of the top-10 features from the modelling are shown in (b) and (f), and the SHAP scores of these are shown in (c) and (g). Based on peak modelling performance, the top-3 features from D14 blood data (d) or the top-10 features from D7 blood data (h) were used to generate the final regression modelling to predict current and future tumour mass, respectively. Final modelling was performed on the entire data set and assessed using leave-one-out cross-validation, with the predicted tumour mass plotted against actual tumour mass (the y-axis) in scatter plots, dot sizing representing actual end-point tumour mass to assess how this relates to any clusters, and the linear relationship assessed using the Pearson correlation coefficient (r) and associated two-tailed p-values (d and h). Using the top-5 ranked features at each time point, a correlation matrix was constructed displaying all pairwise bivariate plots with loess curve fitting (lower-left half), feature names and distributions (diagonal) and Spearman's correlation coefficients (rs) with associated p-values to test for monotonic relationships (upper-right half, colour-scaled for rs values with p-values < 0.05) (i). A distance matrix of the absolute rs (|rs|) from the correlation matrix was calculated and distances plotted in 2D using multidimensional scaling (j), and a model of the interactions is summarised in (k).

Fig 4. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict D14 4T1 tumour size. The modelling and assessment were performed as described in Fig 3. Initially, D14 tumour size was used as the target using D14 blood samples to assess whether blood immune features could predict current tumour size (a, b, c and d). Then D14 tumour size was used as the target using D7 blood samples to assess whether blood immune features could predict future tumour size (e, f, g and h). Model performance was summarised showing the 60%:40% training:testing split, with equality of test and train performance score means (using the top-assigned features) assessed using ANOVA (a and e). The Random Forest feature importance scores for regression of the top-10 features from the modelling are shown in (b) and (f), and the SHAP scores of these are shown in (c) and (g). Based on peak modelling performance, the top-3 features from D14 blood data (d) and D7 blood data (h) were used to generate the final regression modelling to predict current and future tumour mass, respectively. Final modelling was performed on the entire data set and assessed using leave-one-out cross-validation, with the predicted tumour mass plotted against actual tumour mass (the y-axis) in scatter plots, dot sizing representing actual end-point tumour mass to assess how this relates to any clusters, and the linear relationship assessed using the Pearson correlation coefficient (r) and associated two-tailed p-values (d and h). Using the top-ranked features at each time point, a correlation matrix was constructed displaying all pairwise bivariate plots with loess curve fitting (lower-left half), feature names and distributions (diagonal) and Spearman's correlation coefficients (rs) with associated p-values to test for monotonic relationships (upper-right half, colour-scaled for rs values with p-values < 0.05) (i). A distance matrix of the absolute rs (|rs|) from the correlation matrix was calculated and distances plotted in 2D using multidimensional scaling (j), and a model of the interactions is summarised in (k). https://doi.org/10.1371/journal.pone.0264631.g004
Testing whether D7 features could be used to predict future 4T1 tumour mass at D14, the Random Forest modelling had peak performance with ~3 key features (S7 Fig and Fig 4E). The 3 main model drivers were plasma G-CSF, CXCL13 and IL-6 levels, with higher plasma amounts of these factors generally associating with larger 4T1 tumours (Fig 4F and 4G). Using these 3 features in the final model resulted in predictions with a significant, strong linear relationship with actual future 4T1 tumour mass (Fig 4H).
4T1 tumour mass prediction modelling suggests a few key blood immune features associate with tumour development
SHAP values of immune features predicting 4T1 mass suggest there were 6-7 features with a relationship to tumour size that together gave strong tumour mass prediction value. A correlation matrix was plotted of the 7 key features collectively from the D7 and D14 blood samples, and their monotonic relationships reported via Spearman's correlation coefficient (Fig 4I). These interactions were also summarised using multidimensional scaling to plot the distance matrix of the absolute values of the Spearman's correlations (Fig 4J). From this, it appeared that plasma G-CSF level associated directly with 4T1 tumour mass and blood neutrophil count; the latter also associated directly with 4T1 tumour growth. Plasma CXCL13 level also had a direct positive association with 4T1 tumour growth but did not appear to correlate with plasma G-CSF level or myeloid cell counts. In contrast, plasma IL-6 level had no direct association with 4T1 tumour size but correlated positively with factors that did, namely plasma G-CSF and CXCL13 levels. The role of B cells and PD-L1+ myeloid cells is unclear using monotonic measures, suggesting that if they do have a role, it is more complex. From this, we could postulate and form a model (Fig 4K) in which IL-6 acts to promote CXCL13 and G-CSF production, which may act independently to aid 4T1 growth, and in which G-CSF also promotes neutrophil expansion that supports 4T1 tumour growth.
Summary of key tumour mass associated features
From the above analysis, there were a number of features that were important in modelling predictions of tumour growth and that associated directly or indirectly with tumour size in a model-specific way. The estimated quantities of these features in blood, and comparisons between the models, are summarised in Fig 5. From these pairwise comparisons, it is apparent that most of the blood features important for modelling and correlating with CT26 growth, namely CCL17, CXCL10, total myeloid cells, CCL2, IL-10 and PD-L1+ myeloid cells, were not significantly different from the healthy levels in Nil mice (Fig 5). Indeed, of the identified important features for CT26 growth, only CXCL1, Ly6C-intermediate monocytes and neutrophils had quantities in CT26-bearing animals significantly different from the normal blood of Nil animals, and in all cases higher than normal. In contrast, most features associated with 4T1 tumour growth were significantly different from normal levels in Nil animals, with G-CSF level and neutrophil count being >10-fold higher, PD-L1+ myeloid cell count being ~2-fold higher, and both CXCL13 and IL-6 levels being less than ~2-fold higher than normal (Fig 5). B cell number, an important early feature for 4T1 growth modelling, was the only key feature not significantly different from normal levels, although the cells at D14 showed a trend of being lower than normal in these mice.

Based on the modelling, there were only 2 main features in common contributing to both tumour models' growth: D14 blood neutrophil count and D14 PD-L1+ myeloid cell count (Fig 5). The unique features associated with each tumour were mostly plasma factors. Overall, the blood immune phenotype of 4T1-bearing mice was definitively abnormal, with a few obviously aberrant immune parameters, while CT26-bearing mice had less drastic changes, making inference of key immune factors more difficult without further study.

Fig 5. The quantities in blood of key immune features associated with both CT26 and 4T1 tumour growth, from both D7 (early) and D14 (late) samples, were plotted for animals with no tumours (Nil), CT26 tumours and 4T1 tumours, displaying all values as well as box plots with min-to-max whiskers and means as '+' symbols. These were divided into tumour-specific features and features common between the tumour subtypes. Numbers of samples were as described in Fig 1B. p-values testing significance between the cohort means were assessed using 2-way ANOVA on Log(y + 0.0001)-transformed data with Tukey's multiple comparisons: *, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001; and ****, p ≤ 0.0001. https://doi.org/10.1371/journal.pone.0264631.g005
Discussion
In this study we aimed to investigate the utility of a high-throughput multiparameter flow cytometry method, coupled with machine learning (ML)-based statistical analysis, to screen blood for immune features capable of predicting cancer presence and growth, and also to make inferences about underlying cancer-immune biology. Using two syngeneic solid tumour models, a 4T1 breast cancer model and a CT26 colorectal cancer model, our workflow revealed that myeloid factors in the blood, such as neutrophils, monocytes and levels of the myeloid cell propagator G-CSF, feature prominently as key determinants of tumour classification (Fig 2). Myeloid cells, specifically neutrophils and PD-L1-expressing myeloid cells, were also common associates of tumour size in both models (Fig 5). Tumour-specific blood immune features were also identified, with elevated levels of G-CSF, IL-6 and CXCL13, and B cell counts, associating with prediction of 4T1 growth, while blood CCL17, CXCL10, CXCL1, total myeloid cells, CCL2, Ly6C-intermediate monocytes and IL-10 levels were involved in predicting CT26 tumour growth. Many of these factors have been implicated in cancer progression, showing the potential utility of our approach.
With a growing appreciation of immune responses as a hallmark of cancer development, immune phenotyping is becoming an increasingly interesting area of research in cancer management [2]. ML is recognised as an important approach to optimising future cancer diagnosis, prognosis and treatment personalisation, and is ideally suited to interpreting the abundant and complex immune parameters involved [14]. ML approaches can also be used to help make inferences about the underlying biological mechanisms being modelled, through the development of model-explanation algorithms [8]. In this study, we chose the Random Forest model [13] as our learner, since it is flexible (it can be used for both classification and regression questions), has in-built feature ranking (to help with feature selection), has fewer overfitting issues than some other models, is relatively interpretable, and performs well in real-life clinical applications compared to other shallow models and more extensive deep learning modalities [15,16]. The applied Random Forest modelling presented here identified several key blood immune features that, in combination, predicted tumour class (with misclassification of only 4 animals of 130) and size (with moderate to strong linear correlation of predicted to actual current and future tumour sizes). In addition, we used the combination of Random Forest feature ranking [13,17], SHAP explanatory values [8] and Spearman's-based bivariate correlations [18] to help make inferences about the underlying features important for outcomes. Intriguingly, while these factors ranked highly in predictive modelling, and several had significant correlations either directly or indirectly with tumour growth, many did not differ significantly from levels measured in non-tumour-bearing animals. This raises the question of potential additive or even synergistic roles for these factors in tumour development; the alternative possibility of chance association, however, cannot be discounted. The latter hypothesis can only be probed with further experimental input, such as blocking and/or knockdown/knockout studies of the identified key features in vivo. While this is beyond the scope of the current study, we note that independent reports support a role for these factors in cancer development, as discussed below.
One of the most upregulated factors we identified as a potential early driver of 4T1 growth was G-CSF. Previous observations have shown that 4T1 tumour cells are potent producers of G-CSF [19,20] and that abrogating G-CSF production significantly diminishes tumour growth in preclinical breast cancer models [19]. We also showed that elevated neutrophils (annotated CD11b+Ly6G+ cells) strongly correlated with advancing tumours (Fig 5). Previous reports show 4T1 tumour cells induce profound granulocytosis in vivo [9,21], and separate reports reveal a critical role for G-CSF in 4T1 growth and metastasis through changes in granulocyte frequencies (referred to in those reports as myeloid-derived suppressor cells, MDSCs, which can have a CD11b+Ly6G+ phenotype) [22]. Clinically, G-CSF can be significantly higher in the plasma of breast cancer patients, and plasma levels correlate with more advanced disease [23], as do blood levels of neutrophils [24]. Intriguingly, IL-6, another early signature of 4T1 growth that we identified, cooperates with G-CSF to promote the pro-tumour function of neutrophils [25]. IL-6 is often associated with the tumour microenvironment [26]; clinically, circulating IL-6 level is associated with poor prognosis and low survival rate in patients with breast cancer [27], while IL-6 polymorphisms are linked to increased breast cancer risk [28]. Thus, IL-6 and G-CSF may work in concert on neutrophil function to promote breast cancer growth.
We also identified CXCL13 as another early factor correlating with 4T1 tumour growth, and its role in breast cancer has been widely reported [29-31]. However, published studies are conflicting with regard to its role in the 4T1 model, with support for both pro-tumour activity [32] and anti-tumour activity [33]. Indeed, CXCL13 has generally been shown to drive growth and invasive signals in many tumours, but it also correlates with improved survival in other tumours [34], suggesting a context-dependent role for this cytokine in cancer progression. A further intriguing aspect of CXCL13 biology is that it acts as a chemoattractant for B cells [34], which were also identified as an important feature of 4T1 tumour growth in our analysis. The contribution of B cells in antitumour immunity remains controversial [35], with both pro- and anti-tumour effects. In addition, CXCL13 production from bone marrow endothelial cells occurs in response to IL-6 [36], which is also known to be a B cell differentiation and activation factor [34]. Based on these reports, and our data, we can formulate a model for all these factors that potentiates breast cancer growth (Fig 4K). Here, IL-6, promoted by the tumour microenvironment, may interact in concert with G-CSF to drive neutrophil pro-tumour activity and also production of CXCL13. CXCL13 may then act as a pro-tumour factor and, with IL-6, promote B cell responses which also act on tumour growth. Finally, we have identified a PD-L1-expressing myeloid population, a third top feature of our 4T1 ML model (Fig 4I), which correlates with circulating B cell number and thus may also act indirectly to support tumour growth. While circulating PD-L1-expressing myeloid populations are less well documented than the factors described above, it has been reported that, in lung cancer, response to PD-1/PD-L1 blockade treatment correlated with systemic PD-L1+CD11b+ myeloid cell frequency, suggesting a potential for stratification based on systemic PD-L1+ myeloid cell subsets [37]. Further study of these cells is warranted.
In the CT26 tumour model, we identified early levels of CXCL1, CCL2 and CCL17 as having important roles in predicting tumour growth, as well as similar pairwise correlations with factors directly correlating with tumour growth and with one another (Fig 3I), suggesting they played similar roles in this context. CXCL1 is known to promote recruitment and activation of neutrophils [38], premetastatic niche formation [39], tumour invasive potential [40] and tumorigenicity in metastatic colorectal cancer patients [41], and therefore, not surprisingly, serves as a biomarker for poor prognosis. Similarly, CCL2 promotes the recruitment of immunosuppressive tumour-associated macrophages [42], promotes CT26 tumour growth [43] and associates with poor outcomes in metastatic human colorectal cancer [42]. In contrast, CCL17 has been reported to play a complex and somewhat contradictory role in cancer development and progression [44]. CCL17 can promote an anti-CT26 tumour immune response [45], and high serum levels are associated with improved survival rates in advanced melanoma patients [46]. On the other hand, tumour-associated neutrophils can produce CCL17, recruiting CD4 T regulatory cells that promote immune evasion and cancer development in non-small cell lung cancer [47,48]. It is possible that the location, timing and context of CCL17 expression determine its impact on cancer establishment and progression. Indeed, this may also be the case with CXCL1 and CCL2, since all three of these factors associated positively with early correlates of CT26 growth, such as blood myeloid cells and plasma CXCL10 levels, but then also associated negatively with late factors correlating with CT26 growth, such as monocytes with a Ly6G−Ly6C-intermediate phenotype and neutrophils (Fig 3).
CXCL10 was identified as an early weak correlate of CT26 growth in our analysis. Clinically, CXCL10 has been associated with pro- and anti-tumour responses in colorectal cancer patients [49,50]. A recent study across 3,763 colorectal cancer patients suggested lower CXCL10 expression was significantly associated with disease spread, recurrence and overall survival, and this association was dependent on other factors such as age and population-based genetic differences [50]. This suggests that CXCL10 expression may have potential as a predictive biomarker in colorectal cancer management, once these variables are taken into account. Similarly, IL-10, a feature involved in prediction of CT26 growth in our analysis, is also associated with colorectal cancer patient prognosis, but in a context-dependent manner, being generally lower in patients compared to controls, but higher in patients with poor prognosis [51].
While several myeloid cells were identified as late associates of CT26 growth, the late appearance of monocytes with a Ly6G−Ly6C-intermediate phenotype had the strongest association with tumour size (Fig 4). Tumour monocyte subsets are known to have diverse roles in tumour progression [52]. Related to this, CCL2 is a primary recruiter of tumour monocyte subsets [52] and CXCL10 is known to be a monocyte recruitment factor [53]. In our study, early levels of CCL2 and CXCL10 were associated with one another, and early CXCL10 levels had a weak correlation with the later appearance of Ly6G−Ly6C-intermediate monocytes. Based on these observations and reports by others, a potential model for the role of the key blood immune factors identified can be postulated for colorectal cancer development (Fig 3K). Here, early production of CCL2 and CXCL1 may help shape the initial myeloid cell compartment in cancer-bearing individuals, which promotes tumour development and production of CCL17 and CXCL10 [54], which in turn modulate recruitment of leukocytes. The early soluble factors may then help shape later tumour-associated factors such as IL-10, neutrophils, PD-L1-expressing myeloid cells and Ly6G−Ly6C-intermediate monocytes, which play roles in tumour development.
Undoubtedly, our work is limited by the choice of models used to develop the workflow. While murine syngeneic cell line models are among the most widely used tools for studying cancer [55,56] and have been involved in landmark discoveries [57,58], there are several limitations to this approach. Cell line-derived models are non-autochthonous, and thus may not have the normal architecture or development that occurs in tumours evolving de novo. Indeed, the injection of the cell lines may in itself alter the inflammatory environment in a way that would not be seen in de novo tumour growth [59]. The loss of genetic heterogeneity and irreversible changes in gene expression resulting from long-term in vitro propagation of tumour cell lines may also mean that we do not observe the same level of intra-individual heterogeneity that is common in human tumours [56,60]. Furthermore, the use of inbred mouse strains does not reflect the vast inter-individual heterogeneity present in the clinic [56]. While we have attempted to overcome some of these issues by using two distinct and diverse cell lines, there would be obvious benefit to increasing this diversity with additional models, given the resources. Nevertheless, there is clinical evidence to support our findings (as discussed above) and thus our study provides an approach that may work clinically, which is the ultimate goal.
While beyond the scope of this current study, the workflow developed here is now being modified for clinical implementation in cancer patients. This will involve initial high-dimensional screens (using protein arrays and LEGENDScreen™ technologies) to identify blood cell and plasma features that may be associated with cancer-specific progression. Key features will then be rationalised in a high-throughput assay/machine learning pipeline analogous to that reported here and used to phenotype blood of cancer patients and closely matched healthy controls to assess capacity to predict patient outcomes over time.
In summary, our work demonstrates the benefit of a high-dimensional data pipeline for the identification of key immune features that interact with tumour development. Our analysis has highlighted the great complexity in the relationship between the immune response and tumour development, where expression of a single molecule may well be insufficient to predict or explain tumour progression. Indeed, it is clear that many immune factors have context-dependent roles in cancer development [34,44,61]. With this in mind, we believe a multivariate approach to "biomarker" identification for use in the prognostication and treatment personalisation of cancer is well warranted. Furthermore, we are confident that this work demonstrates the utility of an immune-based workflow in combination with ML to enable identification of context-dependent predictive immune features for the study of tumour outcome. It will be of further interest whether such an approach can be utilised to predict treatment outcomes, justifying a role for assessing multivariate immune biomarkers for cancer treatment personalisation.
Supporting information

S1 Fig. Gating and population names for leukocyte subsets. FlowJo software was used to delineate leukocyte populations using manual and boolean gates on concatenated samples, with the scheme shown in (a) acting as a template for the entire study. FIt-SNE plots from concatenated live CD45+ samples, generated with default FlowJo settings, were overlaid with each manually gated population to ensure the gating scheme generated similar populations to those generated by the unsupervised approach (b), with the manually gated populations identified by colour and name (c). The process was refined until the two approaches were good approximations of each other, resulting in the manual gates displayed in (a). Generic and short-form names of each population were then assigned based on marker expression and used throughout the manuscript (d). (TIFF)

S3 Fig. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict presence of tumour and tumour subtype (target classes being Nil, 4T1 and CT26). Modelling was done on a progressively smaller number of random samples, and model performance was assessed using cross-validation with a training set of 80% of randomly obtained data, tested on the remaining data, and this was repeated 100 times. Model performance was assessed by several classification indicators, including area under the curve of the receiver operating characteristics (AUC; to assess separability of the classes), classification accuracy (CA; proportion of correct classifications), precision (ratio of correct positive predictions to all predicted positives), recall (ratio of correct positive predictions to actual positives) and F1 score (weighted average of precision and recall), with values ranging from 0 to 1 (the latter being best). (TIFF)

S4 Fig. Normalised blood immune phenotypes in animal tumour models. CT26 or 4T1 tumours were grown in female BALB/c mice and blood immune phenotypes determined at D7 and D14, as described in Fig 1. Animals with no tumours (Nil) were used as normal immune phenotype controls. A total of 180 animals were included in the study, divided into the groups indicated in Fig 1B. A 20 μl blood sample from each animal at each time point was phenotyped for leukocyte populations and plasma analytes (Fig 1). Cell and plasma analytes are reported as fold-changes from the mean of Nil mice, or "nil normalised", as described in the methods, for both the D7 and D14 time points, and presented on a log scale (with numbers of 0-value data points indicated on the axis). Means and SEM are indicated (shown in yellow), and mean equality was tested using 2-way ANOVA on Log(y + 0.0001)-transformed data with Tukey's multiple comparisons correction, with p-values indicated (*, p ≤ 0.05; **, p ≤ 0.01; ***, p ≤ 0.001; ****, p ≤ 0.0001). Heatmap summaries of the data highlighting the changes are also shown. Three analytes overlapped in the LEGENDplex kits, namely CCL22, CXCL1 and CCL17, and are labelled with a (1) if from the Mac/Mic panel or (2) if from the Proinflam panel. (TIFF)

S5 Fig. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict presence of tumour and tumour subtype (target classes being Nil, 4T1 and CT26). The model was trained on 80% (a) and 60% (b) of randomly selected data, cross-validated using leave-one-out, and tested using the remaining data. Modelling was done on a progressively smaller number of features, from lowest to highest ranked based on in-built Random Forest importance for class determination, and the process repeated 3 times. Model performance was assessed by several classification indicators, including AUC, CA, precision, recall and F1 score, with values ranging from 0 to 1 (the latter being best). The SHapley Additive exPlanations (SHAP) feature importance scores for classification using the top-15 features (ranked from highest to lowest) are shown in (c), and show how the feature values impact classification of each animal cohort, namely the healthy control (Nil), CT26-bearing and 4T1-bearing cohorts. (TIFF)

S6 Fig. Normalised blood immune features from the animals described in Fig 1B were used in Random Forest modelling to predict CT26 tumour size at D14. The model was trained initially on 100%, 80% and 60% of randomised data, cross-validated using leave-one-out (Train panels) and tested using the remaining data (Test panels). Modelling was done on a progressively smaller number of features, from lowest to highest ranked based on in-built Random Forest importance, and the process repeated 3 times (mean and standard error of the mean shown). Model performance was summarised showing the 60%:40% training:testing split, with equality of test and train performance score means (using the top-assigned features) assessed using ANOVA and shown in the main figure (Fig 3). The Random Forest rank (RF rank) scores for the top-10 features are shown. Model performance was assessed by several regression indicators, including the error scores Mean Squared Error (MSE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) (which we hoped to minimise) and the coefficient of determination R2. D14 tumour size was used as the target using D14 blood samples to assess whether blood immune features could predict current tumour size (a). D14 tumour size was used as the target using D7 blood samples to assess whether blood immune features could predict future tumour size (b). (TIFF)

S7 Fig. Random Forest modelling to predict 4T1 tumour size and growth using blood immune phenotypes. Normalised blood immune features (S4 Fig) taken from 58 4T1-bearing animals that had both D7 and D14 blood samples (Fig 1B) were used in Random Forest modelling to predict 4T1 tumour size at D14. The model was trained initially on 100%, 80% and 60% of randomised data, cross-validated using leave-one-out (Train panels) and tested using the remaining data (Test panels). Modelling was done on a progressively smaller number of features, from lowest to highest ranked based on in-built Random Forest importance, and the process repeated 3 times (mean and standard error of the mean shown). Model performance was summarised showing the 60%:40% training:testing split, with equality of test and train performance score means (using the top-assigned features) assessed using ANOVA and shown in the main figure (Fig 4). The Random Forest rank (RF rank) scores for the top-10 features are shown. Model performance was assessed by several regression indicators, including the error scores MSE, MAE and RMSE (which we hoped to minimise) and the coefficient of determination R2. D14 tumour size was used as the target using D14 blood samples to assess whether blood immune features could predict current tumour size (a). D14 tumour size was used as the target using D7 blood samples to assess whether blood immune features could predict future tumour size (b). (TIFF)

S1 Table. List of antibodies for cell surface labelling.
"year": 2022,
"sha1": "b2f11587d80ef9ce793fe5d6485ff342511c1a32",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0264631&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2f11587d80ef9ce793fe5d6485ff342511c1a32",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Burden of Pneumocystis Pneumonia Infection among HIV Patients in Ethiopia: A Systematic Review
Pneumocystis pneumonia (PCP) is a leading cause of death among patients with AIDS worldwide, but its burden is difficult to estimate in low- and middle-income countries, including Ethiopia. This systematic review aimed to estimate the pooled prevalence of PCP in Ethiopia, the second most populous African country. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to review published and unpublished studies conducted in Ethiopia. Studies that reported the prevalence of PCP among HIV-infected patients were searched systematically. Variation between the studies was assessed using forest plots and I-squared heterogeneity tests. Subgroup and sensitivity analyses were carried out when I² > 50%. The pooled prevalence estimate with 95% CI was computed using a random-effects model. Thirteen articles, comprising studies of 4847 individuals living with HIV, were included for analysis. The pooled prevalence of PCP was 5.65% (95% CI [3.74-7.56]) with high heterogeneity (I² = 93.6%, p < 0.01). To identify the source of heterogeneity, subgroup analyses were conducted by study design, geographical region, diagnostic method, and year of publication. PCP prevalence differed significantly when biological diagnostic methods were used (32.25%), in studies published before 2010 (32.51%), in cross-sectional studies (8.08%), and in Addis Ababa (14.05%). Lower PCP prevalences of 3.25%, 3.07%, 3.23%, and 2.29% were recorded in studies based on clinical records, studies published since 2017, follow-up studies, and north-west Ethiopian studies, respectively. The prevalence of PCP is probably underestimated, as the reports were mainly based on clinical records. An expansion of biological diagnostic methods could make it possible to estimate the exact burden of PCP in Ethiopia.
Introduction
Pneumocystis pneumonia (PCP) is a significant opportunistic infection (OI) among patients with human immunodeficiency virus (HIV). It is caused by Pneumocystis jirovecii, which remains a major cause of morbidity and mortality among HIV-infected patients despite the use of highly active antiretroviral therapy [1,2]. Even though the problem is greatest in low- and middle-income countries, it has also become a major life-threatening infection among transplant patients and patients on immunosuppressive therapies or with cancer in high-income countries [3,4].
PCP was first recognized in undernourished children and in patients who were immune-compromised with malignancies, immunosuppressive therapy, or congenital immune deficiencies, but it was uncommon until 1980. The rate of PCP infection then increased dramatically with the onset of the HIV/AIDS epidemic in the 1980s.
Searching Strategy
Different search strategies were used to find studies to be included in the review and meta-analysis. We searched studies published in the English language online in different databases: PubMed, HINARI, Web of Science, Google Scholar, and university repositories. Searches were carried out from 10 December 2020 to 7 January 2021. The search terms used for the PubMed database were (Pneumocystis OR pneumonia); OR (Pneumocystis OR Pneumocystis jirovecii OR Pneumocystis carinii OR AIDS-related opportunistic infections); AND (HIV AND (patients OR persons)); AND (Ethiopia).
Separate search terms also were used to find candidate studies for this study using Google Scholar and Google databases, using the terms "Pneumocystis"; OR "Pneumonia, Pneumocystis"; OR "Pneumocystis carinii"; OR "Pneumocystis jirovecii"; OR "AIDS-related opportunistic infections"; AND "HIV"; AND "patients" OR "persons"; AND "Ethiopia". The selection and exclusion of studies for the systematic review and meta-analysis followed the PRISMA guidelines.
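As an illustration only, such a query can be automated against the PubMed E-utilities with Biopython; the email address, retmax value and the exact Boolean string below are assumptions rather than the authors' actual (manually executed) searches.

```python
# Hypothetical automation of the PubMed search described above (Biopython Entrez).
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address; placeholder
query = ('(Pneumocystis OR pneumonia) AND (Pneumocystis jirovecii OR '
         'Pneumocystis carinii OR "AIDS-related opportunistic infections") '
         'AND (HIV AND (patients OR persons)) AND Ethiopia')
handle = Entrez.esearch(db="pubmed", term=query, retmax=1000,
                        datetype="pdat", mindate="2000/01/01", maxdate="2021/01/01")
record = Entrez.read(handle)
print(record["Count"], "hits; first IDs:", record["IdList"][:5])
```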
Inclusion and Exclusion Criteria
Epidemiological studies that assessed the prevalence of PCP among HIV/AIDS patients in Ethiopia were included. An article was included if it was conducted among HIV/AIDS patients. Articles published between 1 January 2000 and 1 January 2021 in peer-reviewed journals or university repositories were included. Both cross-sectional and follow-up (trials and cohort) studies published in the English language were included.
Study Selection
All retrieved records were screened against the inclusion criteria. Initially, the decision was based on the title and abstract of the article. For studies that fit the inclusion criteria, or when a definitive decision could not be made based on the title/abstract alone, the full paper was reviewed. The eligibility of each study was assessed independently by two investigators, and the paper was given to a third reviewer to establish a consensus when a discrepancy occurred between the reviewers.
Data Extraction
Data were collected independently by two individual reviewers from each eligible publication and recorded on a standardized form. During data extraction, variables such as study characteristics, year of publication, design, study population, region, ART coverage, cotrimoxazole preventive therapy (CPT), CD4 count, diagnostic method, and PCP prevalence were captured.
Strategy for Data Synthesis
Data were imported into R Studio (R version 4.0.2, 22 June 2020) from an Excel (Microsoft) sheet for analysis. Both qualitative synthesis and quantitative analysis were performed to present the data extracted from each study. The pooled prevalence of PCP with a 95% confidence interval (95% CI) was calculated using a random-effects model. Heterogeneity between studies was examined using the I-squared statistic. According to the test result, an I² estimate greater than 50% was considered indicative of a moderate to high level of heterogeneity [22]. The DerSimonian and Laird random-effects method was used to incorporate an additional between-study component into the variability estimate [23]. Subgroup analyses were performed by outcome ascertainment, study setting, study design and year of publication as possible sources of heterogeneity between studies. Sensitivity analyses were performed by excluding each study one by one and calculating a pooled prevalence for the remainder of the studies. Publication bias was checked by funnel plot and Egger's test [24]. Duval and Tweedie's trim-and-fill analysis was carried out to adjust the pooled prevalence estimate for publication bias [25].
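The pooling itself reduces to a few lines. The sketch below implements DerSimonian-Laird pooling of raw proportions with an I² estimate in Python; it is a stand-in for the R workflow actually used (which may have applied a variance-stabilising transformation not specified here), and it assumes every study has 0 < events < n.

```python
# Minimal DerSimonian-Laird random-effects pooling of study prevalences.
import numpy as np

def dl_pool(events, n):
    p = events / n                      # per-study prevalence (0 < events < n assumed)
    v = p * (1 - p) / n                 # within-study variance of a proportion
    w = 1 / v                           # fixed-effect (inverse-variance) weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)  # Cochran's Q
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)       # DL between-study variance
    i2 = 0.0 if q <= df else (q - df) / q * 100   # I-squared heterogeneity (%)
    w_re = 1 / (v + tau2)               # random-effects weights
    pooled = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2  # 95% CI (z = 1.96)
```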
Results
Initially, 7722 articles were identified. After screening, 13 articles were included for analysis (Figure 1).
Characteristics of the Studies
This systematic review included thirteen studies published between 2003 and 2020, comprising studies of 4847 individuals living with HIV. The sample size of included studies ranged from 131 [21] to 744 [26]. All except one study [27] included participants ≥15 years. Five studies were carried out in Addis Ababa, Ethiopia's capital city. Regarding the design, the majority of studies (eight, 61.54%) followed a cross-sectional study design. Among the 13 studies that reported PCP prevalence, 11 (84.6%) used clinical records to assess the diagnosis (Table 1).
Prevalence of PCP
The overall analysis of these 13 studies, using the DerSimonian and Laird random-effects model, found that the pooled prevalence of PCP among people living with HIV was 5.65% (95% CI [3.74-7.56]) with high heterogeneity (I² = 93.6%, p < 0.01) (Figure 2).
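For reference, the heterogeneity statistic quoted here is conventionally derived from Cochran's Q over k studies (k = 13 in this review), using the standard Higgins-Thompson formulation:

$$I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%$$

so the reported I² = 93.6% indicates that the great majority of the observed variability in prevalence estimates reflects between-study differences rather than sampling error.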
Subgroup Analysis
We carried out a subgroup analysis of four variables (PCP ascertainment methods, geographical region where the study was carried out, year of publication, and study design), and a random-effects model was used to test for subgroup differences.
The studies used different designs (cross-sectional and cohort studies). As PCP is a subacute disease, the study design could contribute to differences in the pooled prevalence related to the timing of the occurrence of the disease. The pooled effect for each subgroup differed substantially (8.08% vs. 3.23% for cross-sectional and cohort/follow-up studies, respectively) in the test of subgroup difference under the random-effects model (I² for cross-sectional = 95.4%; I² for cohort = 83.8%). This result was unexpected since PCP can evolve progressively over several weeks; thus, cross-sectional studies are more likely than follow-up studies to miss PCP (Figure 3).
The studies were published between 2003 and 2020. As the treatment guidelines changed repeatedly within this period, the overall time frame was organized into three subgroups to take this into account (Figure 4).
The studies were carried out in three main regions of the country (Addis Ababa city, eastern Ethiopia, and northern Ethiopia). The pooled prevalence of PCP in these subgroups differed substantially (14.05%, 3.92%, and 2.29%, respectively). However, the two studies by Aderaye et al. [21,28] might have outweighed the other studies in the Addis Ababa city subgroup, as the heterogeneity of this subgroup was very high (I² = 97%) (Figure 5). The outcomes (PCP or other pulmonary infection) of the 13 studies were ascertained by different methods (clinical records or biological methods, including nested PCR and direct examination by immunofluorescence). The prevalence of PCP in studies based on clinical charts and in studies based on microbiological diagnosis differed substantially (3.25% vs. 32.51%, respectively; Q = 24, p < 0.0001), meaning that the diagnostic method greatly influenced the estimates of PCP prevalence, with biological diagnosis yielding higher prevalence rates (Figure 6).
One limitation of a systematic review and meta-analysis is that not all studies carried out are published. Additionally, studies that yield statistically significant results are more likely to be submitted and published than works with non-significant results, which can lead to a publication bias that needs to be assessed. This was assessed subjectively through a funnel plot and objectively through Egger's test. The funnel plot showed that, while some studies had statistically significant results, others did not. In addition, there was a trend for large studies to be non-significant to partially significant, while significance became stronger as study size decreased; the asymmetry was much larger for small studies. This indicates that publication bias might indeed have been present in our analysis (Figure 7). To measure publication bias objectively, Egger's test of the intercept was used. The test confirmed the presence of slight but significant funnel-plot asymmetry (Q = 186.248, degrees of freedom = 12, p < 0.001). Even though this result indicated the presence of publication bias, the trim-and-fill analysis left the pooled estimate unchanged whether or not trimming was applied (Q = 186.248, degrees of freedom = 12, moment-based estimate of between-studies variance = 10.544).
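As a companion to the pooling sketch in the methods above, the following lines illustrate how these publication-bias checks could be run on the same hypothetical 'meta' object m; this is a sketch under the same assumptions, not the authors' actual script.

```r
# Publication-bias checks on the pooled object m from the earlier sketch
funnel(m)                            # subjective check: funnel-plot asymmetry
metabias(m, method.bias = "linreg")  # objective check: Egger's test of the intercept
trimfill(m)                          # Duval-Tweedie trim-and-fill adjusted estimate
```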
Discussion
Based on modeling using available data from well-defined risk populations, it was estimated that fungi infect approximately 8% of people in Ethiopia every year [37]. According to the authors of that study, the estimated incidence of PCP was 12.1 cases per 100,000 person-years in the whole population. However, as far as the authors are aware, no published systematic review had determined the pooled prevalence of PCP among people living with HIV in Ethiopia. Therefore, this systematic review and meta-analysis aimed to synthesize and estimate the prevalence of PCP in HIV-infected patients in Ethiopia. Accordingly, the pooled prevalence of PCP in HIV-infected patients in Ethiopia was found to be 5.65% (95% CI [3.74-7.56]). The prevalence of PCP showed a decrease over time: it was 33% in 2003-2010, while it dropped to 4% and 3% in 2011-2016 and 2017-2020, respectively (Q = 25.9, p < 0.0001). A related finding, supported by a previous study conducted in Uganda, is that the prevalence of PCP decreases as the CD4 cell count increases [38]. Over the period in question, this links to the expansion of ART coverage, which reached 76% among adults (aged ≥15) in 2017, compared with none in 2003. ART was introduced in 2003 in selected health facilities, and free ART was launched in 2005 [18]. Expanding the accessibility and availability of ART has fundamentally improved the survival of people living with HIV by lowering the incidence of OIs [39]. Another explanation might be the models for patients' eligibility to begin ART. Since 2017, Ethiopia has operated a test-and-treat strategy: all HIV-positive patients are eligible for ART, regardless of their WHO clinical stage or CD4 count, and ART initiation is offered on the same day if the patients are mentally prepared to start ART. Before that, from 2011, teenagers and adults only began ART when they presented with a serious or progressed HIV clinical infection (WHO clinical stage 3 or 4) or with a CD4 count ≤350 cells/mm³. Before 2011, they began ART when they presented with advanced HIV clinical infection (WHO clinical stage 3 or 4) or a CD4 count ≤200 cells/mm³ [19]. The relatively new test-and-treat system and the commencement of ART before advanced illness may account for the decrease in the incidence of PCP and other OIs in Ethiopia. Today, the proportion of patients with a CD4 count <200 cells/mm³ is low among people who start ART and cotrimoxazole preventive therapy (CPT), thus lessening the rate of PCP and other OIs. The hypothesis that early treatment is responsible for this effect is in agreement with a previous study in America, where ART and CPT diminished the proportion of patients with CD4 counts <200 cells/mm³ [4].
The risk of developing OIs in HIV patients depends on the use of antimicrobial prophylaxis, the level of host resistance, and the virulence of pathogens [40]. Cotrimoxazole (a combination of sulfamethoxazole and trimethoprim) is a broad-spectrum, safe, well-tolerated, low-cost, and widely available antimicrobial agent used as standard care for people living with HIV, and it is utilized in primary healthcare to treat various infections [41]. The Ethiopian national guidelines suggest CPT for individuals living with HIV for the prevention of Pneumocystis pneumonia and other OIs, such as bacterial infections and toxoplasmosis [41]. The guidelines recommend starting CPT when the CD4 count is <350 cells/mm³, regardless of WHO clinical stage, and/or in WHO clinical stages 3 or 4, regardless of CD4 count. Recently, the availability and accessibility of CPT reached almost 100% in Ethiopia, which may have contributed to the decrease in the PCP rate among people living with HIV. Among the studies included in this review, the overall rate of PCP was low; however, the frequency of PCP was high among HIV-positive individuals who did not take CPT. Previously, the availability of CPT at health facilities was low, which might explain the high incidence of PCP in the 2000s.
Another reason for the reduction in the PCP rate might be the increasing awareness of, and testing uptake for, HIV in Ethiopia, which make it possible for those living with HIV to start care before they develop HIV-related complications. Additionally, adherence to care is increasing owing to the reduction of stigma and discrimination in the community.
In addition, the prevalence of PCP varied according to the diagnostic method. It was low in studies that established the PCP diagnosis from clinical signs or medical records (3%), compared with studies that used biological diagnostic methods (32%). One possible reason for the low PCP prevalence in clinical studies is that PCP might be overlooked if another diagnosis is obtained (for example, TB, using GeneXpert® screening), or it may be misdiagnosed as a bacterial infection [42]. As cotrimoxazole is a widely used drug, PCP might go unrecognized yet be successfully treated. Additionally, prevalence rates observed in biological studies could be overestimated if patients were not included prospectively, or if only the most severe patients were screened using biological tools.
The pooled prevalence was in line with studies carried out in Vietnam (5%) [43] and India (5%) [44]. However, it was lower than those observed in Kenya (37%) [45], Uganda (39%) [46], Botswana (31%) [47], Senegal (43%) [48], Thailand (19%-57%) [49,50], and Chile (38%) [51]. The survey periods might account for this disparity, as most of the above studies were carried out before 2010. At that time, the incidence of PCP was high in developing countries, including Ethiopia, as shown by our subgroup analysis. Alternatively, the above-noted discrepancies may be attributable to diagnostic methods, as most of the studies (85%) were based on clinical diagnosis, with the inherent bias discussed above. Our pooled PCP prevalence was higher than in two studies conducted in South Africa (0.5%) [52] and Malawi (1%) [53], but the authors noted that these were cohort studies that excluded close to half (48.5%) of participants for various reasons; therefore, PCP cases could have been missed before follow-up began. Another reason might be that participants took CPT, which is ideal for both primary and secondary prophylaxis of PCP [54].
Conclusions
This study provides a national estimate of PCP in Ethiopia, to help policymakers and program managers design appropriate and cost-effective strategies to minimize the burden of PCP. The prevalence of PCP is decreasing over time. Overall, approximately 6% of individuals infected with HIV experienced PCP, which is relatively low but could be underestimated. The expansion of biological diagnostic methods may enable a greater understanding of the exact burden of PCP in Ethiopia. Molecular diagnosis of PCP is now considered the gold standard [55]. As Pneumocystis PCR has recently been added to the WHO list of essential diagnostics, the question of PCP prevalence is timely, and this review could motivate the implementation of this diagnostic method in Ethiopia [56,57].
"year": 2023,
"sha1": "598bc779b6861e2faae683280b49028179c9a8a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2414-6366/8/2/114/pdf?version=1676275020",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34c1861c076ff6ae1905d67a86c6cbb4b403dcc4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Trends in Off-Label Indications of Non-Vitamin K Antagonist Oral Anticoagulants in Acute Coronary Syndrome
Acute coronary syndrome (ACS) is a leading cause of mortality worldwide. Despite recommendations for optimal antiplatelet therapy after ischemic events, the rate of recurrent thrombotic complications remains high. These recurrent events may be due in part to increased thrombin levels during ACS, which may underscore the need for additional anticoagulation therapy. Given the advantages of non-vitamin K antagonist oral anticoagulants (NOACs) over warfarin, they have the potential to prevent thrombus formation, in the presence or absence of atrial fibrillation, but at the cost of an increased risk of bleeding. NOACs have also shown promising efficacy in managing left ventricular thrombus and a potential benefit in avoiding stent thrombosis after percutaneous coronary revascularization. Taken as a whole, NOACs are increasingly used for off-licence indications and continue to evolve as essential therapy in preventing and treating thrombotic events. This review discusses the off-label indications of NOACs in the setting of ischemic coronary disease.
Introduction
Acute coronary syndrome (ACS) is a medical emergency that occurs because of coronary artery occlusion leading to myocardial hypoperfusion [1]. ACS is associated with morbidity and mortality, particularly during hospitalization and in the 30 days after the event; however, the risk of recurrent cardiovascular events persists beyond that period. The history of anticoagulant agents for treating ACS dates back to the 1930s, when animal studies showed that intravenous heparin reduced thrombus formation; clinical studies with oral anticoagulants followed in the 1940s [2]. The introduction of non-vitamin K antagonist oral anticoagulants (NOACs) has revolutionized the landscape of anticoagulation therapy [3], and NOACs have become the cornerstone of thrombosis management in various cardiovascular contexts [4]. NOACs are either direct thrombin inhibitors, namely dabigatran, or factor Xa inhibitors, including apixaban, betrixaban, edoxaban, and rivaroxaban. They are characterised by predictable pharmacokinetic properties, quick onset and offset of action, fixed dosing regimens, less frequent monitoring or follow-up needs, an acceptable safety profile, few drug-food and drug-drug interactions, and safety and efficacy comparable to warfarin in the approved indications, i.e., atrial fibrillation (AF) and venous thromboembolism [4,5], in addition to a potential long-term cost-saving benefit [5]. Consequently, NOACs were labelled for many indications by regulatory bodies and recommended by international guidelines [3]; interest in NOACs has grown, and their use has been increasing in several off-label indications. This review discusses the off-label indications of NOACs in ischemic coronary disease, such as in peri-percutaneous coronary procedures, after cardiac and non-cardiac surgeries, and for left ventricular (LV) thrombus following ischemic events.
Simplistic Mechanism of Ischemic Coronary Thrombosis
Ischemic heart disease is a leading cause of death, with coronary artery disease (CAD) responsible for 30% of deaths worldwide. Coronary thrombosis in patients with CAD leads to ACS and death [6,7]. Patients presenting with ACS require immediate antithrombotic therapy [1]. The acute ischemic event occurs when plaque rupture leads to partial or total coronary artery occlusion. The vascular damage exposes subendothelial collagen, von Willebrand factor, and tissue factor [7]; platelets adhere to collagen and von Willebrand factor in the ruptured plaque [2], with eventual platelet activation and aggregation [8]. The sub-endothelium-released tissue factor provokes the coagulation cascade and, subsequently, the release of thrombin, which drives thrombus formation and further platelet activation [2,6,7]. Anticoagulant agents act at several stages of the coagulation cascade to prevent thrombus formation and, ultimately, new or recurrent thrombotic events [3]; hence the need for antithrombotic therapy (i.e., antiplatelet and anticoagulant agents) in ACS. NOACs have specific targets (factor Xa and thrombin) in the coagulation cascade, thereby reducing the formation and progression of thrombus (Fig. 1) [7,9]. Dual antiplatelet therapy (DAPT), combining aspirin and a P2Y12 receptor inhibitor, is recommended for secondary prevention of recurrent events after ACS; despite this benefit, the rate of recurrent cardiovascular events within 12 months may range from 9% to 12% [2,7,10]. Prolonging DAPT beyond 12 months provides only minimal thromboembolic prevention without a reduction in mortality risk [6]. The residual thrombotic risk may be attributed to elevated levels of thrombin and factor Xa by-products that linger for weeks or months after the acute coronary event, resulting in a prolonged hypercoagulable state [7]. Thus, it was plausible to consider adding an oral anticoagulant to DAPT (i.e., a dual-pathway approach) for secondary prevention, to reduce thrombin levels and improve clinical outcomes [1,2]. Early studies that combined vitamin K antagonists (VKAs) with aspirin in patients with acute or chronic coronary syndromes demonstrated a reduction in thrombotic complications, but at the cost of elevated bleeding risk [11-13]. NOACs may offer a supplemental role in managing ACS, such as in the secondary prevention of cardioembolic events [6]. Table 1 summarizes the key characteristics of the approved NOACs [8,14,15], and Fig. 2 presents the timeline of NOAC approvals and the key studies discussed below.
Acute Coronary Syndrome
The addition of NOACs to DAPT after ACS has been explored in various studies. The ESTEEM (Efficacy and Safety of the Oral Direct Thrombin Inhibitor Ximelagatran in Patients with Recent Myocardial Damage) Phase II randomized study evaluated the first antithrombin NOAC to emerge, ximelagatran, in patients with ACS. Patients who received aspirin alone were randomized to receive ximelagatran, in four different doses, or placebo. Ximelagatran significantly reduced the composite of death, myocardial infarction, and severe recurrent ischemia (i.e., the primary endpoint), with no significant difference in bleeding episodes between the ximelagatran and placebo arms [16]. An ESTEEM sub-study showed that ximelagatran provided long-term reduction of thrombin generation [17]. The drug was withdrawn from use because of significant hepatic side effects [16]. In another sub-study, ximelagatran was associated with a reduction in D-dimer, which is linked to cardiovascular complications [18]. Dabigatran was tested in the REDEEM (Randomized Dabigatran Etexilate Dose Finding Study in Patients with Acute Coronary Syndromes Post Index Event with Additional Risk Factors for Cardiovascular Complications Also Receiving Aspirin and Clopidogrel) Phase II trial as an add-on to DAPT after ACS events. Bleeding episodes were significantly more frequent than with placebo, while the secondary outcome measure (all-cause death) was significantly lower with dabigatran [19]. APPRAISE-1 (Apixaban for Prevention of Acute Ischemic and Safety Events) was a dose-finding study that randomized patients to four apixaban groups. The two higher-dose groups (20 mg once daily and 10 mg twice daily) were terminated early because of excessive bleeding. The study concluded that apixaban caused a dose-related increase in bleeding with only a trend towards fewer ischemic episodes: as an add-on to aspirin plus clopidogrel, apixaban caused more bleeding and less benefit in reducing ischemic events than in combination with aspirin alone [20]. Apixaban was examined further in the Phase III APPRAISE-2 trial, which was terminated early because of excessive bleeding events without significant benefit in terms of recurrent ischemic events [21]. The conclusion of the APPRAISE-2 trial did not change when the findings were analysed according to the background dual or single antiplatelet therapy [22]. Further analysis of the bleeding events in the APPRAISE-2 trial demonstrated that apixaban increased both short- and long-term bleeding complications, with the gastrointestinal tract the most frequent source of bleeding [23].
The ATLAS ACS-TIMI (Anti-Xa Therapy to Lower Cardiovascular Events in Addition to Standard Therapy in Subjects with Acute Coronary Syndrome-Thrombolysis In Myocardial Infarction) Phase II (ATLAS ACS-TIMI 46) and Phase III (ATLAS ACS 2-TIMI 51) trials showed that rivaroxaban reduced major ischemic episodes with a dose-dependent elevation in bleeding risk [24,25]. Several analyses of the ATLAS ACS 2-TIMI 51 trial have been performed. Rivaroxaban's benefit in reducing cardiovascular episodes appeared early and was maintained during treatment, without a significant rise in fatal bleeding [26]. The majority of myocardial infarction events, i.e., endpoints in ACS patients after stabilization, were spontaneous; rivaroxaban significantly reduced them, especially those associated with ST-segment elevation and substantial release of cardiac biomarkers [27]. The 2.5-mg twice-daily dose had more favourable safety and efficacy outcomes than the 5-mg twice-daily regimen [28]. A meta-analysis of four studies by Yuan and colleagues [29] (n = 40,148) found that combining rivaroxaban with antiplatelet therapy in patients presenting with ACS was an effective strategy but with a doubtful safety benefit. In the United States, unlike in Europe, the Food and Drug Administration has not labelled add-on rivaroxaban after ACS for secondary prevention, despite the reported benefit, because of the large burden of missing data in the ATLAS ACS 2-TIMI 51 trial [30]. Moreover, the increased risk of bleeding has meant this strategy is scarcely used. Komócsi et al. [31] pooled the results of seven randomized trials (n = 31,286) that used NOACs on top of antiplatelet therapy in ACS patients and found a three-fold increase in major bleeding (odds ratio (OR) 3.03; 95% CI: 2.20-4.16) without overall mortality or net clinical (i.e., composite of ischemic and major bleeding events) benefit. When rivaroxaban was combined with a P2Y12 receptor inhibitor instead of aspirin, the GEMINI-ACS-1 (Randomized, Double-Blind, Double-Dummy, Active-Controlled, Parallel-Group, Multicenter Study to Compare the Safety of Rivaroxaban Versus Acetylsalicylic Acid in Addition to Either Clopidogrel or Ticagrelor Therapy in Subjects With Acute Coronary Syndrome) trial concluded that low-dose rivaroxaban (i.e., 2.5 mg twice daily) had a bleeding and safety profile comparable to DAPT [32].
Edoxaban was not studied in combination with antiplatelet therapy in ACS patients. Darexaban combined with DAPT in the RUBY-1 (Randomized, Double-Blind, Placebo-Controlled Trial of the Safety and Tolerability of the Novel Oral Factor Xa Inhibitor Darexaban (YM150) Following Acute Coronary Syndrome) Phase II randomized trial significantly increased bleeding risk without an observed benefit in lowering cardiovascular events; further investigation of darexaban was therefore put on hold by the manufacturer [33]. With regard to the impact of antiplatelet therapy, Khan et al. [34] found in their meta-analysis that combining NOACs with a single antiplatelet agent neither decreased ischemic episodes nor increased bleeding complications, whereas adding NOACs to DAPT significantly increased bleeding (hazard ratio (HR) 2.24; 95% CI: 1.75-2.87) and modestly reduced major adverse cardiovascular events (HR 0.86; 95% CI: 0.78-0.93). On the other hand, Oldgren and colleagues [35] pooled efficacy and safety outcomes from the trials discussed above (ESTEEM, REDEEM, APPRAISE-1, APPRAISE-2, ATLAS ACS-TIMI 46, ATLAS ACS 2-TIMI 51, GEMINI-ACS-1, RUBY-1) and demonstrated that combining NOACs with dual or single antiplatelet therapy significantly reduced major adverse cardiovascular events [(HR 0.70; 95% CI: 0.59-0.84) and (HR 0.87; 95% CI: 0.80-0.95), respectively], but the combination with either antiplatelet regimen caused more clinically significant bleeding events [(HR 1.79; 95% CI: 1.54-2.09) and (HR 2.34; 95% CI: 2.06-2.66), respectively]. Heterogeneity was low between the trials, and the results did not differ when the analysis was restricted to Phase II trials [35]. Otamixaban was tested in the SEPIA-ACS1 TIMI 42 (Study Program to Evaluate the Prevention of Ischemia with Direct Anti-Xa Inhibition in Acute Coronary Syndromes 1-Thrombolysis in Myocardial Infarction 42) Phase II study against heparin plus eptifibatide in non-ST-segment elevation myocardial infarction. Parenteral otamixaban showed a trend towards fewer ischemic episodes without a difference in safety outcomes between the two arms [36]. Subsequently, the TAO (Treatment of Acute Coronary Syndromes with Otamixaban) Phase III trial did not confirm any benefit of otamixaban in decreasing the rate of ischemic episodes but found an increase in bleeding events [37]. Finally, the factor Xa inhibitor letaxaban was tested for tolerability and safety in the AXIOM ACS (Safety and Efficacy of TAK-442 in Subjects with Acute Coronary Syndromes) Phase II dose-ranging randomized trial. Compared with placebo, letaxaban in varying doses did not increase the major bleeding rate (i.e., the primary endpoint) or improve the efficacy endpoint [38]. There was no further testing of this agent in ACS. The key studies are summarized in Table 2 (Ref. [19-21,24,25,32]).
Chronic Coronary Syndrome
NOACs have also been investigated as monotherapy or combined with antiplatelet therapy in stable ischemic or atherosclerotic disease [8]. The large COMPASS (Cardiovascular OutcoMes for People Using Anticoagulation StrategieS) trial demonstrated that low-dose rivaroxaban (i.e., 2.5 mg twice daily) as an add-on to aspirin significantly reduced composite cardiovascular events and mortality by 24% and 18%, respectively, in comparison with aspirin monotherapy, but at the expense of a significant 70% rise in major bleeding events. Compared with aspirin alone, rivaroxaban 5 mg twice daily increased bleeding events without a difference in cardiovascular benefit [39]. The gastrointestinal tract (1-2%) was the most frequent source of major bleeding among study participants [40]. Patients from international registries who were deemed eligible for enrolment in the COMPASS trial experienced more cardiovascular adverse events than those who participated in the trial [41,42]. Given that heart failure may activate thrombin-associated pathways, it was hypothesized that rivaroxaban could decrease thrombin generation in patients with underlying CAD presenting with decompensated heart failure. In the COMMANDER-HF (A Study to Assess the Effectiveness and Safety of Rivaroxaban in Reducing the Risk of Death, Myocardial Infarction, or Stroke in Participants With Heart Failure and Coronary Artery Disease Following an Episode of Decompensated Heart Failure) trial, rivaroxaban (2.5 mg twice daily) did not significantly decrease cardiovascular complications in CAD patients presenting with decompensated heart failure [43]. A post-hoc analysis of the COMMANDER-HF trial concluded that rivaroxaban decreased the rate of thromboembolic events (HR 0.83; 95% CI: 0.72-0.96) [44], and another analysis found that rivaroxaban reduced transient ischemic attack or stroke rates versus placebo (adjusted HR 0.68; 95% CI: 0.49-0.94) with similar bleeding rates [45].
The two key studies in chronic coronary syndrome are summarised in Table 2.
Acute Coronary Syndrome and Atrial Fibrillation
AF is a common finding in acute or chronic coronary syndromes. Patients with AF can have a five-fold increase in stroke risk, which makes stroke prevention therapies such as anticoagulation the cornerstone of therapy [46]. Among individuals with CAD, the reported prevalence of AF is 12.5% [8], and in ACS, the incidence of AF ranges from 2% to 23% [46]. Five to 10% of patients presenting with ACS have AF and are using oral anticoagulation therapy [47]. Patients with AF and ACS have less favourable clinical outcomes [46,48]. Patients with concurrent myocardial infarction and AF usually have a higher stroke rate (3.1%) than those without AF (1.3%) [49]. As ACS requires DAPT, the presence of AF creates a challenging scenario in which healthcare providers must balance the risks and benefits of the indicated triple antithrombotic therapy (TAT) with regard to the prevention of ischemic episodes, stroke, stent thrombosis, systemic embolism, and bleeding [48].
The PIONEER AF-PCI (Open-Label, Randomized, Controlled, Multicenter Study Exploring Two Treatment Strategies of Rivaroxaban and a Dose-Adjusted Oral Vitamin K Antagonist Treatment Strategy in Subjects with Atrial Fibrillation who Undergo Percutaneous Coronary Intervention) trial compared low-dose rivaroxaban (15 mg once daily) plus a P2Y12 inhibitor, or very-low-dose rivaroxaban (2.5 mg twice daily) plus DAPT, with warfarin plus DAPT to detect differences in clinically significant bleeding between groups. Bleeding rates in the two rivaroxaban arms were significantly lower than in the warfarin arm [(HR 0.59; 95% CI: 0.47-0.76) and (HR 0.63; 95% CI: 0.50-0.80), respectively]. Rates of stroke, myocardial infarction, and cardiovascular death were similar between the trial arms. Of note, the most commonly used P2Y12 inhibitor was clopidogrel [50]. A post hoc analysis of the PIONEER AF-PCI trial showed that rivaroxaban in either regimen was associated with reduced recurrent hospitalization and all-cause mortality compared with traditional TAT in AF patients undergoing PCI [51]. In the RE-DUAL PCI (Randomized Evaluation of Dual Antithrombotic Therapy with Dabigatran versus Triple Therapy with Warfarin in Patients with Nonvalvular Atrial Fibrillation Undergoing Percutaneous Coronary Intervention) trial, a double or dual antithrombotic therapy (DAT) regimen (dabigatran plus clopidogrel or ticagrelor) showed lower major or clinically relevant non-major bleeding than TAT (warfarin plus DAPT). The dabigatran 110-mg regimen was superior to the TAT regimen for the primary outcome (HR 0.52; 95% CI: 0.42-0.63; p < 0.001 for noninferiority; p < 0.001 for superiority), while the 150-mg regimen was non-inferior to the TAT regimen (HR 0.72; 95% CI: 0.58-0.88; p < 0.001 for noninferiority). However, the trial was not powered to examine efficacy outcomes such as systemic thromboembolism or stent thrombosis. The comparison of the combined dabigatran doses with TAT showed non-inferiority for the rate of a composite of thromboembolic events, death, or unplanned revascularization (HR 1.04; 95% CI: 0.84-1.29; p = 0.005 for noninferiority) [52].
Apixaban was compared with VKAs in the two-by-two factorial AUGUSTUS (Aspirin Placebo in Patients with Atrial Fibrillation and Acute Coronary Syndrome or Percutaneous Coronary Intervention) trial. Patients on a P2Y12 inhibitor were assigned to apixaban or warfarin, and to aspirin or placebo. Apixaban was superior to warfarin in reducing major or clinically relevant non-major bleeding (HR 0.69; 95% CI: 0.58-0.81; p < 0.001 for noninferiority and superiority). Patients assigned to the aspirin arm showed a higher bleeding rate (HR 1.89; 95% CI: 1.59-2.24) than patients in the placebo arm. Rates of death or hospitalization and of ischemic events were similar between the aspirin and placebo groups. Apixaban showed ischemic outcomes similar to warfarin, with fewer composite death or hospitalization events (HR 0.83; 95% CI: 0.74-0.93) [53]. The SAFE-A (SAFety and Effectiveness trial of Apixaban use in association with dual antiplatelet therapy in patients with atrial fibrillation undergoing percutaneous coronary intervention) randomized controlled trial evaluated the withdrawal of the P2Y12 inhibitor from triple antithrombotic therapy after one or six months of therapy in AF patients undergoing PCI. The rate of the primary endpoint (i.e., any bleeding) did not differ between the study arms; however, enrolment was slow, which caused premature trial termination [54]. In the ENTRUST-AF PCI (Evaluation of the Safety and Efficacy of an Edoxaban-Based Compared to a Vitamin K Antagonist-Based Antithrombotic Regimen in Subjects With Atrial Fibrillation Following Successful Percutaneous Coronary Intervention With Stent Placement) trial, AF patients who underwent PCI were randomly assigned to edoxaban (60 mg once daily) plus a P2Y12 inhibitor or to a VKA plus DAPT. The edoxaban regimen was non-inferior to the VKA regimen for the composite of major or clinically relevant non-major bleeding (HR 0.83; 95% CI: 0.65-1.05; p = 0.0010 for non-inferiority) but failed to show superiority. Both regimens had similar ischemic outcomes [55]. The key studies are summarized in Table 3 (Ref. [50,52-55]).
In summary, TAT increased bleeding when compared with DAT, without significant differences in mortality or stroke outcomes. Although the four trials (PIONEER AF-PCI, RE-DUAL PCI, AUGUSTUS, and ENTRUST-AF PCI) showed the safety of DAT in the first year after PCI with regard to bleeding risk, they were not powered to assess efficacy outcomes such as myocardial infarction, stroke, and cardiovascular death. A meta-analysis of the four randomized studies (n = 10,969) concluded that the combination of antiplatelet agents with NOACs caused 37% lower major bleeding rates than warfarin (relative risk 0.63; 95% CI: 0.50-0.80) without increasing thrombotic or ischemic episodes [56]. Similarly, a meta-analysis (n = 10,234) showed that DAT caused lower major or clinically relevant non-major bleeding rates than TAT (risk ratio (RR) 0.66; 95% CI: 0.56-0.78) but at the expense of more stent thrombosis events (RR 1.59; 95% CI: 1.01-2.50) [57]. Another meta-analysis reported similar findings: DAT was associated with less major bleeding (OR 0.598; 95% CI: 0.491-0.727) and more stent thrombosis episodes (OR 1.672; 95% CI: 1.022-2.733) when compared with TAT [58]. NOAC-based DAT, compared with VKA-based TAT, was associated with fewer intracranial haemorrhage events (RR 0.33; 95% CI: 0.17-0.65) [57], consistent with the lower major bleeding risk reported for NOAC-based versus VKA-based regimens (OR 0.577; 95% CI: 0.477-0.698) [58]. Both DAT and TAT regimens showed comparable mortality and stroke rates [57,58].
Chronic Coronary Syndrome and Atrial Fibrillation
The first trial to compare rivaroxaban alone with rivaroxaban plus single or dual antiplatelet agent(s) in stable CAD was the Japanese AFIRE (Atrial Fibrillation and Ischemic Events with Rivaroxaban in Patients with Stable Coronary Artery Disease) trial. Stable CAD was defined as having undergone PCI or coronary artery bypass grafting (CABG) more than one year earlier or having confirmed CAD not requiring revascularization. Monotherapy with rivaroxaban was non-inferior to DAT for the composite of death, myocardial infarction, stroke, systemic embolism, or unstable angina requiring revascularization (HR 0.72; 95% CI: 0.55-0.95) and was superior for major bleeding (HR 0.59; 95% CI: 0.39-0.89). The trial was terminated early because of unexpectedly increased death in the combination group (Table 3) [60]. While the AFIRE trial is the only one powered to detect efficacy outcomes, it is necessary to note its dissimilarities from earlier trials in ACS, such as the enrolment of only Japanese participants, the rivaroxaban doses adopted (i.e., 10 or 15 mg, as approved in Japan), and the inclusion of patients with stable CAD. Another Japanese study, the OAC-ALONE (Optimizing Antithrombotic Care in Patients With Atrial Fibrillation and Coronary Stent) trial, also examined oral anticoagulants alone in comparison with the combination of an oral anticoagulant and an antiplatelet agent, but only 26% of patients were on NOACs [61]. The study did not demonstrate the non-inferiority of oral anticoagulation monotherapy to combined therapy in patients with concurrent AF and stable CAD one year after PCI; however, the study was underpowered owing to premature termination of enrolment (Table 3) [61].
Periprocedural Management
The adequacy of NOAC efficacy during angioplasty has been reported with conflicting findings between studies. A preclinical study demonstrated that peak dabigatran levels were insufficient to inhibit catheter-induced thrombosis unless additional heparin was administered [65]. Vranckx et al. [66] investigated the efficacy of dabigatran in suppressing coagulation during elective angioplasty in patients who had been using NOACs for a long period. In an exploratory Phase II study (n = 50), pre-procedural dabigatran 110 mg or 150 mg twice daily, in comparison with a standard heparin regimen, did not sufficiently suppress coagulation during PCI. The insufficient effect was evident from elevated prothrombin fragment 1+2 and thrombin-antithrombin complex levels, in addition to more bailout anticoagulation required with dabigatran because of adverse clinical outcomes (e.g., stent thrombosis and myocardial infarction) [66]. On the other hand, data from the Dresden NOAC registry showed that either short-term interruption or continuation of NOACs during invasive procedures was safe [67]. Furthermore, Vranckx et al. [68] found that rivaroxaban (either 10 or 20 mg, with or without heparin) was more effective than standard heparin in suppressing coagulation during angioplasty in the X-PLORER (Exploring the Efficacy and Safety of Rivaroxaban to Support Elective Percutaneous Coronary Intervention) trial, an exploratory Phase II trial (n = 108). With rivaroxaban, there were low levels of prothrombin fragment 1+2 and thrombin-antithrombin complexes, without bailout anticoagulation or thrombotic or bleeding events [68].
In-Stent Thrombosis
The incidence of in-stent thrombosis after angioplasty usually ranges between 0.6% and 3.3% at up to one year of follow-up, regardless of stent type. In high-risk populations, the incidence may be higher after drug-eluting stent implantation: 2.7% within one month and from 5.2% to 8.3% at 1-5 years of follow-up. Although the incidence is considered relatively low, mortality has been reported in approximately 10% to 25% of affected patients at one-year follow-up. The formed in-stent thrombi contain both platelets and fibrin, suggesting that the sequence of platelet activation and thrombin generation resembles thrombus formation in ACS [69]. In the APPRAISE-2 trial, the incidence of stent thrombosis did not differ significantly between the apixaban and placebo arms; however, the trial was terminated early because of excessive bleeding with apixaban [21]. On the other hand, rivaroxaban, in ATLAS ACS 2-TIMI 51, decreased stent thrombosis events by 31% (HR 0.69; 95% CI: 0.51-0.93) [25]. This benefit was confirmed when the outcomes of only the stented patients in the ATLAS ACS 2-TIMI 51 trial were analysed separately (HR 0.65; p = 0.017). When the results were broken down by rivaroxaban dose, the 2.5-mg twice-daily dose reduced definite or probable stent thrombosis events (HR 0.61; p = 0.023), a benefit that was not observed with the 5-mg twice-daily dose (p = 0.89). In addition, the twice-daily 2.5-mg dose also showed a favourable mortality outcome (HR 0.56; 95% CI: 0.35-0.89). However, the reduction in stent thrombosis by the combined rivaroxaban doses was not maintained beyond the active DAPT duration, i.e., in participants on aspirin as a single antiplatelet agent (HR 0.68; 95% CI: 0.50-0.92). Thus, rivaroxaban may only be effective with DAPT (Table 4, Ref. [21,25,70]) [70]. A preclinical study that examined rivaroxaban alone or combined with DAPT reported consistent results [71].
Post Coronary Artery Bypass Grafting
Early graft failure after CABG surgery occurs in 30% of patients. Lamy et al. [72] conducted a pre-planned sub-study (n = 1448) of the COMPASS trial (COMPASS-CABG) to examine rivaroxaban (either alone or combined with aspirin) for the prevention of early graft failure after CABG. The rivaroxaban regimens did not lower the graft failure rate, but the 2.5-mg twice-daily rivaroxaban dose combined with aspirin was associated with numerically lower major adverse cardiovascular events than aspirin alone (HR 0.69; 95% CI: 0.33-1.47) (Table 4) [72].
Myocardial Injury after Noncardiac Surgery
Myocardial injury after non-cardiac surgery (MINS) is defined as myocardial infarction coupled with an isolated ischemic rise in cardiac troponin, usually occurring within 30 days after surgery, and excludes non-ischemic causes such as AF, sepsis, or pulmonary embolism. MINS is correlated with a four-fold increased death rate at 30 days and with increased death and cardiovascular complications at two years after surgery. Devereaux et al. [73], in their MANAGE (Management of Myocardial Injury After Non-Cardiac Surgery) trial (n = 1754), concluded that dabigatran (110 mg twice daily) lowered major vascular complications (11% vs 15%; HR 0.72; 95% CI: 0.55-0.93) without an increase in bleeding (3% vs 4%; HR 0.92; 95% CI: 0.55-1.53) (Table 4).
Prevention of LV Thrombus
The routine short-term use of anticoagulants to prevent the formation of LV thrombus after myocardial infarction should be individualized, weighing the advantages and disadvantages of this approach, as it is not supported by robust evidence. Published observational studies showed no benefit in terms of major adverse cardiovascular events, but rather increased major bleeding episodes. VKAs, particularly warfarin, have been the traditional agents of choice [74]. Recently, an open-label study (n = 279) by Zhang et al. [79] supported the 30-day use of low-dose rivaroxaban (i.e., 2.5 mg twice daily) on top of DAPT to decrease the chance of LV thrombus formation following anterior myocardial infarction compared with DAPT alone (0.7% vs 8.6%; HR 0.08; 95% CI: 0.01-0.62), without increased bleeding risk between the study arms at the prespecified follow-up periods (Table 5, Ref. [79]).
Treatment of LV Thrombus
An LV thrombus formed after myocardial infarction is a source of further thromboembolic events, with an estimated 5.5-fold increase in risk compared with no thrombus. If left untreated, the annual rate of systemic embolization and stroke is approximately 10% to 15% [74]. Moreover, the presence of LV thrombus may increase mortality risk; LV thrombus regression under anticoagulation therapy has been shown to reduce mortality [80]. International guidelines consider VKAs the first-choice treatment for LV thrombus, with little guidance on the use of NOACs as an alternative to warfarin in this scenario [74]. The off-label use of NOACs to treat LV thrombus has increased substantially since 2020 [81]. Earlier reports were limited to case reports or series, their meta-summaries [82,83], or single-centre experience [84]. In a meta-summary of case reports, rivaroxaban accounted for 47.2% of NOAC use, whereas 27.8% of patients used dabigatran and 25% used apixaban [82]. LV thrombus resolution occurred in 88% to 92% of patients within a median of 30-32 days [82,83]. Overall, NOACs seemed effective and safe in treating patients with LV thrombus [82-84].
Low-Dose NOACs
The addition of an oral anticoagulant to the pharmacological management of ACS has been promising, particularly with low-dose regimens intended to optimize benefit and reduce bleeding risk [2]. However, NOAC studies in patients with AF undergoing PCI were powered for safety rather than efficacy outcomes; thus, the protection against stroke in AF patients presenting with ACS or undergoing PCI is undetermined and may be unsuitable [102]. Rubboli et al. [103] examined the interpretation of lower NOAC doses in non-valvular AF by distributing a 14-statement questionnaire to physicians of different specialties. There was wide agreement regarding the clinical implications of using lower doses of factor Xa inhibitors but not of dabigatran [103]. Cappato et al. [104] evaluated the effect of NOAC dose selection on all-cause mortality risk by pooling data from four major trials, including ATLAS ACS 2-TIMI 51, REDEEM, COMPASS, and the CAD sub-study of the edoxaban landmark study in AF (n = 49,125), in which all patients had established atherosclerosis. The lower NOAC dose, but not the higher NOAC dose (RR 0.95; 95% CI: 0.87-1.05), was associated with a significantly lower all-cause mortality rate (RR 0.80; 95% CI: 0.73-0.89) than control. In addition, when lower and higher NOAC doses were compared directly, the benefit of the lower dose was confirmed (RR 0.84; 95% CI: 0.76-0.93) [104]. Szapáry et al. [105], in their meta-analysis of 15 randomized studies (n = 73,536), analysed the efficacy and safety of the therapeutic options. The risk of major adverse cardiac events was significantly reduced with apixaban and dabigatran use [(RR 0.75; 95% CI: 0.58-0.98) and (RR 0.56; 95% CI: 0.39-0.80), respectively], but not with edoxaban, rivaroxaban, or VKA use. Their use was associated with a significant increase in the risk of bleeding (RR 5.47, 3.66, and 1.66, respectively). With reduced NOAC doses, there was a non-significant tendency towards reduced bleeding but an increased risk of major adverse cardiac events [105].
NOACs in Combination with Antiplatelet Therapy
The evidence on the efficacy of NOACs combined with antiplatelet therapy is still conflicting. Szapáry et al. [105], in their meta-analysis, analysed the use of NOACs with aspirin, which did not reduce the risk of major adverse cardiac events but was associated with a trend towards a non-significant increase in bleeding risk (66%). As low-dose rivaroxaban combined with aspirin and clopidogrel lowered cardiovascular adverse events in ACS patients, intensifying the antiplatelet regimen by using ticagrelor or prasugrel instead of clopidogrel might also enhance efficacy, but this warrants investigation [2]. The components and optimal duration of the antithrombotic regimen (i.e., DAT or TAT) in ACS patients with or without AF are still debatable [106]. A post hoc analysis of the AUGUSTUS trial reported that the use of aspirin for up to 30 days after ACS resulted in more bleeding but fewer ischemic events (i.e., an equal trade-off) than placebo, whereas its use after 30 days and up to six months caused more bleeding with similar ischemic event rates [107]. In AF patients aged 65 years or older who underwent PCI (n = 4959), Hess et al. [108] found that 27.6% of patients were discharged on TAT. In comparison with DAPT, patients who received TAT experienced significantly more bleeding requiring hospitalization (adjusted HR 1.61; 95% CI: 1.31-1.97) and intracranial haemorrhage (adjusted HR 2.04; 95% CI: 1.25-3.34), without a difference in the risk of major adverse cardiac events (adjusted HR 0.99; 95% CI: 0.86-1.16) [108]. Overall, it is acceptable to consider a one-week duration of TAT in AF patients with low ischemic risk who underwent uncomplicated PCI, and a longer period (e.g., four to six weeks) for patients with higher thrombotic risk. The subsequent DAT may be continued for six to 12 months according to patient risk factors [106].
Risk of Myocardial Infarction
NOACs may have a more balanced benefit-risk profile than warfarin. However, the RE-LY (Randomized Evaluation of Long-term Anticoagulant Therapy) trial in atrial fibrillation reported a higher rate of myocardial infarction with dabigatran than with warfarin [(relative risk 1.35; 95% CI: 0.98-1.87 for the 110-mg regimen) and (relative risk 1.38; 95% CI: 1.00-1.91 for the 150-mg regimen)] [109]. In the RE-DUAL PCI trial, there was a non-significantly higher myocardial infarction rate in the dabigatran group; however, the study was not powered to detect a difference in ischemic episodes between the study arms [52]. In contrast, there was a numerically lower rate of myocardial infarction events with factor Xa inhibitor use [110]. A meta-analysis of nine trials (n = 53,827) of any NOAC indication concluded that rivaroxaban was associated with a significantly reduced myocardial infarction risk (OR 0.82; 95% CI: 0.72-0.94) in comparison with any control (i.e., warfarin, enoxaparin, or placebo), which was confirmed by trial sequential analysis [111]. Real-world evidence has not confirmed the reported myocardial infarction risk with dabigatran use [112]. As an example, Lee et al. [113] used the Danish registers to investigate the risk of myocardial infarction in association with NOAC and VKA use in patients with AF (n = 31,739). The standardized one-year risk of myocardial infarction was 1.6% (95% CI: 1.3-1.8), 1.2% (95% CI: 0.9-1.4), 1.2% (95% CI: 1.0-1.5), and 1.1% (95% CI: 0.8-1.3) for VKAs, apixaban, dabigatran, and rivaroxaban, respectively. In direct comparisons between individual NOACs there were no differences in myocardial infarction risk, and in comparison with VKAs, all NOACs were associated with significantly lower risk [113]. On the other hand, evidence from meta-analyses reported an increased myocardial infarction risk in association specifically with dabigatran use [114-116]. Kupó et al. [117] pooled the data of 28 randomized trials (n = 196,761) in a network meta-analysis and demonstrated that, in comparison with dabigatran, the use of rivaroxaban (relative risk 0.70; 95% credible interval (CrI): 0.53-0.89), apixaban (0.76; 95% CrI: 0.58-0.99), or VKAs (0.81; 95% CrI: 0.65-0.98) was associated with reductions in the relative risk of myocardial infarction. In addition, rivaroxaban was also associated with a reduction in myocardial infarction risk in comparison with placebo (relative risk 0.79; 95% CrI: 0.65-0.94), and its computed probability of being the first or best treatment option was 61.8% [117]. Grajek et al. [112] conducted a meta-analysis of eight randomized trials (n = 81,943), two landmark Phase III trials for each of the four NOACs: one pivotal trial in AF patients and another in AF patients undergoing PCI. The rate of myocardial infarction was 2.1% across all patients. In comparison with warfarin, dabigatran was associated with a significant 38% increase in the risk of myocardial infarction regardless of dose, whereas the factor Xa inhibitors (apixaban, edoxaban, rivaroxaban) were associated with a non-significant trend towards reducing the risk by 4-5%, with a significant difference between dabigatran and the factor Xa inhibitors. In addition, the authors estimated a ranking of the tested agents' effectiveness in lowering myocardial infarction risk, i.e., protection from myocardial infarction. The weakest effectiveness was for dabigatran (8% for the 110-mg and 14% for the 150-mg regimen) and the highest was for rivaroxaban 15 mg (90%) and apixaban 5 mg (80%), which might not support the concept of a class effect within the NOAC group [112]. Several mechanisms have been postulated for the increased risk of myocardial infarction in association with dabigatran, which may have pro-thrombotic effects [110]. Direct thrombin inhibition by dabigatran is weaker than that of warfarin and is dependent on dabigatran's serum level; a paradoxical generation of thrombin can occur when its level decreases. This hypercoagulability paradox may occur through suppression of the thrombin-thrombomodulin complex and inhibition of protein C activation, potentiating a negative feedback cycle. In the presence of increased tissue factor levels resulting from plaque rupture, the thrombin-drug complex may cleave [112]. In vitro, dabigatran potentiated platelet adhesion and enhanced thrombosis on human plaque material, an effect that depends on altered platelet thrombin-glycoprotein Ibα interaction augmenting von Willebrand factor binding. In addition, dabigatran may also potentiate thrombin-induced platelet aggregation. Analysis of platelet protease-activated receptors (PAR) demonstrated that dabigatran can acutely inhibit thrombin-induced PAR-1 activation, cleavage, and internalization in a dose-dependent fashion [110]. Moreover, prolonged thrombin inactivation by dabigatran may enhance PAR-1 surface expression [110,112]. Inflammatory markers may also increase during treatment with direct thrombin inhibitors [112]. Conversely, in vitro rivaroxaban decreased platelet aggregation triggered by tissue factor or thrombin receptor-activating peptide [110].
Areas of Uncertainty and Future Direction
NOACs showed benefit in the secondary prevention of major adverse cardiovascular outcomes after ACS, given the potential role of thrombin and other relevant factors in the coagulation process; however, this benefit was counteracted by major bleeding complications [8]. It remains uncertain whether triple therapy with low-dose rivaroxaban can be extended beyond the first year, or whether low-dose rivaroxaban may be combined with DAPT using aspirin and ticagrelor/prasugrel instead of clopidogrel [6]. In the presence of comorbid AF, the optimal antithrombotic therapy beyond 12 months after the ACS event is still uncertain; the current experts' consensus is to continue NOAC monotherapy after dropping the antiplatelet therapy [8]. The AQUATIC (Assessment of Quitting Versus Using Aspirin Therapy In Patients Treated With Oral Anticoagulation for Atrial Fibrillation With Stabilized Coronary Artery Disease; NCT04217447) trial may address the limitations of the published AFIRE study in patients with AF and stable CAD. In addition, the optimal duration of TAT before switching to DAT, and the combination of NOACs with P2Y12 inhibitors, remain to be confirmed [6]. The ongoing RT-AF randomized study is investigating the combination of rivaroxaban and ticagrelor in AF patients presenting with ACS and undergoing PCI (NCT02334254) [118]. The results of a feasibility study on the efficacy and safety of rivaroxaban in the acute phase of ACS, in comparison with enoxaparin 1 mg/kg twice daily, have just been published and showed non-inferiority of rivaroxaban 5 mg twice daily to standard subcutaneous enoxaparin; the study provided important information for designing future trials with adequate sample sizes [119]. The current evidence for the use of NOACs in managing LV thrombus is limited to small-scale studies; larger randomized trials are vital to support the effectiveness and safety of NOACs in preventing and treating LV thrombus. Three studies are testing rivaroxaban in the treatment of LV thrombus (NCT03764241, NCT04970381, NCT04970576) and two more in the prevention of thrombus formation after acute myocardial infarction (NCT03786757, NCT05077683). Finally, more basic studies are also needed to confirm or refute the hypothesis of increased myocardial infarction risk in association with direct thrombin inhibition and to establish whether the risk is reduced with factor Xa inhibition [110].
Conclusions
Despite optimal antiplatelet therapy in ACS, cardiovascular events may recur, in part because of thrombin generation. Adjunctive NOAC therapy has the potential to prevent thrombus formation, in the presence or absence of AF, but at the expense of more bleeding episodes. NOACs have also shown promising efficacy in the management of LV thrombus and a potential benefit in preventing stent thrombosis after PCI. Taken as a whole, NOACs are increasingly used for off-licence indications and continue to evolve as essential therapies for preventing and treating thrombotic events. An unmet need for a more active and possibly more targeted anticoagulation strategy remains in the treatment of ACS.
Fig. 2. Timeline of non-vitamin K antagonist oral anticoagulant approval and key studies. FDA, Food and Drug Administration; NOACs, non-vitamin K antagonist oral anticoagulants.
ACS, acute coronary syndrome; AF, atrial fibrillation; AFIRE, Atrial Fibrillation and Ischemic Events with Rivaroxaban in Patients with Stable Coronary Artery Disease; APPRAISE, Apixaban for Prevention of Acute Ischemic and Safety Events; ATLAS ACS-TIMI, Anti-Xa Therapy to Lower Cardiovascular Events in Addition to Standard Therapy in Subjects with Acute Coronary Syndrome-Thrombolysis In Myocardial Infarction; AUGUSTUS, Aspirin Placebo in Patients with Atrial Fibrillation and Acute Coronary Syndrome or Percutaneous Coronary Intervention; AXIOM ACS, Safety and Efficacy of TAK-442 in Subjects with Acute Coronary Syndromes; CABG, coronary artery bypass grafting; CAD, coronary artery disease; COMMANDER-HF, A Study to Assess the Effectiveness and Safety of Rivaroxaban in Reducing the Risk of Death, Myocardial Infarction, or Stroke in Participants With Heart Failure and Coronary Artery Disease Following an Episode of Decompensated Heart Failure; COMPASS, Cardiovascular OutcoMes for People Using Anticoagulation StrategieS; CrI, credible interval; DAPT, dual antiplatelet therapy; DAT, double or dual antithrombotic therapy; ENTRUST-AF PCI, Evaluation of the Safety and Efficacy of an Edoxaban-Based Compared to a Vitamin K Antagonist-Based Antithrombotic Regimen in Subjects With Atrial Fibrillation Following Successful Percutaneous Coronary Intervention With Stent Placement; ESTEEM, Efficacy and Safety of the Oral Direct Thrombin Inhibitor Ximelagatran in Patients with Recent Myocardial Damage; GEMINI-ACS-1, Randomized, Double-Blind, Double-Dummy, Active-Controlled, Parallel-Group, Multicenter Study to Compare the Safety of Rivaroxaban Versus Acetylsalicylic Acid in Addition to Either Clopidogrel or Ticagrelor Therapy in Subjects With Acute Coronary Syndrome; HR, hazard ratio; LV, left ventricular; MANAGE, Management of Myocardial Injury After NonCardiac Surgery; MINS, myocardial injury after non-cardiac surgery; NOACs, non-vitamin K antagonist oral anticoagulants; OAC-ALONE, Optimizing Antithrombotic Care in Patients With Atrial Fibrillation and Coronary Stent; PCI, percutaneous coronary intervention; PIONEER AF-PCI, Open-Label, Randomized, Controlled, Multicenter Study Exploring Two Treatment Strategies of Rivaroxaban and a Dose-Adjusted Oral Vitamin K Antagonist Treatment Strategy in Subjects with Atrial Fibrillation who Undergo Percutaneous Coronary Intervention; OR, odds ratio; PAR, protease-activated receptors; REDEEM, Randomized Dabigatran Etexilate Dose Finding Study in Patients with Acute Coronary Syndromes Post Index Event with Additional Risk Factors for Cardiovascular Complications Also Receiving Aspirin and Clopidogrel; RE-DUAL PCI, Randomized Evaluation of Dual Antithrombotic Therapy with Dabigatran versus Triple Therapy with Warfarin in Patients with Nonvalvular Atrial Fibrillation Undergoing Percutaneous Coronary Intervention; RE-LY, Randomized Evaluation of Long-term Anticoagulant Therapy; RR, risk ratio; RUBY-1, Randomized, Double-Blind, Placebo-Controlled Trial of the Safety and Tolerability of the Novel Oral Factor Xa Inhibitor Darexaban (YM150) Following Acute Coronary Syndrome; SAFE-A, SAFety and Effectiveness Trial of Apixaban Use in Association with Dual Antiplatelet Therapy in Patients with Atrial Fibrillation Undergoing Percutaneous Coronary Intervention; SEPIA-ACS1 TIMI 42, Study Program to Evaluate the Prevention of Ischemia with Direct Anti-Xa Inhibition in Acute Coronary Syndromes 1-Thrombolysis in Myocardial Infarction 42; TAO, Treatment of Acute Coronary Syndromes with Otamixaban; TAT, triple antithrombotic therapy; VKAs, vitamin K antagonists; WOEST, What is the Optimal antiplatElet & Anticoagulant Therapy in Patients With Oral Anticoagulation and Coronary StenTing; X-PLORER trial, Exploring the Efficacy and Safety of Rivaroxaban to Support Elective Percutaneous Coronary Intervention. | 2023-07-12T07:11:56.074Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "0bdf834c8bd8caf1a66a5ec0f8677efd934241c1",
"oa_license": "CCBY",
"oa_url": "https://www.imrpress.com/journal/RCM/24/6/10.31083/j.rcm2406180/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75c621a302c2638b7632adc5a6895856e82a9cf6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261390519 | pes2o/s2orc | v3-fos-license | Design and Experiments of a New Internal Cone Type Traveling Wave Ultrasonic Motor
In order to simplify the motor structure, reduce the difficulty of applying rotor pre-pressure, and obtain better output performance, a new internal cone type rotating traveling wave ultrasonic motor is proposed. A parametric model of the internal cone type ultrasonic motor was established in the ANSYS finite element software. The ultrasonic motor consists of an internal cone type vibrator and a tapered rotor. A dynamic analysis of the motor vibrator was carried out, and two orthogonal in-plane third-order bending modes with the same frequency were selected as the working modes. A further advantage of this motor is that the pre-pressure can be applied simply by the weight of the rotor. A prototype was trial-manufactured and experimentally tested for its vibration characteristics and output performance. At an excitation frequency of 22260.0 Hz, a pre-pressure of 0.1 N, and a peak-to-peak excitation voltage of 300 V, the maximum output torque of the prototype is 1.06 N·mm, and the maximum no-load speed reaches 441.2 rpm. The optimal pre-pressure under different loads is studied, and the influence of the pre-pressure on the mechanical properties of the ultrasonic motor is analyzed. These findings are instructive for the practical application of this ultrasonic motor.
Introduction
The ultrasonic motor is a new type of microtechnical motor that uses the inverse piezoelectric effect of piezoelectric materials to produce ultrasonic-frequency vibration in the vibrator, and uses the friction between the vibrator and the rotor to achieve rotary, linear, or multi-degree-of-freedom motion. Ultrasonic motors have the advantages of a simple structure, high power density, fast response, no electromagnetic radiation, and high positioning accuracy (Zhao, 2011). Therefore, more and more scholars have explored and researched them from the aspects of structural design, drive control, and friction materials, and have achieved notable results (Tian et al.).

From the viewpoint of vibration characteristics, ultrasonic motors can be divided into standing wave and traveling wave types. In commercial applications, the latter are widely used because of their high efficiency and simpler drive control. From the viewpoint of motion output, they can be divided into rotary, linear, and multi-degree-of-freedom ultrasonic motors (Ryndzionek, Sienkiewicz, 2021). Among them, rotary ultrasonic motors are the best developed and the technology is the most mature. Among the various types of ultrasonic motors, squiggle and in-plane bending traveling wave ultrasonic motors are often well suited to miniaturization and integration (Xu et al., 2021; Lu et al., 2020; Mashimo, Oba, 2022; Li et al., 2021).
An important factor affecting the application of ultrasonic motors is the overall structural size. A millimeter-scale thick-film rotating traveling wave ultrasonic motor based on a chemical-mechanical thinning and polishing process has therefore been proposed (Zhang et al., 2022). The motor works in the B02 mode at a resonant frequency of 26.2 kHz and can achieve stable bidirectional rotation under the excitation of four sinusoidal voltages. With an excitation voltage of 50 Vp-p and a preload force of 0.686 mN, the maximum speed reaches 766 rpm. A miniature flat cross-shaped rotating ultrasonic motor has also been designed and manufactured (Čeponis et al., 2020). This motor rotates the rotor by exciting the first-order in-plane bending vibration of the cross-shaped vibrator. Experimental results show a maximum speed of 972.62 rpm at a peak-to-peak voltage of 200 V under a preload force of 22.65 mN. The miniature cross-shaped motor can be mounted directly on a printed circuit board or integrated into other systems with limited installation space.
The oblate-type ultrasonic motor, in great demand in small-scale robotics, fuzing, and biomedical technology, has nonetheless seen comparatively little development. A flat ultrasonic micro-motor with multilayer piezoelectric ceramics and chamfered driving tips has been proposed to realize a low-voltage drive for ultrasonic motors (Zhao et al., 2016). The vibrator is fabricated from a multilayer piezoelectric ceramic glued to a copper ring 0.5 mm thick, with six driving tips distributed around the ring. The driving tips are chamfered in the proper direction and are 1 mm high. The motor works smoothly and reaches a rotation speed of about 2000 rpm at a voltage amplitude of 20 Vp-p, showing high speed but low load capacity.
As these studies show, many authors have focused on motor miniaturization and structural innovation. This paper therefore proposes an internal cone type rotating traveling wave ultrasonic motor, which consists of an internal cone type vibrator and a tapered rotor and uses friction to drive the rotor in rotational motion. The internal cone type vibrator and the tapered rotor are in trapezoidal-tooth contact with each other, which promotes smooth operation of the motor while providing a large output speed and output torque.
Ultrasonic motor structure and working principle
Ultrasonic motor structure
The structure of the internal cone type ultrasonic motor vibrator is shown in Fig. 1. The internal cone type vibrator is based on a cylindrical structure with a tapered hole inside. Several uniform inner trapezoidal teeth are machined inside the cylinder, which helps to enlarge the amplitude of the inner surface in the circumferential direction. The vibrator has 45 teeth, and the width of each tooth slot is 0.2 mm. Four rectangular piezoelectric ceramic sheets of 8 × 4 × 1 mm are bonded to the outer surface of the internal cone type vibrator. The diameter of the outer cylindrical surface of the vibrator is set to 30 mm. As shown in Fig. 1, the tapered rotor contacts the internal cone type vibrator along beveled tooth surfaces. This differs markedly from the point-contact structure between vibrator and rotor in earlier motors: it ensures stable contact between the vibrator and the rotor, reduces energy loss, dissipates heat well, and avoids the unstable operation and small driving torque that have affected ultrasonic motors in the past. The polarization directions of the two groups of piezoelectric ceramic sheets are shown in Fig. 1.
Bending vibration of cylindrical shells
The piezoelectric vibrator described in this paper is a thin-walled structure, and its vibration modes can be analyzed using cylindrical shell vibration theory. The coordinate system of the cylindrical shell, with its radial, angular, and axial coordinates, is shown in Fig. 2. The vibration displacement is assumed to have tangential and radial components only. The displacement distribution of the in-plane vibration mode is constant along the axial direction and, because of the thin-walled structure, is also taken as constant along the radial direction, so each displacement component is a function of the angular coordinate alone. Soedel (2004) gives the equations for the in-plane free vibration of a cylindrical shell in terms of the short-cylinder correlation constant, the radial thickness h, the neutral-plane radius R, the material density ρ, the material Poisson's ratio µ, and the material Young's modulus E. According to the periodicity of the ring structure, the solutions take the form of circumferential harmonics with circular frequency ω and amplitude coefficients A_n1, A_n2, and B_n. Substituting this form into the vibration equation yields a frequency equation whose roots are ω_n1, the natural frequency of the n-th-order in-plane extensional mode, and ω_n2, the natural frequency of the n-th-order in-plane bending mode; B_n1 and B_n2 are the corresponding in-plane bending mode amplitudes of the short cylinder.

Principle of operation

Figure 3 shows the working principle of the internal cone ultrasonic motor proposed in this paper. The internal cone type vibrator structure has a certain symmetry. When the two-phase piezoelectric ceramic sheets arranged at 90° intervals are excited by sine and cosine voltages, respectively, the vibrator develops a third-order bending resonance, and two standing waves are excited on the vibrator:

w_A(θ, t) = W cos(nθ) cos(ωt),    w_B(θ, t) = W sin(nθ) sin(ωt),

which superimpose to give a bending traveling wave:

w(θ, t) = w_A + w_B = W cos(nθ − ωt),

where W is the amplitude of the A- and B-phase vibrations, n is the modal order of the bending vibration, θ is the angular coordinate along the circumferential direction, and ω is the natural frequency of the third-order bending mode.
For the third-order bending mode, the two-phase ceramic sheets are separated by three-quarters of a wavelength. When the two equal-amplitude standing waves excited by phases A and B differ in temporal phase by π/2, they superimpose on the internal cone type vibrator to form a traveling wave running in the circumferential direction. Once the traveling wave is formed, the two orthogonal in-plane third-order bending modes of the same frequency combine to generate an elliptical motion trajectory at each particle on the inner tooth surface (a numerical check of this superposition is sketched below). Finally, under a certain pre-pressure, the friction coupling between the inner teeth and the tapered rotor drives the rotary motion of the rotor.
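The superposition above is easy to verify numerically. The short Python sketch below confirms that the two quadrature standing waves sum to a traveling wave and that a surface particle traces an ellipse; the 1/n amplitude ratio between the tangential and radial components is an assumption of inextensional ring bending, not a value given in the paper.

```python
import numpy as np

W, n, omega = 1.0, 3, 2 * np.pi * 22260.0   # amplitude, mode order, rad/s
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
t = 1.7e-5

# Superposition of the two standing waves equals a traveling wave.
w_sum = W * np.cos(n * theta) * np.cos(omega * t) \
      + W * np.sin(n * theta) * np.sin(omega * t)
w_trav = W * np.cos(n * theta - omega * t)
assert np.allclose(w_sum, w_trav)

# Surface-particle trajectory at a fixed point theta0, assuming inextensional
# bending (tangential amplitude reduced by 1/n relative to the radial motion).
theta0 = 0.3
times = np.linspace(0, 2 * np.pi / omega, 200)
u_r = W * np.cos(n * theta0 - omega * times)          # radial displacement
u_t = -(W / n) * np.sin(n * theta0 - omega * times)   # tangential displacement
# Ellipse check: (u_r/W)^2 + (u_t*n/W)^2 == 1 at every instant.
assert np.allclose((u_r / W) ** 2 + (u_t * n / W) ** 2, 1.0)
print("traveling wave formed; particle ellipse semi-axes:", W, W / n)
```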
Finite element simulation of piezoelectric vibrator
In this paper, modal and harmonic response analyses were performed with the ANSYS finite element software to design and build an internal cone type vibrator model. Figure 4 shows the two third-order bending vibration patterns of the designed tapered vibrator under free boundary conditions. In selecting the working mode, the modal analysis results show that the third-order bending resonance modes of the vibrator are not only orthogonal but also closely matched in frequency. At the same time, the amplitude of the low-order mode is larger than that of the higher-order modes.

In order to ensure that the vibrator has no interference modes over a sufficiently wide working frequency band, the harmonic response of the ultrasonic motor vibrator was analyzed in ANSYS. An excitation signal with a peak value of 40 V over the frequency range 20000 Hz to 25000 Hz was applied to each of the two sets of ceramic sheets. The amplitude-frequency characteristics of the vibrator were obtained from the post-processing module of ANSYS. A single amplitude displacement peak appeared at a frequency of 22960 Hz, with no other peaks in the range 20000-25000 Hz. The results show that the vibrator has no interference modes in this frequency range, verifying that the motor has good stability over a wide frequency band. The analysis results for the A and B phases are shown in Fig. 5.
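The shape of such a sweep can be mimicked with a single-degree-of-freedom resonator model, as in the hypothetical Python sketch below; the resonant frequency is the 22960 Hz found in the simulation, while the damping ratio is an assumed placeholder.

```python
import numpy as np

f_n, zeta = 22960.0, 0.01        # resonance from the ANSYS sweep; damping assumed
freqs = np.linspace(20000.0, 25000.0, 2001)

r = freqs / f_n                   # frequency ratio
# Normalized amplitude of a single-DOF resonator under harmonic forcing.
amp = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

f_peak = freqs[np.argmax(amp)]
print(f"single peak at {f_peak:.0f} Hz within the 20000-25000 Hz band")
```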
Prototype ultrasonic motor
The prototype of the cone type ultrasonic motor was made according to the structural dimensions given in Fig. 1. The vibrator material is 45# steel (high-quality carbon structural steel with a carbon content of 0.45%), and the vibrator was blackened (a black-oxide boiling treatment) to prevent corrosion over long working hours. Under a certain pre-pressure, four rectangular PZT-81 piezoelectric ceramic sheets, polarized along the thickness direction, were bonded with epoxy resin into the four positioning slots on the outer cylindrical surface of the vibrator. Each PZT-81 sheet is 8 mm long, 4 mm wide, and 1 mm thick; detailed parameters are given in Table 1. The bottom edge of each sheet is aligned with the small-aperture end of the vibrator. To reduce wear on the vibrator during long operation, the tapered rotor material is 2A12 (a series-2 aluminum alloy, serial number 12) with a weight of 10 g. The prototype is shown in Fig. 6.
Ultrasonic motor vibrator test experiment
The vibration characteristics of the internal cone type vibrator were tested with an arbitrary waveform/function signal generator (Tektronix AFG320), a power amplifier (B&K 2713, Denmark), a laser vibrometer (Polytec OFV-505/5000, Germany), a multi-channel high-frequency digital storage oscilloscope (Agilent DSO6014A), and a precision vibration isolation platform, as shown in Fig. 7. A frequency sweep test was carried out on the amplitude of the midpoint P of the tooth end face on the inner tooth surface of the vibrator, as shown in Fig. 8. The experimental results show that the resonance frequencies of the two third-order bending modes of the vibrator are 22248.5 Hz and 22260 Hz, while the resonance frequencies obtained by modal analysis are 22959.7 Hz and 22960.8 Hz. The frequency differences are 711.7 Hz and 700.8 Hz, corresponding to errors of 3.09% and 3.05%, respectively; the frequency of the third-order bending mode is thus in good agreement with the ANSYS numerical simulation. The vibrator has no other interference modes in the frequency range 20000-25000 Hz.
The amplitude distribution of the midpoint P of the tooth end face on the inner tooth surface of the vibrator was measured with the vibration testing instrument; the results are shown in Fig. 9. When the excitation frequency is 22260 Hz, the in-plane third-order bending mode of the vibrator is well excited and the vibrator realizes the expected traveling wave motion, which further demonstrates the feasibility of the motor.
Fig. 9. 360° amplitude distribution of the piezoelectric vibrator.
Ultrasonic motor output characteristics experiment
The output characteristics test rig was built (Fig. 10). The output characteristics of the motor were tested experimentally with a multi-function driver at an excitation voltage peak-to-peak value of 300 V and an excitation frequency of 22260 Hz. A photoelectric tachometer was used to measure the rotational speed of the tapered rotor under different excitation voltages. When the excitation voltage was increased to 300 V peak-to-peak at a pre-pressure of 0.1 N and an excitation frequency of 22260 Hz, the no-load speed of the ultrasonic motor reached up to 441.2 rpm (Fig. 11). In the torque-speed test, the torque was adjusted by having the tapered rotor lift weights of different masses, while the speed was again measured with the photoelectric tachometer. At 300 V peak-to-peak, a pre-pressure of 0.1 N, and an excitation frequency of 22260 Hz, the motor speed decreased smoothly and approximately linearly with increasing torque. The maximum output torque of the motor is 1.06 N·mm (Fig. 12).
Ultrasonic motor pre-pressure analysis
The optimum pre-pressure of an ultrasonic motor depends on its design parameters and operating torque. When assembling a motor, choosing a pre-pressure suited to the specific operating conditions and load torque positively affects its efficiency and performance. The tests were performed on the output characteristics rig described above (Fig. 10), which can apply loads and pre-pressure forces and measure the corresponding speeds. In these tests, the pre-pressure was adjusted by changing the weight and load on the tapered rotor; the torque was regulated by having the rotor lift different masses, and the speed was again measured with the photoelectric tachometer.
The motor speed decreases as the load torque increases, until the motor stalls. The speed first increases and then decreases as the pre-pressure increases, as shown in Fig. 13. As the figure shows, pre-pressure and load torque do not affect the motor speed independently; they are coupled: as the load torque increases, the optimal pre-pressure shifts with the inflection point of the speed curve. Current speed-regulation methods for ultrasonic motors are mainly frequency, voltage, and phase regulation; these often suffer from coupling between speed and torque and a narrow adjustment range. To address this, we propose a pre-pressure-based speed regulation scheme and conducted a corresponding experiment on this motor. From the experimental results, the relationship between motor speed and pre-pressure under different loads is obtained, as shown in Fig. 14.
As can be seen from Fig. 14, for every load torque the motor speed first increases and then decreases with increasing pre-pressure. Using the monotonic segments on either side of the optimum, the motor speed can be adjusted. For fast regulation, and to avoid the non-monotonic region, the pre-pressure should be increased gradually from the low side for small load torques and decreased gradually from the high side for large load torques until the motor reaches the desired speed (a simple sketch of this strategy follows below). The dashed line marks the trend of the pre-pressure corresponding to the maximum speed at different load torques. The motor shows no sudden stalling as the pre-pressure is increased, so in principle a full range of speed regulation can be achieved.
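This regulation strategy can be phrased as a one-sided search over the unimodal speed curve. The Python sketch below illustrates the idea; the speed() model and all numbers in it are hypothetical stand-ins for the measured curves of Figs. 13-14, not data from the paper.

```python
def speed(pre_pressure, load_torque):
    """Hypothetical unimodal speed model (rpm); a stand-in for measured curves.

    Speed rises with pre-pressure up to an optimum that shifts right as the
    load torque grows, then falls -- mimicking the trend in Figs. 13-14.
    """
    opt = 0.05 + 0.15 * load_torque              # optimum shifts with load (N)
    return max(0.0, 440.0 * (1 - ((pre_pressure - opt) / 0.2) ** 2))

def regulate(target_rpm, load_torque, step=0.005, heavy_load=0.5):
    """Walk the monotonic side of the curve until the target speed is reached."""
    if load_torque < heavy_load:                 # light load: approach from left
        p, dp = 0.0, step
    else:                                        # heavy load: approach from right
        p, dp = 0.3, -step
    while 0.0 <= p <= 0.3:
        if speed(p, load_torque) >= target_rpm:
            return p
        p += dp
    return None                                  # target speed not reachable

p = regulate(target_rpm=430.0, load_torque=0.2)
print(f"reached target speed at pre-pressure ~ {p:.3f} N")
```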
Conclusion
With the help of the ANSYS finite element software, a parametric model of an internal cone type rotating traveling wave ultrasonic motor with trapezoidal teeth was established. Modal and harmonic response analyses of the motor vibrator were carried out, and the structural parameters and working modes were determined. A prototype was fabricated, the vibration characteristics of the vibrator were tested with a laser vibration measurement system, and the excitation frequency of the two orthogonal same-frequency modes was found to be 22260 Hz. An output performance test rig was built, and the output characteristics of the prototype were tested experimentally. The prototype runs stably, delivers a high output speed, and has good motion and power adjustment characteristics. At an excitation voltage of 300 V peak-to-peak, a pre-pressure of 0.1 N, and an excitation frequency of 22260 Hz, the maximum output torque of the ultrasonic motor is 1.06 N·mm and the maximum no-load speed is 441.2 rpm. The optimal pre-pressure under different loads was studied and analyzed: pre-pressure and load torque have a coupled influence on the speed of the ultrasonic motor, and adjusting the pre-pressure according to the load and desired speed can improve the output efficiency of the motor. This has important implications for the practical use of this ultrasonic motor. | 2023-08-31T15:04:46.852Z | 2023-08-29T00:00:00.000 | {
"year": 2023,
"sha1": "5a846ee77e9e55cb82283ee5c37c712070c5467a",
"oa_license": "CCBYSA",
"oa_url": "http://journals.pan.pl/Content/128258/PDF/aoa.2023.145242.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "24a20d8aa44c80f973906ffcad86027902df8e2f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
258462984 | pes2o/s2orc | v3-fos-license | Creatinine clearance is key to solving the enigma of sex difference in in-hospital mortality after STEMI: Propensity score matching and mediation analysis
Background The precise impact of sex on in-hospital mortality in ST-elevation myocardial infarction (STEMI) patients is unclear, and the published studies are inconsistent. We therefore sought to evaluate the impact of sex differences in a cohort of STEMI patients. Methods We analyzed the data of 2,647 STEMI patients enrolled in the Kermanshah STEMI Cohort from July 2017 to May 2020. To accurately clarify the relationship between sex and in-hospital mortality, propensity score matching (PSM) and causal mediation analysis were applied to the selected confounders and the identified intermediate variables, respectively. Results Before matching, the two groups differed on almost every baseline variable and on in-hospital death. After matching on 30 selected variables, the 574 matched male-female pairs differed significantly on only five baseline variables, and women were no longer at greater risk of in-hospital mortality (10.63% vs. 9.76%, p = 0.626). Among the suspected mediating variables, creatinine clearance (CLCR) alone accounts for 74% (0.665/0.895) of the total effect of 0.895 (95% CI: 0.464-1.332). In this setting, the relationship between sex and in-hospital death was no longer significant and reversed to -0.233 (95% CI: -0.623 to 0.068), demonstrating the full mediating role of CLCR. Conclusion Our research could help resolve the inconsistent evidence on sex disparities in STEMI in-hospital mortality. Moreover, CLCR alone can fully explain this relationship, which highlights the importance of CLCR in predicting the short-term outcomes of STEMI patients and provides a useful indicator for clinicians.
Introduction
Over the years, many studies have assessed sex differences in mortality following STEMI [1-3]. However, the results are inconsistent, independently of differences among the studies in design, location, and sample size. Although many studies have implicated female sex as a risk factor, others have found no difference between the two groups [1-4]. The kind and number of variables included in the statistical models employed — mainly regression models — may attract attention at first glance, understandably so, since the research on this subject is not totally, or even partially, consistent in the factors evaluated.
Is it the kind and number of factors that have generated this heterogeneity? The scope of this study does not allow a definitive answer, and without one it is impossible to give health professionals and specialists a solid clinical standard in this field. Aside from the relevance of clinical knowledge in variable selection, however, the statistical approaches used are highly influential in both variable selection and analysis. Given the nature of the outcomes, the majority of the available data come from observational research; clinical trials that would eliminate potential and hidden confounding factors between the two groups are difficult to design in this area [5,6].
One of the solutions advocated by researchers and epidemiologists in this field is to employ propensity score matching (PSM) methods, since they can account for confounding factors more precisely than standard regression methods [5,6]. It should be emphasized that the findings produced by these models depend, in turn, on the group of variables chosen for analysis, and poor variable selection can lead to biased results [7-9]. In other words, only confounders — and not intermediate variables — should be included in the PSM analysis to reduce bias [10]. An intermediate variable is one that lies on the causal chain between exposure/treatment and outcome. In this context, causal mediation analysis is an appropriate solution, but these methods are often ignored by researchers, not only in studies using PSM but in other studies as well [11].
Taken together, despite the caveats and recommendations in the literature, many studies utilizing PSM to explain the variation in mortality between men and women with STEMI have not followed or addressed these guidelines. Hence, we attempt to describe this issue correctly by selecting the confounders and identifying the intermediate variables, respectively. In this way, we aimed to avoid including mediator variables in the model as far as feasible, thereby minimizing the model's final bias [9]. It is hoped that the findings of this study will pave the way for sound clinical decisions.
Study population
The current study is based on a cohort of STEMI patients from the Imam Ali Hospital at the Kermanshah University of Medical Sciences. We included data for all STEMI patients enrolled in the cohort from July 2017 to May 2020, totaling 2,816 patients. Since the characteristics of the cohort have been described in several previous studies, we mention them only briefly here. Adults 18 years of age and older with angina or other comparable symptoms lasting more than 20 minutes within the 24 hours before admission, together with left bundle branch block (LBBB) or ST-segment elevation on the diagnostic electrocardiogram (ECG), were classified as having suspected STEMI in this cohort. STEMI was first diagnosed by the emergency physician and then confirmed by the study's quality-control physician. Patients hospitalized for another reason who then developed STEMI, and patients who developed STEMI while undergoing percutaneous coronary intervention (PCI) or bypass surgery, were excluded from the cohort, as were patients admitted to another hospital in the 24 hours before admission to Imam Ali Hospital. Our research was ethically approved by the Ethics Committee of Kermanshah University of Medical Sciences, Kermanshah, Iran (No. KUMS.REC.1395.252) and is in accordance with the principles of the Declaration of Helsinki. All participants provided written informed consent (patients without written informed consent were not included in the study). In cases of extreme illness or in-hospital death, the patient's relatives signed the informed consent form.
Variable selection
First of all, we imputed missing data for the study variables using a multiple imputation (MI) approach. Following [12], we did not use the proportion of missing data for each variable to decide whether to remove it from the analysis; instead, we used the fraction of missing information (FMI) as the selection criterion (the FMI for the examined variables, and further details, are reported in S2 File). On the other hand, the principles of causal inference do not require all available variables to be used in the propensity score matching (PSM) analysis [7,13]. In other words, a variable should be included only if it is a true confounder, and not if it lies on the causal path between exposure and outcome (a mediating variable) [8]. Hence, we identified the most relevant variables by reviewing the literature [1,14-17] and consulting expert opinion, and finally included 30 of 55 variables in the PSM model (Tables 1 and 2) based on the outcome-adaptive lasso. The creatinine clearance (CLCR) and early hemoglobin (Hb) variables, as complete mediators, and the lowest hemoglobin (Hb), PCI, and diabetes status variables, as partial mediators, were not included in the model; their effects were estimated separately. Details of the variable selection, and the method of calculating CLCR, are given in S2 File.
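To make the FMI criterion concrete, the sketch below computes the large-sample fraction of missing information from Rubin's rules for a single coefficient estimated on m imputed datasets. The numbers fed to it are hypothetical, purely for illustration.

```python
import statistics

def fmi(estimates, variances):
    """Large-sample fraction of missing information from m imputed analyses.

    estimates: point estimates of one parameter across the m imputations
    variances: their squared standard errors
    """
    m = len(estimates)
    w_bar = statistics.mean(variances)            # within-imputation variance
    b = statistics.variance(estimates)            # between-imputation variance
    t = w_bar + (1 + 1 / m) * b                   # Rubin's total variance
    return (1 + 1 / m) * b / t

# Hypothetical estimates of one coefficient across m = 5 imputations.
print(round(fmi([0.42, 0.45, 0.40, 0.44, 0.43], [0.02] * 5), 3))
```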
Outcome
In-hospital death was the outcome of our study. Of the 2,816 patients whose data were collected from July 2017 to May 2020, 32 were lost during follow-up and 137 died after discharge during follow-up (these 137 were alive at the time of hospital discharge); both groups were excluded, leaving 2,647 patients for analysis. Of the 172 in-hospital deaths, only ten were non-cardiovascular, and 162 were due to cardiovascular disease.
Statistical analysis
Before matching, the t-test or Mann-Whitney test (depending on normality) was used to compare quantitative variables between males and females, and the chi-square test was used for qualitative variables [18]. Similarly, after matching, the paired t-test or Wilcoxon rank-sum test was used for quantitative variables, and the chi-square test for qualitative variables.
Propensity score matching. We used the propensity score matching (PSM) technique to balance the distribution of variables between men (as controls) and women [9]. Propensity scores were estimated with a generalized linear model (GLM), using the optimal matching technique (which requires no caliper value to be specified) at a 1:1 case-to-control ratio. The balance of the studied variables was then evaluated, and the effect of hidden confounders was assessed by sensitivity analysis, for which we used Rosenbaum's method. All statistical analyses were performed in R software version 4.1.2. Mediating analysis. In the medical sciences, describing causal pathways is essential for precise estimation of causal effects. We therefore explored the causal relationships among sex, in-hospital mortality, and the suspected mediating variables, as depicted in Fig 1, estimating the direct effect of sex (the effect on the outcome not transmitted through the mediator) and its indirect effects (transmitted through the mediator variables) [19]. We used the method of Qingzhao et al. [20], implemented in the mma package of R.
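The pipeline — a GLM propensity model followed by 1:1 matching and a balance check — can be sketched in a few lines. The Python below is an illustrative stand-in: the authors worked in R with optimal matching, whereas here a greedy nearest-neighbor match on the logit of the score is used, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                          # synthetic baseline covariates
sex = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 1))))  # 1 = female (synthetic)

# Step 1: propensity score from a generalized linear model (logistic).
ps = LogisticRegression(max_iter=1000).fit(X, sex).predict_proba(X)[:, 1]
logit = np.log(ps / (1 - ps))

# Step 2: greedy 1:1 nearest-neighbor matching on the logit of the score.
# (The paper used optimal matching; greedy matching is a simpler stand-in.)
f_idx, m_idx = np.where(sex == 1)[0], list(np.where(sex == 0)[0])
pairs = []
for i in f_idx:
    j = min(m_idx, key=lambda k: abs(logit[i] - logit[k]))
    pairs.append((i, j))
    m_idx.remove(j)                                  # match without replacement

# Step 3: check balance via the standardized mean difference of a covariate.
f, m = zip(*pairs)
diff = X[list(f), 0].mean() - X[list(m), 0].mean()
pooled_sd = np.sqrt((X[list(f), 0].var() + X[list(m), 0].var()) / 2)
print(f"{len(pairs)} pairs; SMD for covariate 0 = {diff / pooled_sd:.3f}")
```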
Basic and admission information
Overall, 21.69% of the subjects were female. The mean ± standard deviation of age was 65.34 ± 11.34 years in women and 58.83 ± 11.95 years in men, showing that the women in our study were older. Tables 1 and 2 show the other characteristics of the subjects before and after PSM.
Before matching, only 5 of the 14 basic and admission covariates examined did not differ significantly between the two groups. Women had a relatively higher BMI and higher rates of hypertension (HTN) (68.12% vs. 34.70%, p = 0.001), congestive heart failure (CHF) (5.05% vs. 2.51%, p = 0.001), hyperlipidemia (HLP), and peripheral vascular disease (PVD) than men (Table 1). Conversely, men led women in history of myocardial infarction (MI) (12.93% vs. 8.36%, p = 0.011) and smoking (59.14% vs. 12.37%, p = 0.001) (Table 1). After matching, the significant variables in this category were HLP, HTN, and old MI, with HTN more frequent in women (68.12% vs. 56.97%, p = 0.001) and old MI more frequent in men (10.80% vs. 8.36%, p = 0.043) (Table 1). The characteristics of the other variables not included in the analysis are shown in S2 and S3 Tables in S2 File.
Initial assessment, PCI procedure, and hospitalization
Among the remaining 16 variables related to initial assessment, the PCI procedure, and hospitalization, 7 differed significantly between the two groups before matching (Table 2). Women had a higher mean erythrocyte sedimentation rate (ESR) (16.47 vs. 10.18, p = 0.001) and high-density lipoprotein (HDL) (44.03 vs. 40.54, p = 0.001) than men, and higher values were also observed in women for in-hospital atrial fibrillation (AF) (6.62% vs. 4.25%, p = 0.021) and worst Killip class (Table 2). Thrombectomy was more common in men than in women (27.06% vs. 20.21%, p = 0.001), and early creatinine (Cr) was higher in men (Table 2). After matching, ESR, HDL, and early Cr still differed significantly between the two groups, with the same pattern as before (Table 2). Finally, women experienced higher in-hospital mortality than men before matching (10.63% vs. 5.35%, p = 0.001) (Table 2). After PSM, the relationship between sex and death was no longer significant, although a higher percentage of women experienced death (10.63% vs. 9.76%, p = 0.626) (Table 2). The characteristics of the other variables not included in the analysis are shown in S2 and S3 Tables in S2 File.
Propensity score matching
In our study, PSM with the optimal technique resulted in 574 matched male-female pairs that were balanced for most of the studied variables (Fig 2 and Table 2 and S2, S4 Tables in S2 File).
Hence, based on the standardized mean difference (SMD), the distribution of 5 of the 30 variables in the PSM analysis was not balanced between the two groups (S2 and S4 Tables in S2 File). Additionally, one of the studied variables was not balanced between the groups in terms of the variance ratio (VR) (S2 and S4 Tables in S2 File).
Sensitivity analysis. Matching estimates are valid only if hidden confounders do not influence them; we therefore used sensitivity analysis to assess the impact of such variables, applying Rosenbaum's sensitivity analysis approach. The results revealed that unobservable confounders had little influence on the differential assignment of patients: a 0.1 increment in gamma (the differential odds of being allocated to the treatment group owing to hidden confounders) increased the p-value from 0.162 to 0.308. In other words, hidden confounders play no role in discriminating between the two outcome classes [21].
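For intuition, Rosenbaum's bound for a binary outcome in 1:1 matched pairs can be computed directly: under hidden bias gamma, each discordant pair contributes its event to the exposed member with probability at most gamma/(1 + gamma). The Python sketch below implements this bound; the pair counts are hypothetical, not the study's data.

```python
from math import comb

def rosenbaum_upper_p(t_obs, n_discordant, gamma):
    """Upper bound on the one-sided McNemar p-value under hidden bias gamma.

    t_obs: discordant pairs in which the exposed member had the event
    n_discordant: total number of discordant pairs
    gamma: odds of differential assignment due to an unobserved confounder
    """
    p = gamma / (1 + gamma)
    return sum(comb(n_discordant, k) * p**k * (1 - p) ** (n_discordant - k)
               for k in range(t_obs, n_discordant + 1))

# Hypothetical counts: 61 of 110 discordant pairs had the event in the
# exposed (female) member. The bound weakens as gamma grows.
for g in (1.0, 1.1, 1.2):
    print(g, round(rosenbaum_upper_p(61, 110, g), 3))
```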
Mediation analysis
As mentioned before, given the importance of mediation analysis, in this section we estimated the direct effect of sex as well as its indirect effects through the mediating variables. As shown in Fig 3 and Table 3, the total effect (the sum of the direct and indirect effects) was 0.895 (95% CI: 0.464-1.332), which is significant (p = 0.001).
Likewise, among the indirect effects, the average causal mediation effect (ACME) of the CLCR variable had the greatest impact on the relationship between sex and in-hospital mortality, at 0.665 (95% CI: 0.468-0.885). In other words, CLCR alone can explain 74% (0.665/0.895) of the sex difference in in-hospital mortality. Next, the PCI variable had a positive and significant effect, covering 24% (0.212/0.895) of the relationship between sex and mortality in STEMI patients. The effects of the diabetes, early Hb, and lowest Hb variables were not significant (Table 2). Similarly, the average direct effect (ADE) of sex was no longer significant, reversing from 0.742 (95% CI: 0.415-1.069) to -0.233 (95% CI: -0.623 to 0.068). These results are of great clinical importance because they show that CLCR can largely explain the relationship between sex and in-hospital mortality in STEMI patients.
The characteristics of mediating variables between men and women are shown in S2 Table in S2 File.
Discussion
Overall, our work helped elucidate the causal association between sex and in-hospital mortality in STEMI patients by accurately identifying the variables needed for the PSM analysis and by performing a mediation analysis. In other words, this study is a sound step toward explaining the clinical care of STEMI patients in light of sex differences. Our results revealed that the difference between men and women was no longer significant after applying PSM, although differences persisted in some of the investigated variables, such as HLP, HTN, old MI, ESR, HDL, and early Cr. These results suggest that the other 25 variables may well neutralize an independent role of sex in in-hospital mortality (Fig 2 and Tables 1 and 2). Similarly, as an essential clinical insight, our study provided solid evidence of the mediating role of the CLCR variable in the causal link between sex and in-hospital death in STEMI patients (Figs 1 and 3): the indirect effect of CLCR accounts for about 74% of the contribution of sex to this causal connection. This finding could transform the clinical management of these patients; it is hoped that proper management of CLCR will prevent more in-hospital deaths among women. To compare the current study's findings with other relevant research, we examined the topic from two perspectives. First, we compared our findings with previous reports of higher in-hospital mortality among women with STEMI, to form a complete picture; next, we compared the existing evidence on the effect of CLCR on in-hospital mortality in STEMI patients with our findings, and we also briefly examined the role of the other mediating variables studied. From the first standpoint, we began with the study of Siabani et al. [1], which aimed to explain sex differences in our study population. At first glance its results support our finding that women are not at higher risk of in-hospital death after adjustment. However, because of the statistical analysis used (ordinary regression) and the number of variables analyzed, its results cannot be compared directly with the current study. In addition, the variable-selection process is not described in Siabani et al. [1]; our study covers a longer period and hence a larger sample; and we state clearly how missing data were handled, which is unclear in the comparative study. In a similar study [15], PSM analysis was used to explain differences between men and women undergoing primary PCI. Even after matching, women experienced higher mortality in the first 30 days after STEMI, which is inconsistent with our result, although among survivors the two groups were similar during follow-up. This contradiction may stem from the selection and handling of the variables: in that study, variables with a p-value below 0.2 were selected for PSM, a questionable practice under which mediating variables cannot be identified. Similarly, multicenter research in India [16] included 3,194 patients between 2013 and 2017; when 510 pairs of patients were compared after PSM, women had higher death rates over the one-year follow-up, again inconsistent with our study.
However, the follow-up durations of the two studies differ. In another study conducted in Iran [22] between 2008 and 2013, conventional regression methods were employed to assess sex disparities in 1,017 patients. The researchers reported that, after controlling for the confounding variables, women were no longer at greater risk of in-hospital death than men, which is consistent with our results; however, only the variables age, HLP, diabetes, smoking, history of ischemic heart disease, and reperfusion therapy were included in the regression model, and the variable-selection approach was unclear. The result of a recent meta-analysis is also inconsistent with ours, but it should be read critically given its limited generalizability: it searched only the PubMed database, its eligibility criteria regarding studies using PSM were unclear, and it paid no attention to the variables selected in the included studies or to how they were selected [23]. Summarizing these points, the enigma of sex differences in in-hospital mortality after STEMI has not yet been solved and remains open.
Turning to the association between CLCR and in-hospital mortality in STEMI patients, one of the first studies in this field is that of Santopinto et al. [24]. Using data collected from 94 hospitals in 14 countries, it assessed the association between CLCR and two subsets of acute coronary syndromes (ACS), including STEMI, and established an independent role of CLCR in in-hospital mortality and patient bleeding. The results provided a basis for improving the management of cardiac patients with renal dysfunction; although this study supports our findings, it does not address sex differences or the role of CLCR as a mediator. Another study examined the relationship between varying levels of creatinine and CLCR and in-hospital mortality, and established a dose-response association in STEMI patients, with more severe reductions in CLCR associated with higher mortality. Sex differences, and consequently the mediating role of CLCR, were again not addressed, but the study nevertheless supports ours [25]. A further study reported a similar effect of CLCR [26]. The association between chronic kidney disease (CKD) and short-term (in-hospital) and long-term mortality in STEMI patients was explored in another study [27], which offers the strongest confirmation of our results: a reduced glomerular filtration rate (eGFR), based on CLCR, was reported as an influential independent predictor of mortality in STEMI patients. The study's most intriguing finding was that, once the eGFR term was included in the multiple regression model, the significant link between sex and outcome disappeared and women were no longer at higher risk of mortality. Although mediation was not formally addressed, this work incidentally demonstrated the mediating role of CLCR. Consequently, our work is the first, or among the first, to reveal the mediating role of CLCR in the causal chain between sex and in-hospital death from an epidemiological standpoint.
As a secondary, clarifying observation: although most of our participants were men, with roughly one woman for every 4-5 men, this is entirely usual, and past research has shown a similar pattern [16,17,28]. One explanation is that men develop the disease at a younger age and thus carry a higher risk earlier, whereas women become more vulnerable to heart disease as they age [29,30]; a full explanation is beyond the scope of this research. Concerning the other mediating variables, there is good evidence supporting our results: the influence of Hb on in-hospital and one-year mortality has been established [31,32], as has a similar role for PCI [33-35], and the role of diabetes is documented in reliable sources [36-39]. In general, the selection of these variables as mediators appears correct, and studies that did not consider their potential influence have probably been led astray.
Limitations and strengths
Although this study was conducted in a tertiary referral center in western Iran, it may be argued that one limitation is its single-center design, since a multicenter study draws on a larger population and yields more externally valid results. However, the evidence presented in the preceding paragraphs seems sufficient to address this concern, as most of it supports our findings. Another drawback readers may raise is the non-randomized design, but this criticism does not seem entirely warranted: as far as we know, this is the first study to investigate the causal role of CLCR, and it may serve as the foundation for future randomized trials. Clinical trials in this field are in any case very limited; the study by Jennifer et al. [40] is one example.
On the other hand, one of the current study's strengths is its relatively high rigor in choosing variables before the PSM analysis: the essential variables were selected on the basis of prior clinical knowledge, published recommendations, and methodological principles. The study adequately explained the sex difference in in-hospital mortality among STEMI patients and identified the mediating role of CLCR in the causal chain between sex and in-hospital death, which is clinically and medically relevant. As mentioned, previous evidence supports our results, but that evidence reported the relationship only incidentally, without recognizing its mediating role; this leaves little room for doubt. To complete the argument, however, we recommend that other researchers worldwide test our results for possible confirmation or refutation. It is hoped that the clinical management of these patients will soon change.
Conclusions
Altogether, our study could help resolve the enigma of sex differences in in-hospital mortality after STEMI. After matching, sex no longer played an independent role in STEMI mortality. In addition, the complete mediating role of CLCR in this causal chain was clearly elucidated: CLCR alone could fully explain the relationship. Without exaggeration, this clinical finding could substantially revise existing knowledge in this field and remove its ambiguities. Adopting the proper epidemiological viewpoint can thus help clarify the network etiology of diseases.
Supporting information S1 File. Patient data before propensity score matching. | 2023-05-04T06:17:46.702Z | 2023-05-03T00:00:00.000 | {
"year": 2023,
"sha1": "901ccb59e60ccda3d184c1a5c10294b56c1ad5d1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0284668",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d238614f062447578b7c1e7048246b439e33a19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
198854936 | pes2o/s2orc | v3-fos-license | Induction of Hyalurosome by Topical Hyaluronate Fragments Results in Superficial Filling of the Skin Complementary to Hyaluronate Filler Injections
Hyaluronate (HA) plays a major role in the process of skin aging. HA has mainly been used for hydration and in dermal fillers. Another approach, based on the discovery of the signaling effects of topically applied hyaluronate fragments (HAF), has subsequently been developed. It has been thoroughly demonstrated that topical application of HAF of a very specific size induces HA filling of the epidermis and the upper dermis; these effects are particularly visible in dermatoporotic patients. Moreover, combining HA-based filler injections with topical applications of HAF and retinoids optimized the effects of HA. A new classification of the different effects of HA is therefore proposed here.
Hyaluronate (HA, hyaluronan, hyaluronic acid) was discovered in 1934 by Meyer and Palmer. It can be found in skin, joints, eyes, and most other organs and tissues. HA is a major component of the extracellular matrix, of which the skin is the main reservoir [1]. It is synthesized at the cell membrane of many cells, in particular in fibroblasts and keratinocytes [2]. It is composed of repeating units of the disaccharide GlcNAcβ(1→4)-GlcUAβ(1→3) throughout a molecule that can have a molecular weight of up to 10^6 Da. HA belongs to the glycosaminoglycan family with the particularity that it is not sulfated in comparison with other glycosaminoglycans. HA acts through a large number of proteins with the ability to bind to it, called hyaladherins [3]. The main cell surface receptor of HA is CD44, which can exist in numerous isoforms through alternative splicing of 10 variant exons in different combinations [4].
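As a rough sense of scale, the chain length implied by these molecular weights can be estimated from the mass of the disaccharide repeat (~379 Da once the water lost to the glycosidic bonds is subtracted). The Python sketch below uses rounded residue masses for illustration only:

```python
# Approximate mass of one GlcNAc-GlcUA disaccharide repeat in the polymer
# (monosaccharide masses minus two waters lost to the glycosidic bonds);
# values are rounded textbook figures used here for illustration.
glcnac, glcua, water = 221.2, 194.1, 18.0
repeat = glcnac + glcua - 2 * water          # ~379 Da per disaccharide

for mw in (1e6, 400e3, 50e3):                # full-size HA and the HAFi range
    print(f"{mw / 1e3:>6.0f} kDa ~ {mw / repeat:,.0f} disaccharide units")
```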
In the skin, HA provides a highly hydrated medium facilitating the cell movement that occurs in the early stages of injury, inflammation, and wound healing [5,6]. It may also contribute to the elastic properties of the dermis by forming a network of helicoidal structures, to epidermal differentiation, and to lipid synthesis/secretion [7-9].
HA is a crucial molecule in the process of skin aging since levels of HA gradually decrease with age [10]. Cosmetics often propose HA as an active ingredient, but topical HA applications do not lead to an increase in HA-specific staining intensity, suggesting that the large molecule is poorly able to permeate the stratum corneum of viable human epidermis, both ex vivo and in vitro [11,12]. Another solution to correct wrinkles and folds is the use of HA injections [13]. HA fillers provide structural support for the collagen fibers. Treatment with HA injections is not permanent, and visible results last about 6 months. Several fillers received approval by the US Food and Drug Administration. HA dermal fillers are considered to be safe and effective [14][15][16]. Their advantage is that they offer longer-lasting correction than collagen fillers [17,18].
Hyaluronate Fragments: From Now On
In addition to its viscoelastic properties, HA may be cleaved into fragments of smaller molecular weight, called hyaluronate fragments (HAF), which are the degradation products derived from the action of hyaluronidases, β-glucuronidase, or hexosaminidase on HA. HAF have distinct physiological properties, such as stimulation of cell turnover, angiogenesis, tissue remodeling, or activation of the innate immune defense [19][20][21].
Many studies suggest that cell responses after HA treatment depend on the size of the fragments as well as on the cell type. Thus, HAF may exert differential regulation on the wound-healing process [22]. They also induce epithelial cell proliferation and HA synthase expression, thereby stimulating endogenous HA production [12]. However, we have demonstrated that only HAF between 50 and 400 kDa (intermediate size, HAFi) were able to induce epidermal hyperplasia and to increase the density of the dermis and its HA content through a CD44-dependent pathway [12]. In a recent study of 10 subjects, we showed that daily topical application of HAFi for 5 days increased the HA content in the superficial dermis of skin previously injected with a non-cross-linked HA (Fig. 1). HAFi therefore act as an epicutaneous (topical) HA filler (Fig. 2).
Dermatoporosis
The term "dermatoporosis" was proposed to cover the different manifestations and implications of the chronic cutaneous insufficiency/fragility syndrome in the elderly and to facilitate the understanding that, as osteoporosis, dermatoporosis should be treated to prevent complications [23,24]. We have shown that dermatoporosis is due to the dysfunction of hyalurosome, which is a putative multimeric macromolecule complex composed of molecules involved in HA metabolism and cell signaling in keratinocytes, such as CD44, heparinbinding epidermal growth factor (HB-EGF), and its receptor erbB1 [25]. Epidermal Lrig1+ progenitor cells, Wnt/β-catenin pathway, calcium signaling, and p16 Ink4a pathway also play a role in the pathogenesis of dermatoporosis [24,26,27]. HAFi might constitute a target molecule for the prevention of cutaneous aging and the reversal of the skin atrophy observed in dermatoporotic patients [12]. Inhibition of a putative hyalurosome complex in keratino- Kaya et cytes seems to be the molecular mechanism for corticosteroid-induced dermatoporosis [28]. We have recently demonstrated that HAFi induced hyalurosome complex resulting in EGFR activation via CD44v3 [12,28]. The resulting effect was the stimulation of cell proliferation and epidermal hyperplasia in dermatoporotic patients [12]. Topical HAFi induced the expression of hyalurosome components in dermatoporotic skin [29].
HAFi in Association with Retinoids
Retinoids, including retinaldehyde (RAL), are well established in the treatment of skin aging [30]. RAL induces keratinocyte proliferation through a CD44-dependent mechanism and upregulates the HA-synthesizing enzymes in mouse skin [25,31]. We have recently demonstrated that topical applications of RAL and/or HAFi also induce CD44 expression in the epidermis and increase the dermal and epidermal HA content. These effects are more pronounced than with RAL or HAFi alone, suggesting that the 2 components act synergistically [25,29]. A recent clinical study showed that applications of a topical preparation combining RAL and HAFi optimized the results obtained with injectable HA for both parameters measured, skin texture and dermal thickness [32]. These results suggest that topical HAFi and HA fillers have complementary biological activities.
Proposal of a New Classification of the Biological Activities of HA
A new classification of the biological activities of hyaluronic acid may be proposed based on size and method of administration (Table 1). Two biological activities of HA, viscoelastic filling (type 1) and its role as an intercellular messenger, exemplified by HAFi (type 2), could be used with a view to preventing and reversing cutaneous aging and dermatoporosis. It follows that HA skin treatment should combine type 1 and type 2 activities, with a variable type 1/type 2 ratio. Type 3 activity corresponds to HA's function in water retention and its ability to combine with water to form a complex in the intercellular environment and in connective tissue in particular. In this way, the space it occupies enables the retention of smaller molecules, such as growth factors or electrolytes.
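To make the proposed classification easy to reference, the sketch below encodes it as a small lookup table; this is purely illustrative (the Python representation, field names, and wording are ours, paraphrasing Table 1), not part of the original proposal.

```python
# Illustrative encoding of the proposed three-type classification of HA
# activities; descriptions paraphrase the review, field names are ours.
HA_ACTIVITY_TYPES = {
    1: {"activity": "viscoelastic filling",
        "example": "injectable HA dermal fillers"},
    2: {"activity": "intercellular messenger (CD44-dependent signaling)",
        "example": "intermediate-size fragments, HAFi (50-400 kDa)"},
    3: {"activity": "water retention / reservoir for small molecules",
        "example": "HA-water complexes in connective tissue"},
}

def describe(ha_type: int) -> str:
    entry = HA_ACTIVITY_TYPES[ha_type]
    return f"Type {ha_type}: {entry['activity']} (e.g. {entry['example']})"

# A combined treatment would mix type 1 and type 2 activities:
print(describe(1))
print(describe(2))
```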
Disclosure Statement
The authors declare no conflicts of interest. | 2019-07-26T14:45:40.583Z | 2019-06-26T00:00:00.000 | {
"year": 2019,
"sha1": "430c4d4f57130cb5f362bbad553a671766df97ef",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/500493",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "128e75b656274ceacc6a54a96af718c69abd122f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
16741671 | pes2o/s2orc | v3-fos-license | Constraints on high-energy neutrino emission from SN 2008D
SN 2008D, a core collapse supernova at a distance of 27 Mpc, was serendipitously discovered by the Swift satellite through an associated X-ray flash. Core collapse supernovae have been observed in association with long gamma-ray bursts and X-ray flashes, and a physical connection is widely assumed. This connection could imply that some core collapse supernovae possess mildly relativistic jets in which high-energy neutrinos are produced through proton-proton collisions. The predicted neutrino spectra would be detectable by Cherenkov neutrino detectors like IceCube. A search for a neutrino signal in temporal and spatial correlation with the observed X-ray flash of SN 2008D was conducted using data taken in 2007-2008 with 22 strings of the IceCube detector. Events were selected based on a boosted decision tree classifier trained with simulated signal and experimental background data. The classifier was optimized for the position and a "soft jet" neutrino spectrum assumed for SN 2008D. Using three search windows placed around the X-ray peak, emission time scales from 100 to 10000 s were probed. No events passing the cuts were observed, in agreement with the signal expectation of 0.13 events. Upper limits on the muon neutrino flux from core collapse supernovae were derived for different emission time scales and the principal model parameters were constrained.
Introduction
Observations in recent years have given rise to the idea that core collapse supernovae (SNe) and long duration gamma-ray bursts (GRBs) have a common origin or may even be two different aspects of the same physical phenomenon, the death of a massive star with M > 8 M⊙ (for a review, see Woosley, Bloom 2006). Like GRBs, SNe could produce jets, though less energetic and less collimated, and possibly "choked" within the stellar envelope. Observed associations of supernovae with XRFs, short X-ray flashes with similar characteristics to long GRBs, suggest including XRFs in the SN-GRB connection as well. Although XRFs are considered a separate observational category from GRBs, a common origin and a continuous sequence connecting them have been suggested (Lamb et al. 2004, Yamazaki et al. 2004). XRFs could be long GRBs with very weak jets or simply long GRBs observed off-axis. Several XRFs or long duration, soft-spectrum GRBs have been observed in coincidence with core collapse SNe thus far: SN 1998bw (Galama et al. 1998), SN 2003lw (Malesani et al. 2004), SN 2003dh (Hjorth et al. 2003), SN 2006aj (Pian et al. 2006), and of course SN 2008D (Soderberg et al. 2008, Modjaz et al. 2009, Mazzali et al. 2008). For SN 2007gr (Paragi et al. 2007) and SN 2009bb (Soderberg et al. 2010), two core collapse SNe not associated with an XRF or GRB, recent radio observations provide strong evidence for jets with bulk Lorentz factors of Γ > 1. If some core collapse SNe indeed form such "soft" jets, protons accelerated within the jet could produce TeV neutrinos in collisions with protons of the stellar envelope (Razzaque et al. 2005, Ando & Beacom 2005). The soft jet scenario for core collapse SNe can be probed with high-energy neutrinos even if the predicted jets stall within the stellar envelope and are undetectable in electromagnetic observations. On January 9, 2008, the X-ray telescope aboard the SWIFT satellite serendipitously discovered a bright X-ray flash during a pre-scheduled observation of NGC 2770. Optical follow-up observations were immediately triggered and recorded the optical signature of SN 2008D, a core collapse supernova of type Ib at right ascension α = 09h 09m 30.70s and declination δ = +33° 08′ 19.1″ (Soderberg et al. 2008). SN 2008D offers a realistic chance to detect high-energy supernova neutrinos for the first time, since the observed X-ray peak provides the most precise timing information ever available to such a search. Whether or not the existence of jets in aspherical explosions is evidenced in the spectroscopic data for SN 2008D remains highly debated. While Soderberg et al. (2008) "firmly rule out" any asphericity and Chevalier and Fransson (2008) speak of a purely spherical shock-breakout emission, Mazzali et al. (2008) and Tanaka et al. (2009) find evidence that SN 2008D possessed jets which have been observed significantly off-axis.
The IceCube neutrino detector, currently under construction at the South Pole and scheduled for completion in 2011, is capable of detecting high-energy neutrinos (E_ν ≳ 100 GeV) of cosmic origin by measuring the Cherenkov light emitted by secondary muons with an array of Digital Optical Modules (DOMs) positioned in the transparent deep ice along vertical strings. The full detector will comprise 4,800 DOMs deployed on 80 strings between 1.5 and 2.5 km deep within the ice, a surface array (IceTop) for observing extensive air showers of cosmic rays, and an additional dense subarray (DeepCore) in the detector center for enhanced low-energy sensitivity. Each DOM consists of a 25 cm diameter Hamamatsu photomultiplier tube (PMT, see Abbasi et al. 2010a), electronics for waveform digitization, high-voltage generation, and a spherical, pressure-resistant glass housing. The DOMs detect Cherenkov photons emitted by relativistic charged particles passing through the ice. In particular, the directions of muons (either from cosmic ray showers above the surface or neutrino interactions within the ice or bedrock) can be well reconstructed from the track-like pattern and timing of hit DOMs. Identification of neutrino-induced muon events in IceCube has been demonstrated in Achterberg et al. (2006) using atmospheric neutrinos as a calibration tool. Sources in the northern sky, like SN 2008D, can be observed with very little background since contamination by atmospheric muon tracks is eliminated by the shielding effect of the Earth. When SN 2008D was discovered, the installation of IceCube was about one quarter completed and the detector was taking data with 22 strings.
As shown above, a search for cosmic neutrinos from core collapse SNe is motivated by both observational evidence and theoretical predictions. While analyses using catalogs of SNe/GRBs with timing uncertainties of ∼1 d as the signal hypothesis have been performed on archived AMANDA/IceCube data (see Lennarz 2009 for SNe and Abbasi et al. 2010b for GRBs), the unprecedentedly precise timing information available for SN 2008D suggests a dedicated study of this event. While electromagnetic observations provide no conclusive evidence for the existence of highly relativistic jets, soft, hidden jets could be revealed by high-energy neutrinos, assuming sufficient alignment with the line of sight.
The paper is organized as follows: Section 2 discusses the assumed model for neutrino production. Section 3 describes the experimental and simulated data used for the analysis. The selection criteria used to separate signal events from background are detailed in Section 4. Section 5 presents the results of the search and constraints derived therefrom. Finally, the analysis is summarized in Section 6.
Model neutrino spectrum
A model for the emission of high-energy neutrinos in jets formed by core collapse supernovae has been proposed by Razzaque, Meszaros, and Waxman (2005) and further elaborated by Ando and Beacom (2005). This model will be referred to as the "soft jet model" in the following. A brief summary of the physical motivation shall be presented. The soft jet model assumes the collapse of a massive star (M ≳ 8 M⊙) with subsequent formation of a neutron star or black hole, rotating sufficiently to power jets with bulk Lorentz factors of Γ_b ∼ 1–10 and opening angles θ_j ≈ 1/Γ_b ≈ 5°–50°. Such "soft" jets, too weak to penetrate the stellar envelope, would not be observable in the electromagnetic spectrum. The rebounding core collapse is assumed to deposit E_j ∼ 3 × 10^51 erg of kinetic energy in the material ejected in the jets; values of up to E_j = 6 × 10^51 erg have been suggested for SN 2008D by Mazzali et al. (2008). Protons are Fermi accelerated to an E_p^−2 spectrum and produce muon neutrinos through the decay of charged pions and kaons formed in proton-proton collisions. The neutrino spectrum, shown in Fig. 1, follows the primary proton spectrum at low energies and steepens at four break energies above which pions (kaons) lose a significant fraction of their energy in hadronic and radiative cooling reactions before decaying into neutrinos. These break energies are distinct for pions and kaons and exhibit a sensitive dependence on the jet parameters (see Table 1). We adopt the analytic form of the spectrum in the notation of Ando and Beacom and, with the exception of the distance d, assume the same parameters for SN 2008D that are quoted in Ando & Beacom (2005). A summary is given in Table 1.
An optimistic extension of this model, proposed by Koers and Wijers (2007), predicts that mesons are again Fermi-accelerated after production. This re-acceleration gives rise to a simple E^−γ neutrino spectrum with γ = 2.0–2.6, extending to maximum energies of E_ν ∼ 10 PeV, where radiative cooling processes lead to a steepening and eventual cutoff of the neutrino spectrum. The details of this high-energy cutoff are negligible in the context of this analysis, in which neutrinos with energies of 100 GeV–10 TeV are expected to yield the dominant contribution to the signal expectation.
Neutrinos are expected to be emitted in alignment with the jets. Their energy range is set by the maximum proton energy and reaches far into the sensitive range of the IceCube detector (E_ν ≳ 100 GeV). In order to detect these neutrinos, the jet must be pointing towards Earth (e.g. a 5% chance for a jet with an opening half angle of 17°). Due to the unknown jet pointing, however, no constraints can be placed on the model in the case of a non-detection. To do so with a confidence level of e.g. 90% would require a large sample of ∼200 nearby supernovae. In contrast, a positive detection would not only indicate the jet's direction, but also yield constraints on the soft jet model, constraints entirely independent of observations in the electromagnetic spectrum. If, in addition, a resolved neutrino spectrum could be recorded with future neutrino detectors, the observation of spectral breaks and a spectral cutoff would place strong constraints on the physical parameters of the supernova jet.
Data and Simulation
The analysis uses experimental data to determine the expected number of background events for a particular search window.
The signal expectations as well as the characteristics of the signal are derived from simulations. Raw data consist of time series of photon detections (henceforth "hits") for each triggered DOM. From these hit patterns, track reconstruction algorithms derive the muon's direction, measured in zenith θ and azimuth φ in a fixed detector coordinate system where muons travelling upwards in the ice have θ > 90° and downgoing tracks have θ < 90°. The absolute time of an event is determined by a GPS clock with a precision of better than 200 ns, which is more than sufficient for this analysis.
Background data
At trigger level (detailed in Sec. 3.3 below), IceCube data is dominated by the reducible background of atmospheric muons falsely reconstructed as upgoing, i.e. as having passed through the Earth. A comparison of experimental data and simulated muons from cosmic ray showers shows good agreement (see Fig. 2). In addition, the data contain an irreducible background of muons produced by atmospheric neutrinos from the northern hemisphere, at a rate lower by a factor of 10^5. At the final cut levels of this analysis (see Tab. 2), the data consist of approximately equal contributions of reducible and irreducible background events. The data sample used to measure and characterize the background was taken by IceCube in the 22-string configuration over 275.72 days of detector live time between May 2007 and March 2008. The sample is identical to the one used in the first IceCube search for neutrino point sources. On the day of SN 2008D, IceCube was taking data continuously in a time range of [−9.5 h, +1.8 h] around the observed X-ray flash. To prevent a bias in the cut optimization, this data was kept "blind", i.e. excluded from the development and testing of selection criteria, and only "unblinded" in the final step of the analysis.
Signal Simulation
To quantify and characterize the expected signal, extensive simulations of the complex Earth-ice-detector system were conducted. IceCube simulation generates primary neutrinos at the surface of the Earth and propagates them through the Earth, tracking charged and neutral current interactions, and recording all secondary particles which can reach the detector (see Kowalski et al. 2005). All secondary muons are then passed to the muon propagation software (see Chirkin & Rhode 2008) which simulates their random energy loss and the emission of Cherenkov photons. Finally, the propagation of photons is simulated accounting for absorption and scattering according to a depth dependent ice model (see Lundberg et al. 2007). In the last step, the photomultiplier response, readout, and local as well as global triggers are simulated yielding time series of photon hits which are subsequently passed through the same processing pipeline as experimental data.
Triggering and data processing
The IceCube trigger system only reads out a photon hit at a specific optical module if a neighboring module on the same string is also hit within 1 µs (local coincidence). To initiate the event read-out, the global trigger of IceCube 22 required 8 such local coincidences within a 5 µs time window. This requirement led to trigger rates of ∼550 Hz, dominated by atmospheric muon events. The data rate was immediately reduced to ∼25 Hz by first-guess reconstructions running online at the South Pole, which fit a simple track hypothesis to each event and reject downgoing tracks in real time. Events passing this online muon filter are transferred to the North, where extensive likelihood track reconstructions are performed. For a given hit pattern and a first-guess track hypothesis, the likelihood function is calculated as the product of the probabilities for each hit time to occur under the given track hypothesis. The likelihood reconstruction algorithm then iteratively searches for the track which maximizes the value of this likelihood function. For the final fit result, the optimization software computes quality parameters which can be used for event selection.
Event selection
The background event rate is further reduced to ∼3 Hz through an additional cut on the more precise track direction from the likelihood reconstruction, selecting events with θ > 80°. For this analysis, events outside a circular signal region (10° opening angle) around the position of SN 2008D were removed from the dataset to obtain a manageably sized sample. At this filtering level, the background rate is 0.03 Hz and 0.26 signal events are expected for SN 2008D according to the soft jet model.
Quality Cuts
Specific cuts tailored to the simulated properties of SN 2008D were based on eight quality parameters, among them:

- an estimator for the uncertainty of the reconstructed track direction (the quadratic average of the minor and major axes of the 1σ error ellipse);
- L_R, the value of the negative log-likelihood for the reconstructed track divided by the number of degrees of freedom in the fit (number of hit optical modules minus number of fit parameters);
- R_B, the ratio of the log-likelihoods with and without a Bayesian prior that favors a downgoing track hypothesis;
- R_U, the ratio of the log-likelihoods with and without seeding the reconstruction with the inverse track direction.

In conjunction with the selection of upgoing tracks, the reduced log-likelihood L_R has proven to be an efficient variable for separating upgoing atmospheric neutrinos from misreconstructed downgoing atmospheric muons. It exploits the fact that, for a light pattern originating from a downgoing muon, the incorrect upgoing track hypothesis yields rather low absolute likelihood values. In addition, the likelihood ratios R_U and R_B allow for a veto on events for which inverting the track hypothesis leads to a significant relative enhancement in the likelihood value.
Histograms of all selection parameters are shown in Fig. 2 for background data, background simulation, and simulated signal events. To combine all eight parameters efficiently, they were incorporated into a boosted decision tree (BDT) classifier (see e.g. Yang et al. 2005 and references therein). The BDT method classifies an event by passing it through a tree structure of binary splits which effectively breaks up the parameter space into a number of signal or background-like hypercubes. The classifier is first trained with background data and simulated signal and then evaluated with independent datasets. The resulting distribution of classifier scores K for experimental data and simulated signal is shown at the bottom of Fig. 2. The classifier allows for a simple one-dimensional cut on the classification score. Extensive tests were conducted to assure a stable response and to estimate the uncertainty of the classification. This uncertainty was estimated by comparing the classification efficiencies for several independent experimental data and simulated signal samples. Variations in the classifier response proved to be negligible compared to statistical uncertainties.
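For orientation, a minimal sketch of this classification step is given below, using scikit-learn's gradient-boosted trees in place of the analysis' own BDT implementation; the random feature matrix merely stands in for the eight quality parameters, and the cut value is a placeholder, so none of this reproduces the actual IceCube software.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 8))           # stand-in for the 8 quality parameters
y = rng.integers(0, 2, size=n)        # 1 = simulated signal, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)

K = bdt.decision_function(X_te)       # per-event score for the 1-D cut
K_cut = 0.5                           # placeholder; chosen by minimizing the MDF
selected = X_te[K > K_cut]
print(f"{len(selected)} of {len(X_te)} test events pass the cut")
```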
Search Windows
The search for neutrinos in the on-time data from January 9, 2008 was conducted using three search windows of different durations, apertures, and selection cuts. A circular aperture was used in all cases. Since the soft jet model does not explicitly predict the time profile of the neutrino emission, search windows with durations of 100 s, 1000 s, and 10000 s were chosen to cover a large range of emission time scales. The corresponding opening angle and quality cuts for each search window were determined by optimizing the model discovery factor M according to Hill et al. (2006). For this purpose, a Poisson distribution with mean b + s is randomly sampled, where b and s represent the expected background and signal, respectively. For each drawn number of observed events n_obs, the lower limit on the signal contribution is computed using the Feldman & Cousins algorithm (Feldman & Cousins 1998). The signal expectation s is increased until 50% of the trials yield a discovery, that is, a lower limit on the signal greater than zero. When this criterion is met, the model discovery factor is given by the ratio of this required signal strength to the signal expectation of the model. For each window, the BDT cut K and the opening angle ω yielding the minimal value of M were determined numerically. Lower limits according to the Feldman & Cousins ordering scheme were required to have a significance of 5σ. The choices of cuts for the three search windows which yielded minimal model discovery factors are summarized in Table 2. The resulting effective areas for a neutrino spectrum obeying the soft jet model are shown in Fig. 3. With these choices, two observed events would constitute a 5σ discovery in any of the windows taken by itself. The significances for the complete measurement consisting of three search windows were determined in a simulation study with 10^10 trials. For each possible observation of n_1, n_2, n_3 events in windows 1, 2, and 3, the p-value was calculated as the fraction of equally or less likely observations.
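As a numerical illustration of this optimization, the following sketch computes the discovery threshold and a model discovery factor with SciPy; it simplifies the procedure by replacing the Feldman & Cousins lower-limit construction with a plain counting threshold, which is enough to reproduce the statement that two events constitute a 5σ discovery at these background levels.

```python
from scipy.stats import norm, poisson

def critical_count(b, z=5.0):
    """Smallest n for which the background p-value P(N >= n | b)
    drops below the one-sided z-sigma threshold."""
    p_target = norm.sf(z)                    # ~2.87e-7 for z = 5
    n = 1
    while poisson.sf(n - 1, b) > p_target:   # poisson.sf(n-1, b) = P(N >= n)
        n += 1
    return n

def model_discovery_factor(b, s_model, z=5.0, power=0.5, ds=1e-3):
    """Scale the signal until `power` of pseudo-experiments reach the
    discovery threshold; return s_required / s_model."""
    n_crit = critical_count(b, z)
    s = 0.0
    while poisson.sf(n_crit - 1, b + s) < power:
        s += ds
    return s / s_model

b1, s1 = 3.67e-4, 0.13                       # window 1 expectations (Table 3)
print(critical_count(b1))                    # -> 2: two events give 5 sigma
print(round(model_discovery_factor(b1, s1), 1))
```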
Unblinding
No events passing the cuts were found in the experimental data. As shown in Table 3, this result is consistent with expectations, even more so if we account for the ∼5% probability of a jet with opening half angle ∼17° pointing towards Earth.

Table 3:
                            Window 1        Window 2        Window 3
Observed events             n_1 = 0         n_2 = 0         n_3 = 0
Expected signal s           0.13            0.060           0.020
Expected background b       3.67 × 10^−4    5.52 × 10^−4    5.55 × 10^−4
Limits on the Soft Jet Model
In the absence of more precise theoretical predictions on the time profile of the emission, quoting limits for particular time scales is the only viable way to constrain the soft jet model. Since n_1 = n_2 = n_3 = 0 and b_1 ≈ b_2 ≈ b_3, the signal upper limits s̄_i are identical for all three search windows to the fourth significant digit: s̄_1 = s̄_2 = s̄_3 = s̄ = 2.44 (at 90% CL). The upper limit Φ̄_ν^(90) on the neutrino flux in terms of the expected flux Φ_ν is given by the ratio of the signal upper limit to the signal expectation s, i.e. Φ̄_ν^(90) = (s̄ / s) Φ_ν. Due to the different signal expectations in each window, the flux upper limits depend on the assumed emission time scale τ_e. Therefore, we quote the limits on the soft jet model for canonical parameters (Tab. 1) separately for each emission time scale τ_e and at a reference energy of E_ν = 100 GeV: τ_e = 100 s, 0.058; τ_e = 1000 s, 0.17; τ_e = 10000 s. Each limit is only valid under the assumption that the entire neutrino signal is contained in the corresponding time window. In other words, SN 2008D could have emitted at most 19 (41, 122) times more neutrinos than assumed under the soft jet model with default parameters Γ_b = 3 and E_j = 10^51.5 erg. A higher flux would have been observed by IceCube with a probability of 90%.
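The quoted factors follow directly from dividing the 2.44-event upper limit by each window's signal expectation; a quick cross-check in Python:

```python
s_up = 2.44                                   # 90% CL upper limit, 0 events seen
for tau_e, s_exp in [(100, 0.13), (1000, 0.060), (10000, 0.020)]:
    print(f"tau_e = {tau_e:>5d} s: at most {s_up / s_exp:.0f} x the model flux")
# -> 19, 41, and 122 times the soft-jet model prediction
```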
The primary systematic uncertainty in these limits stems from a possible bias in the signal simulation, i.e. in the value of s. Systematics for IceCube 22 have been studied previously and lead to a ∼15% uncertainty in s, corresponding to a +17/−13 percent shift in the limits. Incorporating the uncertainty of the BDT classification response, that is, decreasing the signal prediction and increasing the background expectation by the corresponding uncertainty, resulted in a negligible shift of ∼0.5% in the limits.
Next, we wish to constrain the main parameters of the model, the kinetic energy release E_j and the Lorentz factor of the jet Γ_b. Due to the significant Γ_b dependence of the hadronic break energy, E_ν,cb^π/K(1) ∝ E_j^−1 Γ_b^5, and the radiative cooling break energy, E_ν,cb^π/K(2) ∝ Γ_b, the number and spectral distribution of produced neutrinos depend strongly on Γ_b (see Fig. 4). Moreover, the flux is scaled with E_j Γ_b^2, which accounts for the energy release and the beaming of the neutrino emission. At high boost factors, radiative cooling of mesons sets in at lower energies than hadronic cooling, i.e. E_ν,cb^π(1) > E_ν,cb^π(2) (E_ν,cb^K(1) > E_ν,cb^K(2)) for Γ_b ≳ 4 (Γ_b ≳ 9). To derive constraints on Γ_b and E_j, we calculated the signal expectations in the intervals Γ_b = 1.5–10 and E_j = 10^51–10^52 erg. As Fig. 5 shows, the less efficient cooling as well as the stronger beaming in more relativistic jets leads to a drastic increase in the signal expectation. Increasing Γ_b places more neutrinos at high energies ≳ 1 TeV where IceCube is more sensitive, though the corresponding reduction in the jet opening angle leads to a smaller probability of jet detection. The measured signal upper limits s̄_i = 2.44 and the signal predictions s_i(Γ_b, E_j) for each window can be used to constrain the jet parameters E_j and Γ_b through the requirement s_i(Γ_b, E_j) < s̄_i. Values of Γ_b and E_j not fulfilling this relation are ruled out at 90% CL. These limits are illustrated in Fig. 6 (caption: constraints on the jet parameters E_j and Γ_b, where E_51.5 = 10^51.5 erg; for each assumed emission time scale τ_e, the colored regions are ruled out at 90% confidence level). Finally, the scenario proposed by Koers and Wijers (2007) shall be examined briefly. Assuming that meson re-acceleration leads to a simple power law neutrino spectrum in the relevant energy range (roughly 100 GeV–10 PeV), the source spectrum can be approximated by an E^−γ law with a high-energy cutoff at 10 PeV. For the three values of the spectral index γ discussed by
Koers and Wijers, this analysis yields corresponding upper limits for an assumed emission time scale of τ_e = 100 s. For longer emission time scales, these limits scale as in Eq. (5).
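To illustrate the shape of the Γ_b–E_j exclusion scan described above, here is a toy Python sketch; the signal_expectation function keeps only the E_j Γ_b^2 normalization quoted in the text and deliberately ignores the strong additional Γ_b dependence through the break energies, so it is a crude stand-in for the full spectrum-times-effective-area calculation, not the analysis code.

```python
import numpy as np

S_UP = 2.44          # 90% CL upper limit on the signal (events)
S_CANONICAL = 0.13   # expected events for Gamma_b = 3, E_j = 10^51.5 erg (100 s window)

def signal_expectation(gamma_b, e_j):
    """Toy scaling with the jet parameters: only the E_j * Gamma_b^2
    normalization is kept; the break-energy dependence (~ Gamma_b^5 / E_j)
    is ignored, so this understates the sensitivity at large Gamma_b."""
    return S_CANONICAL * (e_j / 10**51.5) * (gamma_b / 3.0) ** 2

gammas = np.linspace(1.5, 10.0, 35)
energies = np.logspace(51, 52, 40)                       # erg
excluded = np.array([[signal_expectation(g, e) > S_UP for e in energies]
                     for g in gammas])                   # True where ruled out
print(f"{excluded.mean():.0%} of the scanned grid excluded (toy model)")
```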
Summary and Outlook
We have searched for high-energy muon neutrinos in coincidence with SN 2008D using data from the IceCube 22-string detector. Using a blind analysis optimized with experimental background and simulated signal data, we observed no events passing the cuts. From this non-observation, we have derived first constraints on the soft jet model for core collapse SNe under the condition that the predicted jet was pointing in the direction of the Earth. Given the strong dependence of the signal expectation on the model parameters, the non-detection of neutrinos places significant constraints on the principal model parameters. A two-dimensional parameter scan in Γ_b and E_j shows that the jet Lorentz factor is generally constrained to Γ_b < 4 for jet energies E_j > 10^51 erg. As mentioned above, the constraints quoted here only hold if the assumed jet of SN 2008D was pointing towards Earth.
IceCube is now operating in an additional mode, scanning online data for neutrino bursts, i.e. two nearly collinear neutrinos within 100 s, in real time. If a burst is detected, IceCube triggers optical follow-up observations searching for a SN in the corresponding direction (Franckowiak et al. 2009). Constantly monitoring the entire northern sky, this approach has the potential to generalize the constraints obtained from studying individual objects. | 2011-01-20T16:16:44.000Z | 2011-01-20T00:00:00.000 | {
"year": 2011,
"sha1": "97a8260de6e484e98e3511fd1cb361635a46e8d6",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2011/03/aa15770-10.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "7189ee640f5dfe5f40c7a5233dfad1d5b8d7ff14",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221167880 | pes2o/s2orc | v3-fos-license | Expression and Clinical Significance of Mucin Gene in Chronic Rhinosinusitis
Purpose of Review This review highlights the expression and regulation of mucin in CRS and discusses its clinical implications. Recent Findings Chronic rhinosinusitis (CRS) is a common chronic nasal disease; one of its main manifestations and important features is mucus overproduction. Mucin is the major component of mucus and plays a critical role in the pathophysiological changes of CRS. The phenotype of CRS affects the expression of various mucins, especially in nasal polyps (NP). Corticosteroids (CS), human neutrophil elastase (HNE), and transforming growth factor-β1 (TGF-β1) are closely related to the tissue remodeling of CRS and regulate mucin expression, mainly MUC1, MUC4, MUC5AC, and MUC5B. It is expected that CS, HNE, and TGF-β could be used to regulate the expression of mucin in CRS. At present, however, research on mucin is mainly focused on MUC5AC and MUC5B, which hampers the identification of new therapeutic targets. Summary Investigating the expression and location of mucin in the nasal mucosa and understanding the role of various inflammatory factors in mucin expression are helpful for working out the regulatory mechanisms of airway mucin hypersecretion. This is of great significance for the treatment of CRS.
Introduction
CRS is a clinically common otolaryngological disease that is prevalent all over the world [1]. CRS is classified into two phenotypes [2, 3••, 4••] based on tissue remodeling characteristics, referred to as chronic rhinosinusitis with nasal polyps (CRSwNP) and chronic rhinosinusitis without nasal polyps (CRSsNP), respectively [5]. CRS can occur in any age group and its morbidity rate increases with age. At present, the morbidity rate of CRS in China is 2~8% [6•, 7], and the number of CRS patients increases by 0.3% every year [8]. However, the pathogenesis of CRS remains unclear so far [9]. Although CRS is rarely fatal, it can cause nasal congestion, purulent rhinorrhea, reduction/loss of smell, facial pressure or pain, and mucosal edema [10][11][12][13]. The symptoms may continue for 12 weeks or more, imposing a substantial burden in terms of health, quality of life, and economic expenditure [14,15].
Studies have shown that mucin is the major component of airway mucus in patients with CRS and affects the rheological properties of mucus [16], leading to a series of pathophysiological changes, including submucosal gland hyperplasia, increased numbers and excessive secretion of airway goblet cells, and mucin hypersecretion, particularly of MUC5AC and MUC5B [17]. MUC5AC and MUC5B, as important components of respiratory secretions, are increased in CRS [18]. Studies find that MUC5AC plays a critical role in the inflammatory response of the respiratory tract [19], and many pro-inflammatory cytokines regulate goblet cell metaplasia and excessive secretion of MUC5AC [17].
It is speculated that uncontrolled inflammation is responsible for many of the manifestations and symptoms of CRS [20] and is closely related to its pathogenic mechanisms [21]. The pathogenesis of CRS is still not fully recognized in medical circles. Most scholars believe that hyperplasia and metaplasia of glandular cells and goblet cells and upregulated expression of mucin are three important pathogenic mechanisms of CRS. However, the mechanism is not well established and remains controversial [22], requiring more investigation. Nowadays, more and more studies have focused on tissue remodeling in CRS and have indicated that CRS is also distinguished by mucosal remodeling and that different subtypes of CRS exhibit different characteristics of tissue remodeling. Current evidence suggests a close relationship between inflammation and remodeling [5,23], and many inflammatory mediators play an important role in this relationship [24,25]. Therefore, the treatment of CRS is extremely challenging.
Structure of Mucins
Mucin is the major macromolecular component of airway mucus [26] and exists in the form of high-density glycosylated molecules with molecular weights ranging from 1 to 50 × 10^6 Da [16]. Goblet cells in the superficial epithelium and submucosal glands (SMG) can rapidly release mucus by exocytosis upon stimulation, forming a mucus layer in the airways [14]. The mucus layer is divided into two layers: the inner serous layer, called the sol phase, in which the cilia perform their recovery stroke, and the outer, more viscous layer, called the gel phase, which the cilia transport through their beating.
The currently accepted molecular model of mucin is a linear and flexible amino acid chain composed of subunits interconnected by disulfide bonds, with each subunit containing alternating highly glycosylated, proteinase-resistant regions and sparsely glycosylated, proteinase-sensitive regions (Fig. 1). The most distinctive feature of this model is the variable number of tandem repeat amino acid sequences in the highly glycosylated region, with the number of amino acids varying between 8 and 169. These tandem repeats are rich in serine or threonine, which represent potential sites for O-glycosylation. Within a single mucin gene, the tandem repeat domains vary in number owing to genetic polymorphism, which results in differences in the size of the mucin molecules. Different types of mucins have their own characteristic features. Membrane-tethered mucins are transmembrane proteins anchored to the apical surface of mucosal epithelial cells. Membrane-bound mucins share at least three common features: a transmembrane domain, a highly glycosylated N-terminal domain in contact with the outside environment that acts as a sensor receptor, and a short cytoplasmic tail (CT) that enables participation in intracellular signaling [31]. Secreted mucins have at least five major cysteine-rich domains (Fig. 2).
Expression and Role of Mucins in CRS
In recent years, domestic and foreign scholars have done much research on the secretion, expression, and distribution of mucin in the nasal mucosa. Seventeen mucins have been identified in the lower airways and 8 mucin genes have been reported in the upper airways, and more mucin genes are expected to be identified in future studies. Although various studies have investigated the hypersecretion of 20 mucins [32,33], the expression levels and distribution of several mucins, such as MUC6, MUC19, and MUC7, are still unclear (Table 1). The evidence is also not clear on whether the diagnosis of nasal polyposis is associated with alterations of mucin expression.
Aust et al. [34] and Kim et al. [35] reported the expression of MUC1, MUC2, MUC4, MUC5AC, MUC5B, MUC7, and MUC8 in normal nasal mucosa. Quantitative and localization changes in mucin expression were investigated by immunohistochemical analysis. The results showed that MUC5AC was identified only in surface epithelial goblet cells and MUC5B was expressed only at low levels in nasal sinus SMG. Quantitative analysis of mucin secretion in CRS by ELISA indicated that most mucins are derived from SMG; however, this method failed to identify which kinds of mucin. The results of Aust et al. [34] showed that the expression of MUC3 and MUC6 was weakened in nasal mucosa and nasal polyps, while Kim et al. [35] excluded MUC3 and MUC6 from their study of the expression of MUC1 to MUC8 in nasal polyps. All studies showed that MUC2 and MUC8 were more strongly expressed in nasal polyps than in normal nasal mucosa. Martínez et al. [36] found that the expression levels of MUC5AC mRNA and MUC5B mRNA were significantly increased in CRS compared with normal sinus mucosa. Studies on the expression of mucin in nasal polyps using probes directed against unique sequences of the mucin molecules (discontinuous repeat probes) show that the expression level of MUC5AC is four times that of MUC2 and twelve times that of MUC1. Using in situ hybridization, researchers compared the expression of mucins MUC1, MUC2, MUC3, MUC4, MUC5AC, MUC5B, MUC6, MUC7, and MUC8 in nasal polyps and normal sphenoid sinus mucosa. The results showed that all of these mucins were localized in nasal polyps, demonstrating that more mucin genes are activated during the development of nasal polyps. The major variations in mucin gene expression reported in nasal polyps are similar to those reported in CRS, suggesting that with the development of nasal polyps the most significant variations in mucin gene expression involve SMG rather than the epithelium [16]. Lee et al. [37] used reverse transcription polymerase chain reaction and immunohistochemistry to detect the expression and distribution of MUC8 in the maxillary sinus mucosa in CRS and found that the expression intensity and distribution density of MUC8 increased markedly. Jung et al. [38] examined the expression of mucin in the ethmoid mucosa of 8 patients with CRS and found MUC1 expression in 2 cases, MUC4 in 6, MUC5AC in 8, MUC5B in 5, MUC7 in 7, and MUC8 in 8, whereas MUC2 and MUC6 were not detected, showing that MUC4, MUC5AC, MUC5B, MUC7, and MUC8 are the major mucins in the ethmoid mucosa. Studies investigating the pattern of mucin expression in healthy nasal tissues and nasal polyps found that MUC1, MUC4, MUC5AC, and MUC5B are overexpressed in nasal polyps compared with healthy nasal tissues [39]. Groneberg et al. [40] confirmed that MUC5B is expressed in goblet cells and SMG in normal nasal mucosal epithelium, whereas MUC5AC is expressed in goblet cells in normal nasal mucosal epithelium. Sharma et al. [41] demonstrated that MUC5B is expressed in mucous cells, while MUC7 is expressed only in the serous cells of the submucosal glands of the trachea.
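For quick reference, the expression changes reviewed in this paragraph can be collected into a single illustrative mapping (a qualitative paraphrase of the cited studies, not measured values; the Python representation is ours):

```python
# Direction of mucin expression change in nasal polyps (NP) or CRS mucosa
# relative to normal nasal mucosa, as summarized from the studies above.
MUCIN_EXPRESSION_CHANGES = {
    "MUC2":   "up in NP",
    "MUC8":   "up in NP and maxillary sinus mucosa",
    "MUC3":   "down in NP",
    "MUC6":   "down in NP; not detected in ethmoid mucosa",
    "MUC5AC": "up in CRS; ~4x MUC2 and ~12x MUC1 levels in NP",
    "MUC5B":  "up in CRS",
}
for mucin, change in MUCIN_EXPRESSION_CHANGES.items():
    print(f"{mucin}: {change}")
```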
Under normal circumstances, the mucous glands of the nasal mucosa and the goblet cells in the epithelium secrete mucus, which keeps the nasal mucosa moist, maintains nasal physiological function, and prevents lesions. The goblet cells proliferate and differentiate to maintain their numbers in the nose and sinuses. When the nasal mucosa is chronically inflamed, the mucous glands of the mucosa and the goblet cells secrete excessive mucus, which reduces the mucociliary clearance rate [16,42], leading to mucus retention and aggravated inflammation in a vicious circle that can also lead to respiratory complications [43].
According to pathological observations [44], mucosal edema, thickening of the basement membrane, fibroblast proliferation, collagen deposition, and increased connective tissue are common forms of remodeling in CRS [5], which seriously affect normal sinonasal physiological function [45]. The two phenotypes of CRS [3••, 4••] have different immunoregulatory mechanisms and remodeling features. CRSwNP is a Th2-skewed response with high levels of IL-4, IL-5, and IL-13 [46], characterized by albumin deposition, stromal edema, and fibrosis. Conversely, CRSsNP is a Th1-skewed response with high levels of IFN-γ and TGF-β1 [47], characterized by goblet cell hyperplasia [48], fibrosis, excessive collagen deposition, and thickening of the basement membrane [24,49]. In short, CRSsNP shows neutrophil infiltration, whereas CRSwNP shows eosinophil predominance, and the symptoms of CRSwNP are known to be more severe than those of CRSsNP. Thus, the recurrence rate of nasal polyps remains very high even after surgical removal [50].
Tissue remodeling has been associated with glandular hyperplasia, inflammatory cell infiltration, and mucosal fibrosis [51••, 52] and is therefore considered a main influencing factor in CRS [53]. Tissue remodeling is the process by which damaged tissue is rebuilt through secretion of extracellular matrix during wound healing [54]. Many inflammatory factors may be involved in tissue remodeling; for example, transforming growth factor-β1 regulates tissue remodeling and induces myofibroblastic differentiation [55]. Activation of fibroblasts produces myofibroblasts and induces extracellular matrix deposition and remodeling [56]. Therefore, fibroblasts can serve as an important target cell for treating CRS. In principle, CRS could be controlled by inhibiting ECM accumulation, preventing nasal fibroblastic differentiation, and limiting tissue remodeling, but whether this can be achieved clinically requires further confirmation [57].
Regulation of Mucins in CRS
Researchers have found that respiratory pathogens and inflammatory cytokines regulate the expression of airway mucin [24]. Similarly, TNF, TGF-β, IFN-γ, and IL-1β upregulate MUC5AC transcription and protein levels in normal sinus epithelial cells treated with inflammatory mediators and pro-inflammatory cytokines such as neutrophil elastase, IL-4, IL-9, and IL-13. In contrast, in normal nasal epithelial cells, pro-inflammatory stimuli including TNF, IL-1β, LPS, IL-4, and PAF all downregulate the expression of MUC5AC [58]. One study demonstrated that a short course of oral steroids increases membrane-tethered mucins (MUC1 and MUC4) and that long-term intranasal steroid treatment is able to decrease the major secreted mucins (MUC5AC and MUC5B). The downregulation of secreted mucins could result from the ability of CSs to reduce goblet cell hyperplasia (GCH) and could account for the reduction of mucus production and rhinorrhea [59]. The histological features of sinus mucosal lesions in CRS are extensive inflammatory cell infiltration and cytokine release [60], particularly involving neutrophils [52,61,62], which have been reported to be associated with mucosal remodeling. Sampson et al. [63] showed that neutrophils play a role in remodeling by secreting mediators such as MMPs and TGF-β [64]. Lou et al. [65] observed inflammatory cell infiltration and used cluster analysis to classify the cytological phenotypes of CRSwNP, finding that the neutrophilic type accounted for 8%. Neutrophils regulate the excessive secretion of mucin through corresponding signaling pathways (such as tumor necrosis factor, α-convertase, TGF-β, and the epidermal growth factor receptor). Indirect effects of the pendrin protein on the recruitment of inflammatory cells (such as neutrophils) may also induce excessive production of mucin. In Chinese patients with CRS, pendrin may accelerate the excessive secretion of MUC5AC by promoting neutrophil infiltration and goblet cell proliferation. Pendrin is an anion-transporting protein and plays an important role in mucus production. However, the specific pathological mechanism by which pendrin increases MUC5AC secretion is still unclear and needs further research.
HNE is a serine protease [66] released by neutrophil degranulation [67]; it acts as a neutrophil-derived inflammatory mediator [68] and induces mucin overproduction and goblet cell metaplasia [69]. Many studies have shown a link between CRS and inflammatory granule proteins such as human neutrophil elastase (HNE) [70] (Fig. 3).
HNE has a significant effect on the immune status of the sinuses and the entire airway mucosa. Studies have shown that HNE induces the secretion of MUC5AC and MUC5B [71] and the expression of MUC1, MUC4, and MUC5AC in epithelial cells in vitro [72][73][74][75]. In addition, other studies have indicated that HNE can cause airway goblet cell hyperplasia. In vitro and in vivo experiments on airway epithelial cells revealed goblet cell metaplasia and proliferation together with increased MUC5AC expression and secretion [69,76]. Voynow and her colleagues [77] showed that HNE induces goblet cell metaplasia in the airway [78] and results in mucin overproduction [79]. Further studies [80,81] showed that HNE increases MUC5AC [82,83]. Recent clinical observational studies also suggest that HNE is a key risk factor for the onset and persistence of bronchiectasis [84,85]. Seshadri et al. [86] reported similar results in patients with CRS and nasal polyps. Although many studies have shown that HNE and TACE can induce goblet cell proliferation in lower respiratory tract diseases, this relationship is still unclear in CRS [87].
TGF-β (transforming growth factor beta) is a multifunctional cytokine with important immunomodulatory and fibrogenic properties. TGF-β regulates the inflammation and remodeling of CRS [88,89]; it is produced by airway inflammatory cells and infiltrates the bronchial mucosa [90]. Although five TGF-β subtypes have been identified, only three are found in the human body: TGF-β1, TGF-β2, and TGF-β3 [91]. TGF-β1, TGF-β2, and TGF-β3 are localized on chromosomes 19q13, 1q41, and 14q24, respectively. Their regulation occurs at the transcriptional level, but its function is not entirely known. The promoters of TGF-β2 and TGF-β3 have a classical TATAA box domain and a hormone-controlled CRE-ATF terminal domain in their structure [92] (Fig. 4). Studies have identified that the TGF-β1 and TGF-β2 genes are regulated by miR-532-3p and miR-500a-5p, respectively [93]. It has been reported that TGF-β from regulatory T cells (Tregs) is related to Treg production and to the inhibitory function of CD4 T cells [93]. The decrease of TGF-β1 expression and TGF-β receptor 2 in CRSwNP results in the absence of Tregs in CRSwNP. TGF-β1 not only affects Tregs but also destroys their integrity in CRSwNP, which partly explains the defect of the epithelial barrier in CRSwNP [93]. This evidence suggests that TGF-β is closely related to the pathogenesis of CRSwNP. Here, the roles of miR-532-3p and miR-500a-5p will be targets for further understanding the role of TGF-β in CRSwNP [89,94,95].
According to previous reports, TGF-β1 is a representative profibrotic cytokine and a major stimulator of fibroblast activation, inducing the activation and differentiation of fibroblasts into myofibroblasts. Activated fibroblasts or myofibroblasts have collagen contractile activity and can initiate tissue remodeling [96]. Therefore, TGF-β1 is closely related to the pathogenesis of CRS. TGF-β1 stimulates the differentiation of fibroblasts into myofibroblasts, and the differentiated myofibroblasts increase the expression of ECM components (such as fibronectin and collagen type I), which promotes abundant ECM deposition, leading to airway tissue remodeling [97,98]. (Fig. 4 caption: TGF-β participates in the regulation of airway inflammation and remodeling. Three TGF-β isoforms are found in the human body, TGF-β1, TGF-β2, and TGF-β3, localized on chromosomes 19q13, 1q41, and 14q24, respectively; the promoters of TGF-β2 and TGF-β3 possess a classical TATAA box domain and a CRE-ATF terminal region. TGF-β1 stimulates fibroblast activation, promotes differentiation into myofibroblasts and collagen synthesis, and drives ECM deposition; it binds the receptor TGFβ-IIR, activates Smad signaling, and transactivates the MUC5AC promoter through Smad4 and Sp1, while TGF-β1-Smad3/4 signaling can act as a negative regulator of MUC5AC transcription via MAPK14. MAPK14: mitogen-activated protein kinase 14; Sp1: specificity protein 1.) TGF-β is known to activate both Smad-dependent and Smad-independent pathways after binding to its receptor. Among the miRNAs miR-32-3p, miR-548e-3p, and miR-3149, miR-548e-3p is the only one involved in the regulation of the Smad2, Smad4, and MAPK1 genes in TGF-β signaling pathways. In sum, miR-532-3p, miR-500a-5p, and miR-548e-3p are the three most important miRNAs for the study of the TGF-β signaling pathway in CRSwNP [94-96]. TGF-β1 binds to the TGF-β-IIR receptor and activates the Smad pathway [99]. TGF-β1 signal transduction is regulated primarily by the Smad proteins: receptor-mediated Smad2, common mediator Smad3, and inhibitory mediator Smad7 [24]. TGF-β1 is considered an important factor transactivating the MUC5AC promoter in a synergistic manner through the Smad4 and Sp1 pathways [100]. However, studies have shown that a MAPK14 phosphatase-1-dependent inhibitor can act as a negative regulator of MUC5AC transcription under TGF-β1-Smad3/4 signaling [101]. In addition, the effect of the TGF-β2 subtype on MUC5AC expression is also controversial. Previous studies have demonstrated that TGF-β2 can decrease the expression of both MUC5AC and MUC5B in human bronchial epithelial cells. TGF-β2 can also partially reduce IL-13-induced MUC5AC production by binding to the MUC5AC promoter alone via the Smad4 signaling pathway [102]. Another study suggested that IL-13 induces TGF-β2 expression in vitro and that TGF-β2 promotes mucin expression in airway epithelial cells [101]. However, there is no direct and unequivocal evidence on whether TGF-β3 regulates MUC5AC expression, which needs further investigation.
The imbalance of TGF-β subtype activation and expression suggests that TGF-β participates in regulating airway inflammation and remodeling [103], especially mucus hypersecretion in airway epithelial cells [102]. New evidence suggests that TGF-β can enhance collagen synthesis during the tissue remodeling process [24]. Clinical studies [104] have also shown that patients with CRSsNP usually have higher expression levels of TGF-β compared with healthy individuals, whereas patients with CRSwNP have lower expression levels of TGF-β. Moreover, Nicholas et al. [53] demonstrated that decreased expression of TGF-β1 may be associated with edema formation in CRSwNP, whereas increased expression of TGF-β1 may play a critical role in the excessive tissue repair and fibrosis formation in CRSsNP. As reported previously, TGF-β1 concentration, mRNA expression, and the number of activated Smad2-positive cells (an indication of TGF-β activation) are significantly higher in patients with CRSsNP than in healthy individuals, whereas this is not observed in patients with CRSwNP [53,105]. At the same time, TGF-β1 may be affected by many factors in CRSwNP, and further research is needed to confirm this hypothesis.
Corticosteroids (CS) are the first-line treatment for NP; they have strong anti-inflammatory activity [106] and can reduce polyp volume and its inflammatory component. However, their effect on mucin hypersecretion has been controversial. Corticosteroids exert their anti-inflammatory effect by binding to the intracellular glucocorticoid receptor (GR), a ligand-induced transcription factor [106]. Upon ligand binding, the GR complex is transferred to the nucleus with the help of a number of proteins, such as nuclear localization signals and importins, and exerts its anti-inflammatory effects [107]. CS act in two ways: directly regulating the expression of mucin [108], and indirectly inhibiting pro-inflammatory cytokines [109]. Milara et al. suggested that downregulation of MUC1 in NP tissues is related to corticosteroid resistance in CRSwNP [106]. In contrast, MUC4 was significantly overexpressed in NP epithelial cells of corticosteroid-resistant patients compared with CRSwNP responder patients [110]. Further analysis showed that the cytoplasmic tail (CT) of MUC1 has anti-inflammatory effects on nasal polyp epithelial cells by inhibiting toll-like receptors (TLR) [31,111,112]. In addition, the formation of a MUC1-CT and glucocorticoid receptor alpha (GRα) protein complex can protect against GR-Ser226 hyperphosphorylation induced by TLR agonists and help mediate GRα nuclear translocation in response to corticosteroids, thereby exerting an anti-inflammatory effect [106]. Recent evidence suggests that after 2 weeks of oral administration, corticosteroids increase MUC1 expression in vitro and in human NP epithelium. However, the relationship between the efficacy of oral corticosteroids and the expression of MUC1, as well as the possible interactions among corticosteroids, GR, and MUC1, is not clear [106]. For example, it is not clear whether there is a direct interaction between MUC1-CT and GRα, because MUC1-CT can bind indirectly to the GRα chaperone complex. Recent evidence shows that MUC4 expression is upregulated in airways under inflammatory conditions and that corticosteroids reduce MUC4 expression in vitro [113][114][115]. However, the relationship between the efficacy of oral corticosteroids and the expression of MUC4, as well as the possible interaction among corticosteroids, GR, and MUC4, is likewise not clear [110]. The downregulation of MUC5AC and MUC5B levels by CSs may reduce mucus hypersecretion in NP; in this direction, the downregulation of MUC5AC after CS treatment was significantly related to the improvement of nasal urea in all NP patients [59].
Conclusion
In this review, we provide supporting evidence that the expression and location of mucin differ between normal nasal mucosa and CRS mucosa, depending on the phenotype of CRS, various inflammatory factors, and the type of mucin (secreted or membrane-bound). Most studies focus on the expression of mucin in nasal polyps versus the normal nose, and only a few compare mucin expression between the normal nose and CRS. Compared with normal nasal mucosa, the expression of MUC3 and MUC6 in nasal polyps is downregulated, the expression of MUC2 and MUC8 in nasal polyps is upregulated, the expression of MUC5AC and MUC5B in CRS is upregulated, and the expression of MUC5AC in nasal polyps far exceeds that of MUC1 and MUC2. At present, only MUC8 has been shown to be upregulated in maxillary sinus mucosa, MUC2 and MUC6 are not expressed in ethmoid sinus mucosa, and no other mucins have been characterized there. According to the existing results, MUC5AC is distributed in epithelial goblet cells and SMG, MUC5B is distributed in SMG, and the distribution of other mucins needs further study. The regulation of mucin depends on various inflammatory factors. In CRSwNP, there are few studies on the downregulation of MUC5AC and MUC5B by CS, which is not enough to draw firm conclusions and remains to be studied. At present, only HNE has been shown to upregulate MUC5AC and MUC5B and TGF-β to upregulate MUC5AC, with few other mucins examined. In conclusion, the role of various inflammatory factors in the regulation of mucin expression provides a good direction for clinical treatment.
Compliance with Ethical Standards
Conflict of Interest The authors declare that there are no conflicts of interest.
Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-08-19T14:51:39.956Z | 2020-08-18T00:00:00.000 | {
"year": 2020,
"sha1": "b7f62e8aea67e9f37b08e855e5dff8fdf69bfd2c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11882-020-00958-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7f62e8aea67e9f37b08e855e5dff8fdf69bfd2c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246234982 | pes2o/s2orc | v3-fos-license | Post Renal Transplant Infections: A Six Month Follow-Up Study from a Kidney Transplant Institute of North India
Transplantation returns the majority of patients to an improved lifestyle and an improved life expectancy compared with patients on dialysis. Infections are the most prevalent cause of morbidity and mortality in kidney transplant recipients, with more than 80% suffering at least one episode of infection in the first year. The method of data collection in this study was prospective hospital record analysis; all renal transplant recipients were screened preoperatively for the presence of any overt or occult infection. The predominant age group undergoing renal transplantation was 18–29 years. Urinary tract infections were the most common infections observed. The microorganisms involved were bacteria (36.4%), viruses (7.6%), fungi (3.7%), and parasites (5.5%). In urinary tract infection, E. coli followed by Klebsiella pneumoniae were the predominant bacterial isolates. Candida albicans was the most common fungus isolated. Among the gastrointestinal tract infections, Cryptosporidium was the most common protozoal isolate. Cryptococcus neoformans was isolated in two cases of meningitis. In this study the organisms causing infection during the immediate post-operative period have been categorized, which will give the treating physician a reasonable basis to suspect the system and cause of infection during a particular post renal transplant period. This study focused on evaluating the spectrum of infectious complications in post renal transplant recipients in the first 6 months of follow-up and on identifying the most common type of infection.
INTRODUCTION
Kidney transplantation offers a healthier life than hemodialysis for patients with end-stage renal disease (ESRD). Survival following transplantation is determined by various factors, including pretransplant co-morbidities, graft type, and degree of immunosuppression (Arend et al., 1997). Newly developed immunosuppressive drugs have led to a reduction in the mortality of renal transplant recipients (RTRs). Nevertheless, potent immunosuppression poses an extra risk of infectious disorders in transplant recipients. One quarter of RTRs develop a serious infection in the post-transplant period that causes allograft dysfunction (Ram et al., 2005). Bacterial infections are far more frequent than viral infections in RTRs. Nearly 13% of all patients transplanted between 1996 and 2000 in the US needed hospitalization for bacterial infections in the first 3 years, compared with 6% for viral infections (Dharnidharka et al., 2007).
In the period from one to six months after transplantation, infections with immunomodulating viruses, particularly cytomegalovirus, are most important. Cytomegalovirus accounts for two-thirds of febrile episodes during this period. In addition to the clinical syndromes induced by these viruses, their immunomodulating properties predispose to opportunistic infections with organisms such as Pneumocystis carinii, Listeria monocytogenes, and Aspergillus fumigatus (Pava, 1993; Fishman, 1995; Hadley and Karchmer, 1995; vanDenberg et al., 1996). Other infections generally occurring during this period include hepatitis, Herpes zoster, Herpes simplex, Mycobacterium tuberculosis, and Epstein-Barr virus (EBV), which can be complicated by the development of lymphoproliferative disorders. Recurrence or relapse of urinary tract infections can also occur (Fishman and Rubin, 1998). Most infections occur early in the post-transplantation course, with about two-thirds of renal transplant recipients (RTRs) experiencing an infection-related complication in the first year after transplantation. Approximately 70% of severe bacterial, fungal, and viral infections occur within 3 months of transplantation. This study focused on evaluating the spectrum of infectious complications in post renal transplant recipients in the first 6 months of follow-up and on identifying the most common type of infection.
Methodology
The methodology comprised: (A) collection of specimens; (B) microscopic examination; and (C) culture procedures and identification of organisms.
Collection of specimens
Urine, drain fluid, Foley catheter tip, and drain tip specimens were collected from all cases. According to signs and symptoms, blood, serum, sputum, oral scrapings, faeces, pus, and CSF were also collected.
OBSERVATION AND RESULTS
Infection in renal transplantation is a major and severe consequence of immunosuppression and is associated with high mortality. In order to prevent the occurrence of infection, one should know the most common types of infection in this particular group of patients. Hence, a detailed account of post renal transplant infections was made to find out the present trend of infections and their incidence in renal transplant patients. All the details presented are based on the post renal transplant follow-up of the patients, which mainly describes the prevalence of infection, the most common type of infection, and the most common organism involved (Table 1). All 75 cases underwent live related donor transplantation, and no case underwent cadaver transplantation. Urinary tract infections were the most common, followed by respiratory tract infections (Table 8). Bacterial infections were the most common infections in the post transplant period (Table 9).
URINARY TRACT INFECTION
Urinary tract infections (UTIs) are the most common bacterial infections requiring hospitalization in kidney transplant recipients, followed by pneumonia, postoperative infections, and septicemia. Women are at greatest risk for UTIs; other risk factors include deceased-donor transplant, kidney-pancreas transplantation with bladder drainage, prolonged catheterization, uretero-vesical stents, and an increased immunosuppressed state (Lorenz and Cosio, 2010). Urinary tract infections were the single most common infection occurring in renal transplant recipients, as noted in the present study and also reported by Rubin et al. (1981) and Jadav et al. (1992). Urinary tract infection constituted 54.5% (Table 8) of the total infections in this study. Umesh et al. (2007) reported a UTI incidence of 31.1% in transplant recipients. Jadav et al. (1992) observed an incidence of 53%, which is in agreement with the present study. Krieger et al. (1977) observed 61%; the reports of Leigh (1970) and Chan et al. (1990) range from 30–79% and 31%, respectively. The incidence of 51% reported by Ravi Kumar (1998) was close to that of the current investigation. A study of 28,942 primary renal transplant recipients from the U.S. revealed a cumulative UTI incidence of 17% during the first 6 months after transplantation; at 3 years the incidences were 60% for women and 47% for men. Kumar et al. reported isolates including Enterococcus faecalis, and similar organisms were also isolated by Paul D. Ellner (1987). E. coli followed by Klebsiella pneumoniae were the most frequently isolated organisms in this study. Takai et al. (1998) found that E. coli was the most common organism causing urinary tract infection. Gram-negative bacilli of the Enterobacteriaceae family were most frequently isolated in urinary tract infections in a study by Morz et al. (1993). Of the urinary tract infections, 3.3% were due to fungi, all caused by Candida albicans. Funguria has been attributed in part to the widespread use of broad-spectrum antibiotics, corticosteroids, antineoplastic agents, immunosuppressive agents, and urinary catheterization. Fluconazole is the drug of choice for susceptible Candida species; other azoles and echinocandins are not concentrated in the urinary tract and thus are less likely to be effective if infection is confined to the urinary tract (Pappas et al., 2004).
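The incidence figures compared above are cumulative-incidence proportions (new cases divided by the population at risk). As a minimal illustration of this arithmetic, assuming a hypothetical case count (the 30-case figure below is a placeholder, not data from this study):

```python
# A minimal sketch of the cumulative-incidence arithmetic behind the
# percentages compared above; the case count is a hypothetical placeholder.

def incidence_percent(cases: int, population_at_risk: int) -> float:
    """Cumulative incidence = new cases / population at risk, as a percent."""
    return 100.0 * cases / population_at_risk

# Example: a cohort of 75 recipients (the size of this study's cohort) in
# which a hypothetical 30 develop a UTI during follow-up.
print(f"UTI incidence: {incidence_percent(30, 75):.1f}%")  # -> 40.0%
```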
RESPIRATORY TRACT INFECTION
Respiratory tract infection occurred in 16.3% of the transplant recipients, compared with the 33% reported in the study by Jha et al. (1999).
Respiratory tract infection constituted about 14.5% of the total infections and was the second most prevalent infection in the current study (Table 8). This reported incidence is close to some previous studies, whereas other reports show significant variation: 8% of incidences were observed by Moore et al. (1983); 15% by Giri (1992), which is approximately the same as reported in the present study; 12.6% by Ravi Kumar (1998); and 33% by Jha et al. (1999). One study reported an 8% RTI incidence, making it the second most prevalent infection in renal transplant patients after UTI. Among the organisms causing bacterial respiratory infection was E. coli (Gee-Chen Chang, 2004).
GASTROINTESTINAL TRACT INFECTION
Infections of the gastrointestinal tract occurred in 12.7% of the total infected patients (Table 8).
SKIN AND SOFT TISSUE INFECTION
Skin and soft tissue infection accounted for 5.5% of the infections (Table 10). These comprised Staphylococcus aureus (one case), herpes (two cases), and Histoplasma (one case).
CYTOMEGALO VIRAL INFECTION
Cytomegalovirus infection is a recognized problem of the early post transplant period in renal transplant recipients (Boehter A., 1994). In the present study, one patient (1.8%) developed cytomegalovirus infection, which was detected in the first month after transplant (Table ).
CENTRAL NERVOUS SYSTEM INFECTION
Two patients were observed to develop meningitis, constituting 3.6% of the total infections. Giri (1992) reported an incidence of 0.7% in his study. The causative organism was found to be Cryptococcus neoformans. Both patients developed meningitis in the 6th month post transplant. Ravi Kumar (1998) also reported two cases of Cryptococcus neoformans meningitis in his study. It has been noted that Cryptococcus neoformans, the single most common cause of central nervous system infection in renal transplant patients, occurs almost exclusively in the late post transplant period (more than six months after transplant) (Rubin, 1993). | 2022-01-24T16:02:56.796Z | 2022-01-22T00:00:00.000 | {
"year": 2022,
"sha1": "98cc2ac26a682764259a60afe59cc30a5dc57981",
"oa_license": "CCBY",
"oa_url": "https://ijcsrr.org/wp-content/uploads/2022/01/21-22-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "13289d8ed231c16548e19867040f189bb2bb5bac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
237472798 | pes2o/s2orc | v3-fos-license | What Is Efficient Social Studies Instruction ?
Effective social studies instruction should aim to train young individuals who are interested, capable of participating in the learning process, capable of utilizing technology, have a good memory, look forward to the future with confidence, and transfer the knowledge they acquire at school to daily life. The aim of the present research is to determine the problems experienced in the instruction of the social studies course, based on teacher views, and the means for efficient social studies instruction. Thus, the case study method, a qualitative research design, was employed. Semi-structured interviews were conducted with 20 teachers employed in five middle schools in Elazığ province urban center during the 2016-2017 academic year, and the data were analyzed with descriptive analysis. It was determined that the inadequacy of course hours and the redundancy and complexity of the topics were the main problems experienced in social studies courses, and these were identified as factors that led to a lack of student interest. According to the views of the teachers, efficient social studies instruction requires a focus on current issues, the employment of available technologies, and enabling individuals to transfer content knowledge to life. It is concluded that efficient social studies instruction would be possible through the transfer of knowledge to real-life situations, the employment of technological tools, active student participation, and the simplification of textbooks and elimination of discontinuities between textbook topics.
INTRODUCTION
Education aims to train efficient, creative, and responsible citizens. In fulfilling this objective, the efficiency of the social studies course plays a significant role. Determining the problems experienced in social studies instruction would partially reveal the criteria for more effective social studies instruction. Social studies is a self-renewing, dynamic, and complex field of study developed to train individuals with universal and national good-citizenship values, who can adapt the skills identified in the curriculum to real life and plan a healthy future based on the knowledge acquired in various sciences (Çatak, 2016). Deveci (2015) aimed to determine the expectations of pre-service teachers from future curricula. The study findings demonstrated that pre-service teachers expected future curricula to train global citizens, allow individuals to acquire national and universal values, include applicable content, fully implement constructivism in learning and instruction processes, and adopt the process evaluation technique. They stated that the curricula should be revised based on the above-mentioned criteria.
Previous studies revealed that certain problems have been experienced in social studies courses. In one such study, teachers' knowledge of alternative evaluation methods was found to be limited.
Similarly, Meziobi et al. (2012) investigated the implementation of UBE (Universal Basic Education), a reform enacted in the state of Imo in Nigeria. The study findings demonstrated that social studies teachers were not adequately informed about the objectives of UBE. Furthermore, it was determined that educational institutions did not prepare teachers for the implementation of UBE.
Certain problems were identified in technology literacy as well. The findings of the study conducted by Sezer et al. (2020) demonstrated that teachers were at a traditional level in technology use. Furthermore, it was determined that the creative ideas and skills of the social studies teachers were inadequate in technology use and instructional material design, and they perceived that technology integration was limited to the classroom activities. Similarly, in a study conducted by Yontar (2019) to determine the digital literacy levels of pre-service teachers in classroom and social studies education programs, it was reported that the digital literacy levels of the participants were moderate.
Teacher perspectives towards students also affect the education and instruction process. In a study conducted by Keefer (2017), inadequate approaches of social studies educators were investigated in Florida, based on the views of five teachers. The study findings demonstrated that the perspectives of the social studies teachers towards impoverished students were improper. It was emphasized that social studies teachers should be trained to improve the asset-based knowledge of these students and to change inaccurate approaches that lower expectations and promote stereotypes. Fraenkel (1995) conducted a study to determine the attributes of efficient social studies teachers, attempting to identify the activities conducted in social studies classes and the attributes and behaviors of efficient teachers. The study findings revealed that efficient teachers tended to have high expectations of the students and to share these expectations clearly with them. They could vary instruction methods and classroom activities. It was also demonstrated that efficient teachers were influential with the students and were interested not only in the course content but also in the students' learning. According to Karademir and Akgül (2019), an efficient social studies teacher should be a model for the students, be aware of the current agenda, emphasize individual differences, and possess political and citizenship knowledge, a culture of democracy, and leadership skills.
According to Lennon (2017), it is important for teachers to conduct classroom discussions and critical thinking exercises in the social studies course. Critical thinking and student dialogue are powerful tools for young adults and children that should be attempted by educators. These are the skills required for the critical analysis of current issues and for learning and discussing these issues with a civilized attitude. Teachers are concerned about the possibility that the method could backfire. According to the author, America is changing, and although older teachers may have some reservations about the method, others should help them understand it.
Different instruction models should be employed based on the intelligence levels of students. Abas and Solihatin (2019) conducted a study to determine the impact of interpersonal intelligence on social studies learning outcomes. In the study, which included experimental and control groups and was conducted in an elementary school in Indonesia, the sample comprised 22 students with high and low interpersonal intelligence. The findings demonstrated that face-to-face instruction models had a better impact on students with high interpersonal intelligence, whereas the direct instruction model had a better impact on students with low interpersonal intelligence.
In a study conducted by Yalley (2017), it was emphasized that social studies teachers should acquire technological pedagogical knowledge in teacher training institutions in Ghana. It was reported that the Ministry of National Education should organize periodic workshops for social studies teachers on technology-assisted instruction methods and techniques. It was also stated that it would be more effective if social studies courses are integrated with technology, content and pedagogy to reform high school education.
Previous studies mainly concentrated on the problems associated with social studies instruction and only a few studies were conducted on how social studies should be instructed. The views of the teachers are important since they are the main actors in teaching.
Objective and Research Questions
The study findings are expected to guide curricula developers by analyzing the problems in the curriculum and determining the criteria required for more efficient social studies instruction. Thus, the following research problems were determined: "What are the problems that affect the efficiency of social studies instruction?" and "How a more efficient social studies course could be instructed?"
METHOD
In the study, the problems that affect efficient social studies instruction and the means of achieving more efficient social studies instruction were investigated in depth. For this purpose, the case study approach, a qualitative research design, was employed. The main characteristic of the qualitative case study is the in-depth investigation of one or several cases. In other words, the factors that affect a case are investigated with a holistic approach, and the study focuses on how these factors affect and are affected by the relevant case (Yıldırım & Şimşek, 2005, p. 77).
The Study Group
The study group was assigned with the criterion and convenience sampling methods. Easily accessible middle schools in Elazığ provincial center were preferred. Teacher participants were assigned based on the criterion sampling method using the following criteria: employment in a public middle school as a social studies teacher and voluntary participation in the study. The criterion sampling method could be described as the inclusion in the study of all cases that meet a set of predetermined criteria. The criteria could be determined by the researcher, or a list of predetermined criteria could be employed (Marshall & Rossman, 2014).
The study was conducted with 20 social studies teachers employed in five middle schools in Elazığ province urban center during the 2016-2017 academic year. Teachers employed in Elazığ, Istiklal, Gazi, Mezre, and Dumlupınar Middle Schools participated in the study. The criterion sampling method was employed to assign the participants, with employment in a middle school as a teacher as the inclusion criterion. The study aimed to investigate the experiences of the teachers with the social studies curriculum. The participant demographics are presented in Table 1.
As seen in Table 1, 11 interviewed teachers were female, 9 were male, 7 were Faculty of Education graduates, and 13 graduated from other faculties.
Data Collection Instrument
Initially, official approval was obtained from the Provincial National Education Directorate to conduct the study. During the dates approved for the research, schools were visited to collect the study data. The author visited the schools and personally delivered the semi-structured interview forms to the participating teachers. They were asked to complete the interview within 40 minutes.
In the study, only volunteering teachers were included. Initially, the form that included the research questions was applied to five teachers in a pilot scheme. The form used in the pilot scheme was reviewed and revised with the assistance of a field expert. The data collected in the pilot scheme were not included in the main study. In the main study, questions on participant demographics and the semi-structured interview questions were included in the survey form. Open-ended questions were posed to the teachers on their experiences with the social studies curriculum. Social studies teachers stated their views on the semi-structured interview form. The following questions were included in the form: 1. What are the problems that affect efficient social studies instruction? 2. What is efficient social studies instruction? 3. How could the efficiency of social studies instruction be improved?
Data Analysis
The study data were analyzed with the descriptive analysis technique. The survey forms completed by the social studies teachers were read first. Each question posed to the teachers was considered a theme. The collected data were initially read with a holistic approach. The questions were then treated as themes, and the answers were classified based on similarities. The interrater reliability coefficient was 96%.
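The paper reports the interrater reliability coefficient without stating the formula; a common approach in theme coding is simple percent agreement between two coders (agreements divided by total coding decisions). A minimal sketch, assuming hypothetical theme labels rather than the study's actual codes:

```python
# A minimal sketch of percent agreement for theme coding; the labels are
# hypothetical placeholders, not the study's actual codes.

rater_a = ["hours", "crowding", "topics", "interest", "hours", "material"]
rater_b = ["hours", "crowding", "topics", "material", "hours", "material"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100.0 * agreements / len(rater_a)
print(f"Interrater agreement: {percent_agreement:.1f}%")  # -> 83.3% here
```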
The study was explained in detail, and the views of the teachers were also included as direct quotes. Direct quotes are presented based on the survey form order such as T1, T2, etc.
FINDINGS
The study findings are presented under three categories: the problems experienced in social studies instruction, views on efficient social studies instruction, and views on achieving more efficient social studies instruction.
The Problems Experienced in Social Studies Instruction Based on Teacher Views
The problems experienced in social studies instruction based on the views of teachers are presented in Table 2.
As seen in Table 2, according to more than half of the teachers, the problems experienced in social studies courses were insufficient course hours and crowded classrooms. Examples of teacher views on this topic are presented below.
"The intense curriculum (in the 7 th and 8 th grades), the didactic nature of the course content, insufficient discussion, brainstorming, and project work due to crowded classes." [T6] "Non-implementation of methods other than instruction and question and answer in the course, insufficient course hours, lack of book reading habits among the students." [T2] "Insufficient course hours, the lack of thinking and interpretation skills." [T11] According to eight teachers, one of the problems in the social studies course was too long and complex topics. Examples of teacher views on this topic are presented below. According to two teachers, abstract topics were among the problems experienced in the social studies course.
"Problems are experienced in the instruction of abstract topics. Visual material, videos and slides could not be used due to the lack of a smart board in the classroom." [T20] "Sometimes, there are problems in the instruction of abstract topics. In these cases, we need to talk much more, and students are bored. They even sleep sometimes in the 7 th and 8 th grades." [T5] According to two teachers, rote-based instruction without practical applications was one of the problems in the social studies course.
"Non-implementations of methods other than lectures and question and answer in classroom instruction." [T2] "The students who do not repeat the topic forget even the previous class. I think the biggest problem is the fact that it is a forgotten course. Repetition is necessary in each class hour." [T10] In short, the majority of the teachers claimed that insufficient class hours, crowded classes, high number of complex topics, student disinterest, unavailability of instruction material were the most significant problems.
Efficient Social Studies Instruction Based on Teacher Views
Based on teacher views, effective social studies instruction should entail addressing current issues, utilizing technologies, applying the knowledge learned in the classroom to real life, and presenting the topics accurately. Findings on efficient social studies instruction based on the views of the teachers are presented in Table 3.
As seen in "It could be effective as long as it is instructed at the student level. When the student starts to love and understand the course, the course achieves the objectives." [T13] According to three teachers, efficient social studies instruction entails active students. Examples of teacher views on this topic are presented below.
"Social studies that the student learns actively and transfer the knowledge to daily life." [T12] "An environment where the student is a participant and a researcher." [T20] According to three teachers, efficient social studies instruction means students who can criticize, interpret and think. Examples of teacher views on this topic are presented below.
"I can define it as a curriculum and activity aimed at a student who thinks, gives examples, criticizes, analyzes, interprets and raises student awareness rather than providing information. It also entails practicing the knowledge, providing opportunities to develop exemplification skills." [T6] "To train individuals who can think, criticize, comment and apply what they learn in life." [T11] According to two teachers, efficient social studies instruction means less topics. Additionally, two teachers stated that if the topics are presented well, the social studies course will be efficient. The views of these teachers are presented below.
"It could be effective as long as it is instructed at the student level. When the student starts to love and understand the course, the course achieves the objectives." [T13] "We sometimes experience difficulties in explaining abstract topics, which results in longer lectures which bores the students. 7 th and 8 th grade students even sleep in the class." [T5] According to the teachers, efficient social studies instruction should focus on current issues, employ technologies, allow the transfer of the learned knowledge to life, and the topics should be presented well and the students should be active.
How to Ensure More Efficient Social Studies Instruction Based on Teacher Views
According to the teachers, more efficient social studies instruction could be achieved when social studies knowledge is transferred to life, technological tools and equipment are employed, textbooks are simplified, topical inconsistencies are eliminated, and students are active. These findings are presented in Table 4.
As seen in Table 4, according to seven teachers, more efficient social studies instruction is possible when knowledge is transferred to life. Examples of teacher views on this topic are presented below.
"The textbooks should be revised, and the learned knowledge should be applicable." [T16] "It will be more effective when the students are actively involved and (the course) is associated with daily life." [T9] According to six teachers, more efficient social studies instruction would be possible when technological tools are employed. Examples of teacher views on this topic are presented below.
"It should be a program where current tools are employed, and it should be sensitive to social events." [T15] "I can state that the participation should be high in classrooms, where all narrative techniques and technological tools are used in addition to the textbooks." [T5] According to four teachers, textbooks should be simplified, and topics should be consistent. Examples of teacher views on this topic are presented below.
"Textbooks should be simplified".
[T19] "There should be plenty of examples, activities, and topics should be connected." [T4] According to four teachers, more efficient social studies instruction is possible when the students are active. Examples of teacher views on this topic are presented below.
"A curriculum with fewer units is preferred, leading to a more effective social studies instruction by further involving the students in the process." [T3] "For a more efficient course, it is necessary to leave the old patterns behind and make the course more enjoyable. For example, the last favorite is to put questions on the board, to conduct a quiz show with these questions and to reward the students based on the results; the students love and demand this application." [T10] According to two teachers, more efficient social studies teaching is possible when the research and interpretation skills of the students are developed. Teacher views on this topic are presented below.
"Research and interpretation skills should be developed".
[T11] "The instruction should focus on research…where the students are active." [T12] According to two teachers, more efficient social studies instruction is possible when the teacher instructs the course effectively. An example teacher view is presented below.
"Although the instruction is student-centered, course success is not possible unless the teacher instructs it effectively." [T17] According to the teachers, more efficient social studies instruction was possible when the course knowledge could be transferred to real-life situations, when technological tools and material are used, books are simplified and include consistent topics, and when the students are active.
DISCUSSION AND CONCLUSION
The current study aimed to determine the problems experienced in social studies instruction and the criteria for efficient social studies instruction based on the views of the teachers. According to the majority of the interviewed social studies teachers, the most prevalent problem they experienced in the social studies course was insufficient course hours. The present study findings were consistent with those reported by Kırtay (2007) and Koçak (2017).
One of the problems experienced in social studies instruction, as determined in the present study based on teacher views, was the high number and complexity of the curriculum topics; the teachers suggested simplifying the textbooks. The study findings were consistent with those reported in the research conducted by Dinç and Doğan (2010). Furthermore, in the present study, students' disinterest in the course was another problem experienced in the social studies course.
In the study, teachers cited the unavailability or lack of instructional material as one of the difficulties experienced in the social studies course. The study findings were consistent with the studies by Önen et al. (2011), Kahriman (2008), Çelikkaya (2011), and Kuş and Çelikkaya (2010).
According to the teachers, efficient social studies instruction entails a focus on current issues, technology use, transfer of learned knowledge to life, active student participation, training students who can criticize, interpret, and think, a lower number of topics, and good instruction. The present study findings were consistent with the findings of the research conducted by Wright and Grenier (2009) and Şimşek (2016).
A study was conducted by Curry and Cherner (2016) with two well-trained social studies teachers who employed technological pedagogical content knowledge. According to these teachers, effective social studies teachers are those who prioritize instruction and students and conduct effective, planned, well-prepared, and organized instruction. Furthermore, technological pedagogical and field knowledge are also necessary for active instruction. Even when teachers have good pedagogical and field knowledge, they should not neglect technological components. Literacy skills are also important, and less successful students can be supported by literacy training.
Technology literacy is significant for contemporary education and social studies instruction. In a study conducted by Sarı and Kartal (2018), it was observed that individually innovative pre-service social studies teachers adopted a more positive attitude towards technology use. Similarly, the findings of the study conducted by Çöl and Karaca (2020) revealed that technology use in the social studies course improved the academic achievements of the students, led to more permanent learning, and increased active student participation. In a study conducted by Karaman and Akbaba (2020) that aimed to determine the fantasies of middle school 5th-grade students about information technologies in the social studies course, the findings demonstrated that the students dreamed of using technological tools and equipment in the classroom while robot teachers instructed the course.
The present study findings demonstrated that, for more efficient social studies instruction, knowledge should be transferred to daily life, textbooks should be simplified, topics should be consistent, students should participate actively in class, and students' research and interpretation skills should be improved. Based on the study findings, the following recommendations could be suggested: 1. Weekly course hours could be increased, or course content could be reduced. 2. School material inventories should be improved as soon as possible, and further studies should be conducted to improve the availability of instructional material. 3. Instead of the rote-based examination system, systems that focus on thinking and interpretation skills should be studied. 4. Instruction methods that ensure active student participation in the classroom should be developed. 5. An education system in which students, not parents, research, internalize, and interpret knowledge is required. | 2021-09-09T20:40:55.259Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "66b820535e0f57694876bd34a41275cf5c361003",
"oa_license": "CCBY",
"oa_url": "https://www.journals.aiac.org.au/index.php/IJELS/article/download/6791/4700",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e34a86a0deea3eab390db247f9678766343ef7ba",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
229364165 | pes2o/s2orc | v3-fos-license | Acetoacetate enhancement of glucose mediated DNA glycation
Acetoacetate (AA) is a ketone body that generates reactive oxygen species (ROS). ROS production is impacted by the formation of covalent bonds between the amino groups of biomacromolecules and reducing sugars (glycation). Glycation can damage DNA by causing strand breaks, mutations, and changes in gene expression. DNA damage could contribute to the pathogenesis of various diseases, including neurological disorders, complications of diabetes, and aging. Here we studied the enhancement of glucose-mediated DNA glycation by AA for the first time. The effects of AA on the structural changes and on the Amadori and advanced glycation end product (AGE) formation of DNA incubated with glucose for 4 weeks were investigated using various techniques, including UV-Vis, circular dichroism (CD), and fluorescence spectroscopy, and agarose gel electrophoresis. The results of UV-Vis and fluorescence spectroscopy confirmed that AA increased DNA-AGE formation. The NBT test showed that AA also increased Amadori product formation in glycated DNA. Based on the CD and agarose gel electrophoresis results, the structural changes of glycated DNA were increased in the presence of AA. The chemiluminescence results indicated that AA increased ROS formation. Thus AA has an activating role in DNA glycation, which could enhance the adverse effects of glycation under high-glucose conditions.
Introduction
Reactive oxygen species (ROS) are free radicals derived from molecular oxygen and are highly reactive. The recognition of their increasing roles in the pathogenesis of many diseases has led to significant areas of investigation [1]. A free radical is an independent molecular species having an orbital with a single unpaired electron [2]. Free radicals can be derived either from internal sources, such as normal essential metabolic pathways, or from external sources, such as industrial chemicals and foods [3]. The most important oxygen-containing free radicals in the body, which cause diseases, are the hydroxyl radical, superoxide anion radical, hydrogen peroxide, singlet oxygen, hypochlorite, and the nitric oxide and peroxynitrite radicals [4]. One of the endogenous sources of free radical production is β-oxidation of free fatty acids in the liver, which leads to the formation of ketone bodies [5]. There are three types of ketone bodies generated in the body, namely 3-β-hydroxybutyrate, acetone, and acetoacetate (AA) [6]. High levels of ketone bodies are produced in diabetes, childhood hypoglycemia, growth hormone deficiency, intoxication with alcohol or salicylates, several inborn metabolic disorders [7], fasting, and prolonged exercise [8]. However, among the ketone bodies, only AA can generate ROS [5].
Glycation is a spontaneous process which occurs between the amino groups of biomacromolecules and the carbonyl groups of reducing sugars [9]. During the first and second stages of this reaction, reversible compounds, including Schiff bases and Amadori products, are produced [10,11]. In the late stage, these compounds produce "advanced glycation end products" (AGEs) via oxidation, dehydration, and cyclization, which is irreversible [12]. AGEs contribute to the pathogenesis of many diseases, including diabetes complications [13], Parkinson's [14], Alzheimer's [15], and aging [16].
The amino group of nucleic acids can be modified by reducing sugars to form DNA-AGEs [17], which affect DNA structure and function [18].
We recently reported that AA can enhance human serum albumin AGE formation [31]. With regard to the free radical production by AA and its increased concentration in diabetes, the aims of this study were: (i) to determine the effects of AA on DNA glycation in the presence of glucose, and (ii) to characterize the structural changes and the formation of Amadori products and DNA-AGEs using UV-Vis, fluorescence, and CD spectroscopy and agarose gel electrophoresis.
Chemicals
Calf thymus DNA, agarose, ethidium bromide, acetoacetate (AA), sodium dihydrogen orthophosphate, disodium hydrogen phosphate, EDTA, nitro-blue tetrazolium (NBT), sodium chloride, and Tris-HCl were obtained from Sigma-Aldrich (USA). β-D-Glucose was purchased from Fluka. All reagents were of analytical grade and were used as received without further purification.
Preparation of AGE-DNA
For the preparation of glycated products, DNA (25 μg/mL) was incubated with D-glucose (130 mM) in sodium phosphate buffer (200 mM; pH 7.4) in the presence or absence of AA at a concentration of 3.125 mM [32] under sterile conditions. After 4 weeks of incubation, the mixtures were dialyzed against sodium phosphate buffer for 48 h to eliminate unbound molecules. The samples were then kept at −30 °C. The control was DNA incubated without glucose and AA.
UV-vis analysis
The UV-Vis analyses of all samples were carried out according to the procedures described in the literature [33] on a Cary spectrophotometer (UV-2100, Rayleigh, China) over the 200-600 nm spectral range, using a quartz cuvette with a 10 mm path length.
Amadori product measurements
Amadori products were measured with the colorimetric fructosamine assay, which is based on the reaction of nitro-blue tetrazolium (NBT) with ketoamines [26], as follows: 100 μL of sample (25 μg/mL) was added to 100 μL of NBT reagent (250 μM in 0.1 M carbonate, pH 10.8) and incubated at 37 °C for 45 min. The absorbance was recorded at 525 nm against a blank using a BioTek Power Wave XS2 plate reader (USA).
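As an illustration of the blank correction implied by recording the absorbance "against a blank," a minimal sketch with hypothetical A525 readings (not values from this study):

```python
# Minimal sketch of blank-corrected A525 readings for the NBT assay; the
# raw values below are hypothetical placeholders, not measured data.

raw_a525 = {"control-DNA": 0.120, "DNA+Glc": 0.210, "DNA+Glc+AA": 0.300}
blank_a525 = 0.050  # hypothetical reagent-only blank

for name, absorbance in raw_a525.items():
    print(f"{name}: corrected A525 = {absorbance - blank_a525:.3f}")
```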
Fluorescence analysis
The fluorescence studies were carried out on a spectrofluorophotometer (RF-5301-PC, Shimadzu, Japan) at excitation wavelengths of 290 nm and 400 nm. The fluorescence emission intensities were measured from 10 nm above the excitation wavelength up to 600 nm. The presence of AGEs in the samples was confirmed by AGE-specific fluorescence compared with control DNA [27].
Agarose gel electrophoresis
The electrophoresis analyses were performed using a 0.8% agarose gel at 30 mA for 2 h in TAE buffer (40 mM Tris-acetate, 2 mM EDTA, pH 8.0). The bands were detected under UV after staining with ethidium bromide [34].
UV-visible spectroscopy
The UV-Vis spectra of all samples, including control-DNA, DNA + AA, DNA + Glc + AA, and DNA + Glc, are shown in Fig. 1. These results indicated that the absorbance of DNA + Glc + AA was higher than that of DNA + Glc, by approximately 44.7%.
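The percentage comparisons here and in the Conclusions are relative increases over the DNA + Glc reference. A minimal sketch of the arithmetic, using hypothetical absorbance values chosen only to reproduce the reported 44.7% (not readings from Fig. 1):

```python
# Minimal sketch of the relative-increase calculation; the absorbance
# values are hypothetical placeholders, not data from Fig. 1.

def percent_increase(reference: float, sample: float) -> float:
    return 100.0 * (sample - reference) / reference

a_dna_glc = 0.38       # hypothetical absorbance of DNA + Glc
a_dna_glc_aa = 0.55    # hypothetical absorbance of DNA + Glc + AA
print(f"Increase: {percent_increase(a_dna_glc, a_dna_glc_aa):.1f}%")  # -> 44.7%
```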
Circular dichroism (CD) profiles
The CD analyses of all samples are shown in Fig. 2. The control-DNA revealed a negative peak of −5.9 mdeg at 245 nm and a positive peak of +16.4 mdeg at 275 nm. The DNA + Glc, DNA + AA, and DNA + Glc + AA samples had negative peaks of −3.9, −4.9, and −3.2 mdeg at 245 nm and positive peaks of +12.7, +14.6, and +10.5 mdeg at 275 nm, respectively. The NBT results showed that the amount of Amadori product in DNA + Glc + AA was higher than in the other samples; it was increased by approximately 42.5% compared with DNA + Glc. Fig. 4 shows the fluorescence spectra of all samples at 290 and 400 nm excitation wavelengths. These results showed that the fluorescence emission intensity of DNA + Glc + AA was higher than that of the other samples at 290 nm (Fig. 4a). The same results were also obtained at the 400 nm excitation wavelength (Fig. 4b).
Agarose gel electrophoresis
The electrophoresis analyses of all samples are shown in Fig. 5. DNA + Glc + AA had a higher mobility compared with the other samples. Fig. 6 shows the amount of ROS formation based on the chemiluminescence procedure. The chemiluminescence intensity of DNA + Glc + AA was higher than that of the DNA + Glc and DNA samples.
Discussion
Diabetes is a disease in which blood glucose levels remain high. High blood glucose levels increase the glycation process and AGE formation [36]. The glycation process can mediate DNA structural changes, strand breaks, and mutations [37]. Some agents, such as ROS-generating chemical compounds, could increase AGE formation, causing further DNA damage. Thus, the study of glycation-promoting agents allows the identification of their harmful effects on health. Based on the UV-Vis results (Fig. 1), the absorbance of DNA + Glc + AA increased compared with DNA + Glc. The UV-Vis absorbance of DNA + Glc rises because of the partial unfolding of the double helix and the exposure of chromophoric bases [32,33]. Thus, AA increases the partial unfolding of DNA incubated with glucose. Our findings also indicated a new peak (300-400 nm) for DNA + Glc + AA that was higher than that of DNA + Glc. Peaks in the 300-400 nm range confirm DNA-AGE formation [38], which was further enhanced in the presence of AA. In comparison with the CD spectra of control-DNA, the negative and positive parts of the CD spectra of DNA + Glc increased and decreased, respectively (Fig. 2). These results are in agreement with a similar study in the literature [38]. Moreover, based on the CD results, the structural changes of DNA in the presence of Glc + AA were greater than with Glc alone. Thus, AA could cause enhanced structural changes and DNA-AGE formation. These results are consistent with the UV-Vis results. According to the NBT test, the amount of Amadori product in DNA + Glc + AA was also increased compared with DNA + Glc. These results were likewise consistent with the CD and UV-Vis findings.
The fluorescence results revealed that the emission of DNA + Glc + AA was increased compared with the DNA + Glc sample. As previously noted, glycated DNA shows AGE-specific fluorescence at excitation wavelengths of 290 and 400 nm [20,39]. The presence of AA increased the fluorescence intensity, indicating increased DNA structural changes. Based on the electrophoresis results, the mobility of DNA + Glc was higher than that of native DNA, in agreement with a previous study [39]. The mobility of the DNA + Glc + AA sample was higher than that of DNA + Glc. Thus, the presence of AA could cause more structural changes and damage in the DNA. According to the chemiluminescence results, ROS formation was increased in the DNA + Glc + AA samples, more than in the DNA + Glc samples (Fig. 6). These results are in agreement with previous studies of the glycation process [40] and of ROS formation by AA [5].
Collectively, our results demonstrate that the structural changes, Amadori products, ROS, and AGE formation of glycated DNA were increased in the presence of AA. We recently reported that 3BHB, as a ketone body, inhibited DNA glycation by glucose [41]. In contrast, the results of the present study indicate that AA increases DNA glycation by glucose. These differences in the behaviors of AA and 3BHB as ketone bodies are mainly attributed to their structural differences. While AA can generate ROS, 3BHB cannot [42]. Thus, AA can cause ROS formation and activation of the DNA glycation process.
Conclusions
Acetoacetate is a ketone body whose level increases during ketosis in diabetic patients. Our results establish that AA has an activating effect on glucose-mediated DNA glycation. According to our findings, AA shows a significant activating effect on DNA glycation because of its ability to produce ROS, and ROS production induces DNA glycation. AA also had a significant effect on DNA structural changes and AGE formation, as demonstrated by the changes in UV-Vis and fluorescence spectrometry. The UV-Vis absorbance (at 363 nm), the fluorescence intensity (at 290/440 nm), the amount of Amadori products (absorbance at 525 nm), and the amount of ROS increased by approximately 44.7%, 24.1%, 42.5%, and 49.6%, respectively, when AA was incubated with the glucose and DNA solution. Increased DNA-AGE formation and structural changes can lead to the enhancement of diabetes-mediated complications associated with hyperglycemia.
Declaration of competing interest
The authors declare no conflict of interest. | 2020-12-24T05:07:26.057Z | 2020-12-17T00:00:00.000 | {
"year": 2020,
"sha1": "70ff191c072b217dd003e1d0b7cec0ad54cf57a4",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bbrep.2020.100878",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70ff191c072b217dd003e1d0b7cec0ad54cf57a4",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
155248094 | pes2o/s2orc | v3-fos-license | Implementation of International Instruments in Indonesian Legislation in the Field of Conservation of Fish Resources
The intention contained in laws and regulations concerning the conservation of fish resources is that activities should lead to the protection of fish resources as a whole. To date, the utilization of fish resources has been more dominant than their protection and preservation, with consequent impacts on aquatic ecosystems. The purpose of this research is to examine international agreements on the conservation of fish resources that have been ratified and implemented in policies and legislation, so that they become guidelines for behavior and bring about change in society. This is descriptive qualitative research using data drawn from earlier study results and recent library documents. The results show that the provisions of the CCRF (Code of Conduct for Responsible Fisheries) became a legal basis for formulating provisions on the responsible management of fish resources. Sustainable fisheries zones were never regulated in the regulation on the management of conservation areas. The Fisheries Act has yet to implement the provisions of the 1995 UN Fish Stocks Agreement relating to the conservation and management of fish resources in the Indonesian EEZ and on the high seas. The Fisheries Act therefore requires refinement, considering that Indonesia is a member of two regional fisheries management organizations and has ratified the 1995 UN Fish Stocks Agreement.
INTRODUCTION
The Unitary State of the Republic of Indonesia is the largest archipelago in the world, with 17,508 islands, a coastline of 81,000 km, and a sea area of 5.8 million km² (75% of the total area of Indonesia).
Thus the potential of the coast, islands, and oceans is enormous, consisting of biological and non-biological resources.
Considering the function of the sea, it has long been utilized by the people of Indonesia, from generation to generation, to sustain life. The sea is highly strategic, and Indonesia is rich in natural resources, which constitute basic capital for all fields of national development. This basic capital of natural resources must be protected, nurtured, and preserved so that its benefits are optimal for the welfare of Indonesian society; fish resources in particular have extremely high potential as a source of useful, protein-rich food. Although it is acknowledged that this potential is very promising if fisheries are managed professionally, the potential is sometimes neglected in favor of personal or business interests.
National legal arrangements for protecting fisheries resources include, among others, the Fisheries Law. Fish resource conservation takes the form of the protection, preservation, and utilization of fish resources, including ecosystems, species, and genetics, to guarantee their existence, availability, and continuity, and to maintain and improve the quality, value, and diversity of fish resources.
According to one report, Indonesia is estimated to suffer losses of up to USD 2 billion, equivalent to nineteen trillion Rupiah, per year, and 22% of the world's illegal fishing production comes from Indonesia. Other sources mention far greater losses for Indonesia, of between 30 and 40 trillion Rupiah every year due to illegal fishing.
As an archipelago, Indonesia's sea waters cover more than two-thirds of its total area. Indonesia's marine wealth includes coral reefs whose diversity represents 17.59% of the world's coral reef diversity. Indonesia also has 37% of the world's marine species and 30% of the world's mangrove area. Given the extent of these waters and this wealth, future economic growth and the welfare of Indonesian society will largely be determined by how well marine, coastal, and small-island resources are managed.
Based on the available data, Indonesia's marine fisheries potential is estimated to reach 6,167,940 tons annually. Because Indonesia lies at a crossroads between two continents (Asia and Australia) and two oceans (the Pacific and Indian Oceans), it is prone to illegal fishing (fish theft). Prone areas include the Natuna Sea, the Arafura Sea, the waters north of North Sulawesi (Pacific Ocean), the Makassar Strait, and West Sumatra (Indian Ocean).
Illegal fishing destroys the habitats of the sea's biological resources. It damages coral reefs, seagrass meadows, and numerous marine ecosystems; even small fish are taken, and breeding grounds are damaged. If this is left unchecked, the future of the fishery can no longer be assured, yet the Government still turns a blind eye to the impacts. This not only deprives the State of fishery revenue (foreign exchange) reaching trillions of Rupiah, but can also damage marine ecosystems, including coral reefs: according to the Ministry of Environment, 61% of Indonesia's coral reefs have suffered damage and 15% are categorized as critical. Hence the need for political will on the part of the Government in handling and managing marine wealth wisely, remaining in favor of the environment and of Indonesian society at large.
In principle, the normative aim contained in Government Regulation No. 60 of 2007 on the Conservation of Fish Resources is that activities should lead to the protection of fish resources as a whole. To date, the utilization of fish resources has been more dominant than protection and preservation, with consequent impacts on aquatic ecosystems. Based on the principles contained in the 1972 Stockholm Declaration, countries are free to manage their environment but must pay attention to its preservation. This reflects the environment-oriented paradigm of modern environmental law, which replaces the use-oriented paradigm of classical environmental law, concerned merely with the utilization of the environment.
Based on the above, fish resource management has yet to provide sustainable livelihood enhancement and fair management of the fishery through optimal supervision and law enforcement. Thus Government Regulation No. 60 of 2007 on the Conservation of Fish Resources and other regulations have tended not to be implemented optimally, with resulting impacts on marine ecosystems and on the country's economy. This is of particular concern because, if management is not conducted wisely, the aquatic ecosystems that sustain life now and in the future will be damaged. The authors therefore restrict the problem to how legal protection of fish resource conservation can be achieved in Indonesia.
The importance of research on the protection and conservation of fish resources lies in avoiding excessive utilization that pays no attention to protection and preservation and may thereby affect the welfare of the people. Through a series of further studies conducted in earnest, answers are sought on the premise that legal protection of fish resource conservation cannot override the primary consideration of ensuring community welfare in the future. A change in community attitudes toward fish resource conservation is therefore required.
Convention on the Law of the Sea 1982
In relation to the utilization and management of fish resources, the 1982 Law of the Sea Convention contains provisions relating to the fisheries law that applies in the various maritime zones within and beyond the limits of national jurisdiction. The provisions related to the conservation of fish resources are set out in Article 61 of the 1982 Law of the Sea Convention.
Based on the foregoing, coastal States are required to take conservation measures by determining the allowable catch of fish resources within their exclusive economic zones. Coastal States are required to ensure, on the basis of the best scientific evidence available, that fish resources are not overexploited, in order to guarantee the maximum sustainable yield. Article 4 paragraph (1) of Law No. 24 of 2000 concerning International Treaties states that the Government of Indonesia makes international agreements (with one or more states, international organizations, or other subjects of international law) on the basis of consent, and the parties are obliged to execute the agreement in good faith. 1
The Convention on Biological Diversity 2010
One of the results of the Rio+20 Conference on sustainable development stressed the need for the conservation and sustainable utilization of marine resources to tackle poverty, ensure food security and livelihoods, and improve economic growth. Of the 283 points of agreement, 19 relate to marine affairs and fisheries, and three points are especially important, namely conservation, fisheries management, and subsidies. The importance of marine conservation includes the protection of the sea and its sustainable utilization; point 177 expressly refers to the Convention on Biological Diversity 2010 target of conserving 10 percent of coastal and marine areas by 2020. Since Indonesia's sea area reaches 3.1 million km² (310 million hectares), we must conserve 31 million hectares. To date our marine conservation area is about 15.4 million hectares (5 percent), and 20 million hectares is targeted by 2020 (Kompas, 12 July 2012). 2
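The hectare figures above follow directly from the unit conversion 1 km² = 100 ha and the 10 percent CBD target; a minimal sketch of the arithmetic, using only the figures stated in the paragraph:

```python
# Minimal sketch of the conservation-target arithmetic from the paragraph
# above (1 km^2 = 100 ha; CBD target = 10% of sea area).

SEA_AREA_KM2 = 3.1e6                  # Indonesia's sea area in km^2
sea_area_ha = SEA_AREA_KM2 * 100      # -> 310 million hectares
target_ha = 0.10 * sea_area_ha        # -> 31 million hectares

print(f"Sea area: {sea_area_ha / 1e6:.0f} million ha")
print(f"10% conservation target: {target_ha / 1e6:.0f} million ha")
```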
The 1993 FAO Agreement to Promote Compliance with International Conservation and Management Measures by Fishing Vessels on the High Seas
The 1993 FAO Compliance Agreement is an integral part of the FAO Code of Conduct for Responsible Fisheries. It specifically sought to address the problems of reflagging and flags of convenience associated with vessels engaged in IUU fishing. The agreement subsequently developed into an instrument regulating the obligations of flag states over all of their fishing vessels.
The 1995 United Nations Agreement for the Implementation of the Provisions of the United Nations Convention on the Law of the Sea of 10 December 1982 relating to the Conservation and Management of Straddling Fish Stocks and Highly Migratory Fish Stocks
The 1995 United Nations Fish Stocks Agreement consists of 50 articles and two annexes. Under Article 2, the agreement aims to ensure the long-term conservation and sustainable utilization of straddling fish stocks and highly migratory fish stocks through effective implementation of the relevant provisions of the Convention.
According to Article 6 paragraph (1) of the agreement, states shall apply the precautionary approach widely to the conservation, management and exploitation of straddling fish stocks and highly migratory fish stocks. Paragraph (2) states that countries shall be more cautious when information is uncertain, unreliable or inadequate. Under the precautionary principle, the absence of adequate scientific evidence shall not be used as a reason for postponing or failing to take conservation and management measures for fish resources. At the regional level, the precautionary approach is implemented in the Convention on the Conservation of Antarctic Marine Living Resources (CCAMLR), the first international treaty to adopt the precautionary approach and the ecosystem approach as basic principles for the conservation and management of marine living resources.
The 1995 FAO Code of Conduct for Responsible Fisheries
In 1995 the Food and Agriculture Organization of the United Nations (FAO) issued the Code of Conduct for Responsible Fisheries (CCRF). The CCRF contains guidelines, principles and international standards applicable to responsible fisheries activities. Its main purpose is to ensure effective fisheries conservation and management measures that take into account environmental, biological, technical, economic, social and commercial aspects. This international legal instrument is voluntary in nature.
International Plan of Action to Prevent, Deter and Eliminate Illegal, Unreported and Unregulated Fishing (IPOA-IUU), 2001
The IPOA-IUU was adopted as a non-binding international instrument within the framework of the CCRF, in response to concerns raised at the 23rd session of the FAO Committee on Fisheries in February 1999. Its purpose is to prevent, deter and eliminate IUU fishing by providing all states with comprehensive, effective and transparent guidelines for devising measures, in cooperation with the competent regional fisheries management organizations. The international treaties mentioned above are concluded in writing, and through them participating states legally bind themselves to act in a certain way. 3
Principles and Objectives of Fish Resource Conservation
For Indonesia, fisheries play an important role in national development. This is due to several factors, including the many fishermen whose livelihoods depend on capture fisheries and the fisheries potential that Indonesia possesses. As an archipelagic nation, Indonesia has vast waters containing a wide range of resources. In general, natural resources can be grouped into renewable resources (flows) and non-renewable resources (stocks). Renewable natural resources are associated with biological resources, whereas non-renewable natural resources are associated with non-biological resources. The abundant natural resources contained in Indonesian waters can be managed and utilized for the community and for the benefit of the nation and the state.

The preamble (dictum) of Law No. 31 of 2004 affirms that the waters under the sovereignty and jurisdiction of the Republic of Indonesia, the Indonesian exclusive economic zone, and the high seas (to the extent provided by national law) contain fish resources and potential for fish cultivation, and constitute a blessing of God Almighty entrusted to the Indonesian nation, which holds the philosophy of Pancasila and the 1945 Constitution, for utilization towards the prosperity of the Indonesian people. In the framework of national development based on the archipelagic outlook, fish resource management must be conducted as well as possible, on the basis of justice and equity in utilization, with emphasis on expanding employment opportunities and improving the standard of living of fishermen and fish farmers, while securing the future sustainability of fish resources and their environment. 4

The regulation of fisheries management in Indonesia refers to the Code of Conduct for Responsible Fisheries (CCRF) established by the FAO. This reflects the international expectation that all marine and fisheries products be safe to consume and produced with regard to aspects of sustainability. 5

The reservation or designation of an area as an aquatic conservation area aims to harmonize the economic needs of society with the desire to preserve natural resources. Aquatic conservation areas have consequently been put to many purposes, such as research, nature protection, preservation of species and genetic diversity, tourism, environmental education, and the protection of specific natural or cultural elements. Ecosystem conservation is urgently needed, since disruption of coastal and aquatic ecosystems will disturb all the habitats found around coastal areas and small islands. The core environmental problem is the reciprocal relationship between living things and their environment. When this reciprocal relationship runs in an orderly way, it forms a single, mutually influencing ecological system, commonly known as an ecosystem. Because the environment comprises both living and non-living components, every ecosystem is formed by living and non-living components that interact regularly as a unity and mutually influence each other (interdependence).
Referring to the provisions of Articles 2 and 3 of the government regulation above, the normative goal contained therein is, in principle, that there be activities directed at the protection of fish resources as a whole, in particular restrictions on large-scale capture. If this is not done wisely, populations of diverse fish species will become endangered; preventing this is the aim and purpose for which the provision was made.
Implementation of International Instruments in Indonesian Legislation on the Conservation of Fish Resources
Since 1990, international attention has been fixed on unsustainable changes in patterns of human production and consumption. These patterns are marked by lifestyle changes, among them higher levels of fish protein consumption. Global fish consumption has risen to an average of 17 kg per capita, yet fish stocks in the ocean have not increased. The sea, as the world's fisheries resource, plays an important role in meeting human protein needs; humans depend on the stable potential of the oceans to produce 20% of the protein they consume. With world demand for fish continuing to increase, fishing as a traditional use of the sea has also changed significantly. These changes are supported by the development of more advanced fishing technology and updated fishing practices. Fishing activities can be conducted in any part of the ocean, in accordance with the arrangements of each country, especially the coastal states and island nations bordering the sea.
The characteristics of privately run conservation areas include: good care and a watchful eye, so that no violations occur; repair or improvement over time of both the environment and the protected species; and a strong sense of ownership in the community, which helps maintain the sustainability of the area.
National development is a mandate of all the people of Indonesia and should be implemented together by the central government, local governments and all elements of the nation. Development carried out by the whole Indonesian nation, in all aspects of public life, has gradually been able to improve the welfare and sense of security of the majority of the community. The management of natural resources (SKA) should be implemented with continued attention to the sustainability of environmental functions. One matter that must become a common concern is that the purpose of natural resource management may not contradict the purpose of the state as stated in the fourth paragraph of the preamble to the 1945 Constitution of the Republic of Indonesia (UUD 1945 NRI), as well as in Article 33 paragraph (2), which provides that "branches of production which are important for the country and which affect the life of the people shall be controlled by the state", and paragraph (3), which provides that "the earth and water and the natural resources contained therein shall be controlled by the state and used for the prosperity of the people". There is only one planet Earth, and development must pay attention to "the rights of future generations", so natural resource management should be implemented on a sustainable basis and utilized to accelerate development and strengthen national resilience. 6

The utilization of Indonesia's marine resource potential will not be sustainable without conservation efforts. Marine resource conservation is a series of actions inseparable from protection, preservation and utilization. It includes the management of aquatic conservation areas, of fish species and of fish genetics, to assure their availability and sustainability. First, management is controlled by a zoning system. There are four zones in an aquatic conservation area: the core zone, the sustainable fisheries zone, the utilization zone, and other zones. It should be noted that the sustainable fisheries zone had never before been regulated in conservation area management rules. Local governments are given the authority to manage conservation areas within their territory. This is aligned with the mandate of Law No. 23 of 2014 on Local Governance, Article 27 paragraph (1) of which states that provinces are given the authority to manage the natural resources existing in their sea areas. Under this new paradigm, earlier fears of reduced access to fishing in conservation area waters have now been removed.
Indonesia's involvement in the management and development of marine fisheries, particularly in the Indian Ocean, is carried out through its membership of two regional fisheries management organizations, namely the Indian Ocean Tuna Commission (IOTC) and the Commission for the Conservation of Southern Bluefin Tuna (CCSBT), and through its ratification of the 1995 United Nations Fish Stocks Agreement.
International cooperation in addressing the conservation of fish resources has been spurred by changes in local communities and in the international community. Changes in society gain legal force once they are formalized in legislation, and Indonesian lawmakers have done this to provide legal certainty for the application of fish resource conservation. The implementation of international and national provisions in Indonesian legislation, in the interest of responsible fish resource management, is as follows. Fisheries management in the fisheries management area of the Republic of Indonesia is directed at achieving optimal and sustainable benefits while guaranteeing the sustainability of fish resources. The provisions of the CCRF became the legal basis for formulating the provisions on responsible fish resource management. The requirements and international standards for managing fish resources beyond the limits of national jurisdiction are regulated in the 1982 Law of the Sea Convention, the 1995 UN Fish Stocks Agreement, the CCRF and the IPOA-IUU.

What needs to be asked is whether the Fisheries Act and its implementing regulations can support the conservation and management of migratory fish stocks in accordance with the international standards set out in all of these international instruments. The Fisheries Act has not yet implemented the provisions of the 1995 UN Fish Stocks Agreement relating to the conservation and management of fish resources in the Indonesian EEZ and on the high seas. The Fisheries Act therefore requires refinement, considering that Indonesia has become a member of two regional fisheries management organizations and has ratified the 1995 United Nations Fish Stocks Agreement. 8

As a coastal state, Indonesia needs to revise the Fisheries Act by adding a chapter on compatible measures for the conservation and management of marine fish resources under Indonesian jurisdiction and on the high seas bordering the Indonesian EEZ. The concept of conservation encompasses renewing (renew), reusing (reuse), reducing (reduce), recycling (recycle) and recovering value (refund). The implementation of international treaties in national law is expected to bring change to the conservation of fish resources. Even where international treaties have not yet been incorporated into legislation, rules that are in line with them should serve as guidelines, so that conservation activities are directed at the sustainability of fish resources for the benefit of generations to come.
CONCLUSION
The legal concept of fish resource conservation is established by reference to international instruments adapted to the local wisdom of society, so as to ensure the rule of law and the sustainable management of fish resources for the benefit of generations to come.
Seokwoo Lee, 'The Legal Assessment of the Illegal Fishing Activities of Chinese Fishing Vessels: A Focus on Detention of Foreign Vessels', Korean Journal of International and Comparative Law 1 (2013). | 2019-05-17T14:38:55.583Z | 2015-02-02T00:00:00.000 | {
"year": 2015,
"sha1": "494a53ee2fbedfc1e64452d350d4541cf9aa96b0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14724/jh.v3i1.31",
"oa_status": "GOLD",
"pdf_src": "Neliti",
"pdf_hash": "ba5a4d7cc09d9a6e773343d52b36d744d76e29e2",
"s2fieldsofstudy": [
"Environmental Science",
"Law"
],
"extfieldsofstudy": [
"Economics"
]
} |
34190851 | pes2o/s2orc | v3-fos-license | Muscle Mass and Training Status Do Not Affect the Maximum Number of Repetitions in Different Upper-Body Resistance Exercises
Method: Thirty participants, 15 trained (T) and 15 untrained (UT) men, volunteered to participate in this study and attended on six separate occasions, each separated by at least 48 h. In the first three sessions, familiarization and 1RM tests were performed. The last three sessions were designed to assess the maximum number of repetitions (RM's) at 60%, 75%, and 90% of 1RM. The exercise order and intensities performed in each session were randomized. Muscle action velocity for each repetition was controlled by an electronic metronome.
INTRODUCTION
One of the most important variables to consider in the development of the resistance training prescription is exercise intensity [1], which is considered one of the program variables that dictate the magnitude of training-induced neuromuscular adaptations [2]. Depending on an individual's training experience and current level of fitness, proper loading encompasses one or more of the following loading schemes: increasing load based on a percentage of 1RM, increasing absolute load based on a targeted repetition number, or increasing loading within a prescribed zone (e.g., 8-12 RM) [3].
Based on the inverse relationship between the amount of weight lifted and the maximum number of repetitions (RM's) performed [4-6], the prescription of resistance exercise intensity is usually based on a percentage of the one repetition maximum test (%1RM) [7]. Previous studies investigating the relationship between RM's and %1RM have shown that different factors influence the RM's during resistance exercises: the amount of muscle mass used [4,5,8], the training status of participants [5,8,9] and the movement velocity of each repetition [10,11]. Hoeger et al. [5] reported that at 80% of 1RM an individual can perform 10-15 RM's for exercises such as the bench press, leg extension, lat pulldown, and leg press (i.e., multi-joint exercises), while at the same intensity the same individual can perform only 6-8 RM's for the leg curl and the arm curl (i.e., single-joint exercises).
Regarding the training status of participants, controversial results have been found. Pick and Becque [9] reported that trained individuals are able to perform more repetitions in the squat exercise at 85% 1RM compared to untrained subjects. In contrast, Shimano et al. [8] showed that untrained participants performed a significantly greater number of repetitions than trained subjects during bench press at 90% 1RM, although no differences between groups were found at 60 or 80% 1RM for bench press. In addition, previous studies have demonstrated that the RM's can vary with different movement velocities, with a higher number of RM's produced under faster conditions, an effect that becomes greater at lower intensities [10-12]. Although the relationship between the percentage of 1RM and the RM's performed is affected by movement velocity, the above-mentioned studies did not control this variable [4,5,8,9]. Consequently, studies on the relationship between RM's and %1RM that controlled movement velocity are scarce [13]. In addition, controversial results have been shown in studies comparing this relationship in trained and untrained participants [8,9]. Therefore, the purpose of the present study was to compare the number of repetitions performed at 60%, 75%, and 90% of 1RM in 4 different upper-body free weight exercises, controlling the movement velocity of each repetition. The working hypothesis was that, using the same percentage of 1RM, trained and untrained subjects would perform a similar number of repetitions in four upper-body exercises involving different muscle groups.
Participants
Thirty participants (15 trained and 15 untrained men) volunteered to participate in this study. The trained group (T) had been engaged in regular resistance training for the previous 2 years, at least three times per week, using free weight exercises. The untrained participants (UT) were physically active but had not engaged in any resistance-training program before the study. All participants were free of any musculoskeletal, bone and joint, or cardiovascular diseases. Moreover, the participants reported that they were not taking anabolic steroid medications. In order to participate in this study, all subjects were informed about the procedures and potential risks and gave their written informed consent. The study was approved by the local Research Ethics Committee and is in accordance with the Declaration of Helsinki. Sample size was calculated based on a previous study [8] using PEPI software (version 4.0), which determined that a sample size of n = 15 subjects would provide a statistical power of 90% for a correlation coefficient of 0.8 for all variables.
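The authors report using PEPI for this calculation; as a rough cross-check, the sample size required to detect a correlation of r = 0.8 with 90% power can be approximated with the standard Fisher z formula. A minimal sketch follows (the two-sided alpha of 0.05 is our assumption, as the paper does not state it):

```python
# Approximate n needed to detect r = 0.8 with 90% power (Fisher z method).
import math
from scipy.stats import norm

r, alpha, power = 0.8, 0.05, 0.90
z_a = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha (~1.96)
z_b = norm.ppf(power)           # value for the desired power (~1.28)
C = math.atanh(r)               # Fisher z transform of the target correlation

n = ((z_a + z_b) / C) ** 2 + 3
print(math.ceil(n))             # ~12, so n = 15 per group comfortably exceeds it
```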
Experimental Design
In order to evaluate the efficacy of resistance exercise prescription based on percentages of 1RM, the number of repetitions performed at 60%, 75%, and 90% of 1RM in 4 different upper-body free weight exercises (i.e., bench press, barbell triceps extension, unilateral dumbbell elbow flexion, and unilateral bent knee dumbbell row; Fig. 1) was determined. The loads corresponding to 60, 75, and 90% of 1RM were used due to their potential to maximize adaptations in local muscular endurance, hypertrophy, and muscular strength, respectively [7]. Participants attended on six separate occasions, each separated by at least 48 h. The tests and experimental protocols were performed at the same time of day to avoid variations related to circadian rhythms, and under the same conditions (i.e., no resistance exercise for at least 24 h and no stimulants for 12 h before each experimental session). In the initial session, body mass, height and body composition (using a 7-site skinfold prediction technique [14]) were assessed. After that, participants completed a familiarization in order to practice the resistance exercises and standardize their technique and range of motion. The next two sessions were performed in randomized order (i.e., exercise sequence and percentages) to determine 1RM in the four upper-body free weight exercises: bench press, barbell triceps extension, unilateral dumbbell elbow flexion and unilateral bent knee dumbbell row. Participants warmed up for 5 min on a cycle ergometer and performed specific movements with 1 set of 10 repetitions at a light load (50% of the first test load) in the exercise tests. Two 1RM tests were performed each day (bench press or barbell triceps extension, and unilateral dumbbell elbow flexion or unilateral bent knee dumbbell row), with a ten-minute recovery between exercises. Each subject's 1RM was determined in no more than five attempts, with a three-minute recovery between attempts. Participant performance characteristics are reported in Table 1. The last three sessions were designed for the maximum number of repetitions (RM's) tests, in which three different percentages of 1RM were used in each exercise (i.e., 60%, 75%, and 90% of 1RM). Each day the participants performed one exercise; the exercise order and intensities performed in each session were randomized. Before the RM's tests, participants warmed up for 5 min on a cycle ergometer and performed a warm-up set of ten repetitions using 50% of 1RM [15,16]. Thereafter, each participant performed a maximal attempt using the load corresponding to the selected percentage of 1RM. Movement velocity for each muscle action (i.e., concentric and eccentric) was two seconds and was controlled by an electronic metronome (MA-30, KORG; Tokyo, Japan). If an individual could not maintain the controlled velocity, the exercise was interrupted and the test was ended and considered completed.
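To make the loading scheme concrete, the sketch below shows how the absolute session loads follow from a measured 1RM; the 100 kg value is purely hypothetical and not taken from the study:

```python
# Derive absolute test loads from a measured 1RM for the three intensities used.
def session_loads(one_rm_kg: float, fractions=(0.60, 0.75, 0.90)) -> dict:
    """Return the load for each tested %1RM, rounded to the nearest 0.5 kg."""
    return {f"{round(f * 100)}%": round(one_rm_kg * f * 2) / 2 for f in fractions}

print(session_loads(100.0))  # {'60%': 60.0, '75%': 75.0, '90%': 90.0}

# With the 2 s concentric + 2 s eccentric tempo enforced by the metronome,
# each repetition lasts 4 s, so e.g. 10 repetitions imply 40 s of work per set.
```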
Statistical Analysis
Results are reported as mean ± standard deviation (SD). Normal distribution of data was checked with the Shapiro-Wilk test. Comparisons of performance characteristics between groups were performed using Student's independent t-tests. Statistical comparisons of the number of repetitions among the different exercises at each load, and between groups, were tested using a mixed-model two-way ANOVA, with repeated measures across the different exercises at each percentage evaluated. Significance was accepted when p < 0.05, and the SPSS statistical software package (version 22.0) was used to analyze all data.
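The authors ran these tests in SPSS. As an illustration only, the following Python sketch mirrors the described analyses; the DataFrame `df` and its column names ('subject', 'group', 'exercise', 'pct_1rm', 'reps') are hypothetical, as are the `onerm_*` arrays:

```python
# Hedged sketch of the analysis pipeline described above (not the authors' code).
import pingouin as pg
from scipy import stats

# Normality check (Shapiro-Wilk), e.g. for one cell of the design
w, p_norm = stats.shapiro(df.loc[(df.group == "T") & (df.pct_1rm == 60), "reps"])

# Between-group comparison of performance characteristics (independent t-test)
t, p_ttest = stats.ttest_ind(onerm_trained, onerm_untrained)

# Mixed two-way ANOVA: exercise (within-subject) x training status (between),
# run separately within each %1RM, mirroring the repeated-measures design
aov = pg.mixed_anova(data=df[df.pct_1rm == 60], dv="reps",
                     within="exercise", subject="subject", between="group")
print(aov.round(3))
```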
RESULTS
The performance characteristics presented in Table 1 show higher 1RM values for all exercises in trained subjects (p < 0.001), which confirms the different training status of the participants in the present study.
The number of repetitions performed at 60, 75, and 90% of 1RM on the bench press, barbell triceps extension, unilateral dumbbell elbow flexion and unilateral bent knee dumbbell row are described in Table 2. There was no significant difference between T and UT in any of the exercises and loads evaluated. The number of repetitions during the row exercise was significantly lower than in the other exercises at 60 and 75% 1RM. However, when comparing exercises involving different amounts of muscle mass (i.e., bench press vs. triceps extension, and dumbbell row vs. elbow flexion), the exercises that use greater muscle mass (i.e., bench press and dumbbell row) produced the same number of repetitions at each percentage as the exercises involving less muscle mass.
DISCUSSION
The primary finding of the present study was that, independent of the muscle group exercised, there was no difference in the number of repetitions performed in different upper-body free weight exercises at 60%, 75% and 90% 1RM. In addition, the training status of subjects did not affect the number of repetitions performed at each percentage of 1RM. To the best of our knowledge, this was the first study designed to control all of the main factors that could potentially interfere with the number of repetitions performed at a given percentage of 1RM (i.e., training status, amount of muscle mass, and movement velocity of each repetition). The present results show that the RM's performed at a given percentage of 1RM, when movement velocity is controlled, do not depend on the absolute muscle mass involved in free weight upper-body exercises. However, previous studies have found that exercises involving large muscle mass allow a higher number of RM's compared with small muscle groups [4,5,8].
Hoeger et al. [5] investigated the relationship between RM's at different percentages of 1RM and reported that at 80% of 1RM an individual can perform 10-15 RM's for the bench press, leg extension, lat pulldown, and leg press exercises, while at the same intensity the individual can perform 6-8 RM's for the leg curl and arm curl exercises. Likewise, Shimano et al. [8] attempted to determine the RM's that trained and untrained men could perform at 60, 80, and 90% of 1RM in 3 different exercises: hack squat, bench press, and arm curl. The authors also concluded that the RM's performed during free weight exercises are influenced by the amount of muscle mass used. It has been shown that faster velocities allow a higher number of RM's to be performed during resistance exercises [10,11,16]. However, the abovementioned studies did not describe how the movement velocity of each repetition was controlled, which can potentially explain these controversial findings. Because movement velocity influences the number of repetitions achieved, it is not possible to properly compare different exercises, or different intensities, without velocity control. In addition, the number of repetitions performed at a given intensity influences the mechanical overload and, consequently, the neurophysiological, hormonal, and metabolic responses, which can also influence the strength gains and muscle hypertrophy resulting from resistance training [17]. Besides, it has been suggested that increasing the repetition duration without changing the number of repetitions per set could increase the metabolic response provided by resistance training [18].
Another interesting result of the present study was that the training status of participants did not affect the number of repetitions performed at each percentage of 1RM. Previously, Pick and Becque [9] demonstrated that trained individuals performed a higher number of RM's in the squat exercise at 85% 1RM compared to untrained individuals. In contrast, Shimano et al. [8] showed that untrained participants performed a significantly greater number of repetitions than trained participants during bench press at 90% 1RM, although no differences between groups were found at 60 or 80% 1RM for bench press. Methodological differences, especially regarding the movement velocity of each repetition during the RM's tests and the use of different resistance exercises, could explain these discrepancies.
Our findings have an important implication for resistance exercise intensity prescription, since a specific percentage of 1RM can be used to target the same maximum number of repetitions in different free weight upper-limb exercises. Second, the movement velocity of each repetition throughout each set should be standardized in order to achieve the same goal using the same percentage of 1RM, facilitating exercise prescription and the management of a group of athletes or recreational weight lifters. However, some limitations should be taken into account when interpreting the results. Our sample consisted of men only, limiting the generalization of our findings to the female population. Moreover, lower-limb resistance exercises were not evaluated and should be taken into account in future studies.
CONCLUSION
In conclusion, the amount of muscle mass used during upper-body resistance exercises does not influence the number of repetitions performed at 60, 75 and 90% 1RM. Likewise, the training status of participants does not affect the maximum number of repetitions performed when the movement velocity of each repetition is controlled and maintained constant throughout the set. | 2019-05-10T13:10:01.703Z | 2017-04-28T00:00:00.000 | {
"year": 2017,
"sha1": "34a9be4cf756a696653a04a142da814f381cfe72",
"oa_license": "CCBY",
"oa_url": "https://opensportssciencesjournal.com/VOLUME/10/PAGE/81/PDF/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "34a9be4cf756a696653a04a142da814f381cfe72",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55722087 | pes2o/s2orc | v3-fos-license | Alternative approaches to community participation beyond formal structures: evidence from Langa within the municipality of Cape Town
While ward committees and Integrated Development Planning (IDP) representative forums constitute formal participatory mechanisms in South Africa's local government, little is known about the potential of local approaches in enhancing participation in municipal planning. This paper examines alternative approaches to participation based on research conducted in Langa – a township situated on the Cape Flats of Cape Town. The paper highlights approaches to residents' participation in planning tested during the 'interregnum' – the period when ward committees are in abeyance due to elections. The study found that, while IDP participatory processes facilitated awareness of participation, ward councillors were crucial in operationalising participation that reflects the diversity of the community.
Introduction and background
South Africa's post-apartheid local government legislation emphasises public participation as a prerequisite for consolidating the democratic dispensation. The rationale for public participation is enshrined in and protected by the South African Constitution of 1996, as well as specific policy and legislative instruments: notably the White Paper on Local Government 1998, which mandates municipalities to work with communities to maximise socio-economic development and growth; and the Municipal Structures Act (Act 117 of 1998), which requires local government to establish structures and modalities through which citizens and communities can participate in the planning and policymaking processes which determine the development of the municipal area.
The notion of participation has enduring currency as an institutional directive and a tool for achieving the objectives of developmental local government (Winkler 2011; Harrison et al. 2008) in South Africa.
Integrated Development Plans (IDPs) have been an important tool in providing an overall framework for development and addressing South Africa's historic divides. All local governments have to prepare an IDP drawn up with all stakeholders in the area, which aims to coordinate all government services. IDPs may take six to nine months to prepare and are updated annually. Ward committees also provide another framework for participation.
At a glance, existing participatory mechanisms in local government are effectively 'invited spaces' 1 as citizens and communities are mostly invited to participate in structures such as ward committees, IDP representative forums and other consultative mechanisms (Cornwall and Gaventa 2000) in the local municipality. Though the concept of 'invited spaces' is not entirely the focus of this paper, it is significant given its prevalence in the local government participatory sphere. Cornwall defines 'invited spaces' as "those into which people (as users, citizens or beneficiaries) are invited to participate by various kinds of authorities, be they government, supranational agencies or non-governmental organisations" (Cornwall 2002, p. 24). These are contrasted with 'claimed spaces' which people create for themselves. In the context of local governance, invited spaces are mostly top-down and state-led in nature – where citizens are invited to participate and give input to plans and/or decision-making processes (Cornwall 2008; Escobar 2011; GGLN 2012).
In discussing the potential of alternative participatory approaches for local planning, this paper addresses a key question: What alternative approaches are available for ward councillors to facilitate community participation in planning in the absence of ward committees at the local level? Using a case study approach (Langa ward in the Cape Town area), the paper distinguishes two local approaches, namely sectional group (or 'pocket') meetings and 'mass' or public meetings, used by councillors to facilitate community participation in local planning. The paper then explores how these approaches are implemented and examines their implications for enhancing participatory democracy in local government. At the time of this research, the ward committee in Langa was dissolved following the 2011 municipal elections. While residents of Langa looked forward to a new ward committee, the ward councillor assumed responsibility for mobilising and engaging residents through alternative approaches around pertinent municipal processes. Hence, the research specifically targeted councillors and ward committee members who had served in the previous term. This paper has five main sections: a background situating the research in the current South African context; an overview of the research methodology; a description of formal participatory structures established by local government to promote participatory democracy; analysis of the findings on alternative participatory approaches to residents' participation in planning in Langa; and finally a conclusion discussing the implications of these approaches for enhancing inclusive planning.

1 The term 'invited spaces' is borrowed from the ideas of Andrea Cornwall and John Gaventa, who have both written extensively on participatory governance and citizenship. Their contributions are invaluable to this study, which seeks to explore the notion of participatory invited spaces in the South African context. In this article, 'invited spaces' will be used synonymously with 'participatory mechanisms', referring to formal structures such as ward committees and IDP representative forums, as established by local government in South Africa to foster deliberative and participative democracy. Invited spaces create opportunities for public input on processes of municipal governance. Unlike spaces where citizen participation is linked to representation of a stakeholder group, 'invited spaces' are often 'consultative' and tend to have a limited influence on decision-making (GGLN 2012).
Public participation as a cornerstone of South Africa's democracy
Participation has varied meanings depending on the contextual use of the term. One conception is "…the practice of consulting and involving members of the public in the agenda-setting, decision-making and policy-forming activities of organisations or institutions responsible for policy development" (Rowe and Frewer 2004, p. 512). Another, Creighton's definition (2005), is "the process of integrating public concerns, needs and values into governmental and corporate decision-making".
However, in its simplest meaning, participation evokes a sense of engagement of ordinary citizens in determining governmental actions that affect well-being.
Participation is an essential part of local governance and community-led development processes, providing community members with an opportunity to be part of decision-making and to exercise ownership over development processes (e.g. identification of needs, selection of priorities, project design, approval processes, implementation, monitoring and evaluation) taking place in their constituencies. Though participation has been embraced widely as a developmental tool, it has encountered growing criticism in the past decades from scholars (e.g. Cooke and Kothari 2001), who argue that participation has failed to achieve social change, owing to it being ineffective in addressing issues of power and politics that often characterise the participatory sphere.
In South Africa, government commitment to fostering citizen participation, particularly at the local government level, finds expression in legal and national policy frameworks. 2 However, the extent to which these instruments are translated into action by the local state has been limited, and evidence suggests that public participation falls short of its ideals and expectations. Formal participatory mechanisms or structures, such as ward committees and IDP representative forums, rarely create the enabling environment that ensures meaningful participation of people in key municipal processes (GGLN 2013; Van Donk 2012; Winkler 2011; Sinwell 2010). This situation led Williams (2006, p. 19) over a decade ago to conclude that community participation is reduced to "spectator politics, where ordinary people have... mostly become endorsees of pre-designed planning programmes". Indeed, the inadequate engagement of local communities in, for instance, the IDP (regarded as the overarching tool for service delivery and transformation at the local level) has consequences for promoting social development and democracy at that level.

2 The South African Constitution 1996, the Government White Paper on Local Government 1998, the Municipal Structures Act 1998, the Municipal Systems Act 2000 and the Batho Pele White Paper on Transforming Service Delivery 1997, among others, are the main legislative instruments that provide the legal basis for public participation in South Africa's local government.
Moreover, studies have shown that the performance of ward committees is hampered by barriers including party politics, poor accountability, corruption, patronage and nepotism, rubber-stamping of planning by communities, and inadequate local government support for democratic participation (Tapscott 2010; Esau 2007; Moodley 2007). Due to these challenges, the objectives of participatory democracy envisioned in the White Paper on Local Government 1998 are likely to become remote, particularly in local municipalities. For instance, the literature indicates that ward committee structures have tended to become extensions of political parties and have neither real power nor capacity to achieve their mandate of deepening participation in local governance (Piper and Deacon 2008; Oldfield 2008).
To date, participatory democracy in South Africa has not necessarily led to the social and economic empowerment of citizens, despite the citizenship rights that liberal democracy bestows. Most people are unable to apply their citizenship rights. Thompson and Matheza (2005), cited in Thompson and Nleya (2010, p. 225), have argued that "the poor are variously perceived as apathetic and reluctant to take advantage of the fresh opportunities available to them, especially now that apartheid has gone".
The quest for meaningful community participation, especially in municipal IDP processes, has so far proved intractable in many local municipalities. Although progress has been made to improve service delivery, especially in metropolitan municipalities, apathy towards the local state, owing to bad governance and poor service delivery, particularly in under-resourced local municipalities, has led to low levels of community participation in local government activities.
Furthermore, financial and administrative constraints, inadequate use of public participation procedures, and limited understanding by local government officials of participation processes and the legal framework continue to undermine efforts to foster the meaningful engagement of communities in IDP processes in many local municipalities. South Africa's system of local government remains a complex developmental environment that is still reeling from the legacy of apartheid social engineering, compounded by crises such as inequality, poverty, rising unemployment and social ills. These challenges make authentic and democratic public participation in local governance more strenuous.
With South Africa now well into its third decade of liberal democracy, it is likely that the hard-earned democratic achievements may erode if the above challenges are left unaddressed. Participatory democracy has not sufficiently resulted in improved service delivery and accurate identification of community needs and priorities, nor increased trust between communities and officials or politicians.
Important decisions pertaining to service delivery and resource allocation remain the preserve of technocrats, moderated by government.
As seen in recent years, many communities have disengaged from the government's participatory 'invited spaces', and instead have elected to take their grievances to the streets in the form of violent protests 3 (Bond 2012; Van Donk 2012; Tapscott 2010). These sporadic violent protests over service delivery not only expose the shortcomings in municipal services, but also reflect the fundamental failure of formal participatory structures to foster effective collaborative and meaningful participatory decision-making at the local level. The current challenges facing participatory democracy at the local level thus require municipal governments to look for a variety of approaches to engage with local residents around core municipal processes.
Methodology
The present study mainly employed a descriptive qualitative research methodology, based on in-depth interviews and a literature review. The primary data gathered from interviews with councillors and ward committee members in Langa was supplemented by findings from a larger study in Cape Town, with a sample of 315 including residents, ward committee members and councillors, conducted by the African Centre for Citizenship and Democracy (ACCEDE) in 2011. 4 Data from this study was analysed, as part of the literature review, to contribute to the results and conclusions drawn in the present study. The specific findings presented in this paper are based on data from 10 interviews, including three ward councillors, six ward committee members and one community development worker in Langa. All interviews were conducted in Langa in 2012, and the analysis and interpretations of the results presented in this paper are thus shaped by the views of the respondents in this study. A semi-structured qualitative questionnaire was administered to glean data from respondents. The process was recorded with the aid of a voice recorder and through note-taking.
Primary data from the research on which the current study is based was analysed and presented using a qualitative approach involving content analysis and descriptive statistics. However, the author acknowledges two limitations associated with the research methodology. First, Langa township was, at the time of the research, within wards 51 and 52, two of the 105 wards in the municipality of Cape Town. The findings thus cannot be generalised to the whole municipality, as they do not necessarily reflect conditions in other ward committees in Cape Town, despite the similarities that many wards in South Africa have. Second, the data collection phase of the research coincided with the dissolution of the community's ward committee following the 2011 local elections; hence the findings are based on the experiences and views of ward committee members and councillors who served prior to suspension of the ward.
Formal participatory mechanisms
Formal mechanisms have existed since 1994 to foster interaction between government and communities, through which the latter can influence and exert control over governmental actions 5 that affect the well-being of communities. Through these mechanisms, citizens, including civil society groupings such as non-governmental organisations (NGOs), community-based organisations (CBOs), ratepayer associations and business organisations, are invited to participate and input on local decisions and policies that are designed to promote socio-economic development in the local area. These formal spaces provide opportunities for non-state actors to enforce accountability and transparency of elected representatives. Ward committees, in particular, are mandated to: make recommendations on any matters affecting the ward to the ward councillor or, through the ward councillor, to the municipality; serve as an official specialised participatory structure; create formal unbiased communication channels as well as cooperative partnerships between the community and the council; serve as a mobilising agent for community action, in particular through the IDP process and the municipality's budgetary process; and perform other duties as delegated by the municipality (Mufamadi 2005, p. 8).

5 At the local level, the actions of government encompass decisions or policies, including decisions about service delivery, performance monitoring and budgeting processes, which affect the well-being of communities.
Ward committees are thus both communication vehicles and catalysts for transformation at the local level and, per the legal framework, should not have more than ten members in addition to the ward councillor who serves as the chairperson (Municipal Structures Act 1998). A ward committee is defined as "an advisory body, a representative structure, an independent structure, and an impartial body that must perform its functions without fear, favour or prejudice" (Smith and De Visser 2009, p. 10). In fulfilment of Section 73 of the Municipal Structures Act (1998), ward committee members should reflect all sections and interest groups in the community, including: CBOs; ratepayers; faith-based organisations; safety and security groups; environmental groups; early education; youth organisations; arts and culture; sports; the business community; and designated vulnerable groups such as the aged, gender groups and the disabled.
Alternative approaches to resident participation in Langa
The following discusses alternative participatory approaches or practices to the two formal ones discussed above, based on insights from the research in Langa. However, before discussing these approaches, brief contextual information about the community is first provided.
Like other communities in the Cape metropolitan area, the participation of residents in Langa in core municipal processes is facilitated by a ward committee structure chaired by a ward councillor who represents the interest of the community (ward) on the sub-council. As indicated earlier, this research coincided with a transition period between elected councils, making its findings significant in that they uncover the alternative ways or approaches through which councillors promoted community participation in this situation. The views of the ward councillors, ward committee members and the community development worker interviewed provided invaluable insight into the current dynamics of local participation and the strategies that were being used in the area to engage residents in local government activities.
Within the ward committee system, Langa is represented by wards 51 and 52, a large area that encompasses different communities in Sub-council 15 and a wide range of economic activities. This area consists of five wards that stretch from Mowbray through Pinelands, Langa and Epping to Milnerton, including Brooklyn, Rugby and Ysterplaat. Major roads, including the N1, the N2, Raapenberg Road, Koeberg Road, Voortrekker Road, Settlers Way and Sable Road, run through the sub-council. Wards 53, 55 and 56 also form part of this area.
The City of Cape Town defines a sub-council as "a geographically defined area within the city which is made up of between three and six neighbouring wards. Sub-councils exist to make sure that the issues affecting your neighbourhood are heard and dealt with. There are a total of 24 sub-councils which make up the City of Cape Town's municipal structure. The sub-councils serve the residents by engaging with them on municipal issues" (Cape Town Municipal Sub-councils: http://wwwqa2013.capetown.gov.za/Family%20and%20home/meet-thecity/city-council/subcouncils, accessed 05 March 2018). For more on ward demarcations, see https://www.capetown.gov.za/family%20and%20home/meet-thecity/city-council/subcouncils/subcouncil-profile?SubCouncilCode=15 (accessed 05 March 2018).
Profile of Langa
Langa came into being in 1923 following the passing of the Urban Areas Act and was named after Langalibalele Dube (who was imprisoned on Robben Island after rebelling against the Natal government). It is one of the oldest townships in South Africa and was home to low-cost housing for Black Africans in Cape Town, together with single-sex hostels which accommodated migrant workers in the 1960s and 1970s. The area later saw an influx of illegal residents into these hostels, bringing a more balanced demographic representation to the area. However, this influx also exacerbated existing housing shortages, which forced many families to rent backyard shacks in formal housing areas, a situation which persists today (Thompson et al. 2011).
Langa's population according to the 2011 population census is 52,401, made up of 17,400 households with an average household size of three. The majority are Black African (99%). IsiXhosa is the dominant language in the area, and the housing profile is diverse: 58% of households live in formal dwellings, but 27% occupy informal dwellings/shacks in informal settlements, largely concentrated in Joe Slovo.
The census further indicates that 40% of those aged 20 or older have completed schooling to Grade 12 (the last grade of secondary education) or higher; 60% of residents aged 15 to 64 are employed; 72% of households have a monthly income of R3,200 or less; 67% of households have access to piped water in their dwelling or inside their yard; 72% of households have access to a flushing toilet connected to the public sewer system; 94% of households have their refuse removed at least once a week; and 98% of households use electricity for lighting in their dwelling (City of Cape Town 2011 Census, suburb data for Langa supplied by Statistics South Africa).
As depicted by the census data, Langa grapples with many socio-economic challenges, including increasing unemployment, crime, drug and substance abuse, and inadequate service delivery, especially health, sanitation and housing infrastructure. The hardest hit by these challenges are poor residents, many of whom live in backyard or shack dwellings. There is therefore considerable need to stimulate economic activities in the township for job creation, and to improve service delivery across housing, sanitation and education.
In the absence of the ward committee in the run-up to elections, this research found that several alternative participatory approaches were used by ward councillors to engage residents in local planning. While residents in Langa waited for a new ward committee to be elected, their participation in service delivery decisions continued, as ward councillors adopted specific local strategies to mobilise and inform residents about the council's decisions and developments which affected the community. This paper distinguishes two approaches or strategies which ward councillors found effective in promoting community participation in Langa.
Sectional group (or 'pocket') meetings
Pocket meetings were used by ward councillors to engage community residents and other stakeholders in key local government activities and decision-making. This approach, as indicated by the councillors, provided a useful conduit both for disseminating municipal information to residents, and for gathering information on the community's grievances, needs and priorities.
Owing to the large number of houses in Langa, residents are divided into sections or 'pockets', with each pocket comprising up to 60 houses. Each pocket is then required to establish a street committee responsible for providing the pocket's 'wish list', which is later taken forward by the ward councillor.
The 'wish list' is a written report outlining the important needs, grievances and priorities of the residents of each pocket. Once the pocket meetings are concluded, the 'wish lists' received from all the pockets in the community are tabled by the councillors for further deliberation and discussion at a mass or public meeting.
The benefits of using 'pocket' meetings to promote residents' participation in local government processes cannot be overlooked. At the outset, the approach provides a mechanism through which councillors can effectively identify the pressing needs of residents. As discussions develop, the interactions that take place between councillors and residents during pocket meetings foster accountability, transparency and responsiveness of councillors, as well as providing an opportunity to address disputes within the community. Furthermore, 'pocket' meetings are relatively safe spaces: they provide residents, especially those who may be unable to express their views at mass meetings due to fear of intimidation by other residents, the chance to articulate their views in smaller gatherings where they feel safe. They also provide councillors with the space to understand the felt needs of individuals, and the capacities or opportunities within each pocket. Finally, because this approach is expected to cover all sections of the community, community participation becomes more holistic and inclusive, as pocket meetings help to gather a cross-section of input from residents regarding key municipal actions.
A key disadvantage of this approach is that it is resource-intensive and time-consuming and may not keep pace with population growth over time. As the number of residents increases, pocket meetings may become less effective in engaging all residents. For instance, with the 17,400 households in Langa, if pockets are groups of 60 households, there would be about 290 'pocket' meetings or more. At a rate of one pocket meeting per week, it would take over five years to cover all households in Langa.
'Mass' or public meetings
In addition to pocket meetings, the second (more common) approach used by the councillors to engage residents on municipal matters was 'mass' or public meetings. In the absence of the ward committee, the research found that councillors frequently convened public meetings to discuss issues gathered from pocket meetings. Mass meetings are open to all stakeholder groups and individuals in the community.
The common stakeholder groups at mass meetings include: business, youth, or ratepayer associations; members of the police forum; traditional leaders; ward committee members; community development workers; health workers; and school governing bodies/leaders. The public meetings are chaired by the ward councillors, who also take responsibility for recording the outcomes of the discussions.
Though mass meetings often see heated debate, the approach nevertheless provides a platform to inform the community on the latest developments in the municipality, and on progress made by the sub-council regarding service delivery demands and priorities. The outcomes of mass meetings are brought to the attention of the sub-council by ward councillors, and later to the chamber of the local government for final decisions on how the various ward needs that have been submitted can be met.
The merits of mass meetings are noteworthy. As pointed out by the councillor of ward 51, "Though mass meetings are often difficult to organise, they help us to generate useful lessons." The approach and its process contribute to narrowing the gap between the community and government, as well as to promoting public participation, accountability and transparency by bringing together a variety of stakeholders in the community to deliberate on local issues in public.
However, councillors also revealed that, in Langa at least, participation is also heavily charged with emotion owing to poor service delivery. According to the Ward 52 councillor: "Meetings are emotionally charged because of poor service delivery. The people come into the meeting with anger, and this makes it difficult to reach a consensus on certain important issues. Housing and illegal dumping are currently the serious issues we face as a community. People who claim to be originally born in Langa mobilise to claim rights to housing. They call themselves the concerned group and often breach formal procedures to assert their voices or claims."
In addition, mass meetings convened in Langa to discuss development issues usually end up in political squabbles and disputes between members of different political parties in the community. An interview with one community development worker revealed that: "Members of the African National Congress in the community have often battled against members of the Democratic Alliance during council and ward committee meetings, with disregard to the relevance of the issues being tabled for discussion. This distorts the aim of meetings and makes it difficult to reach consensus on certain pertinent issues."
Residents' participation in local planning is further hindered by a number of socio-economic challenges.
For example, the issue of poor service delivery reduces community members' willingness to participate in both council and ward committee meetings. Thus, participation has not necessarily improved service delivery in Langa; on the contrary, it has resulted in participation fatigue among residents, who felt that no matter how many times they attended meetings, their contribution would not be considered by the council. This situation exacerbates the low levels of community participation in the area.
Another major challenge hindering community participation in the area relates to increasing poverty, unemployment and inequality. These factors make it difficult for residents to meet their basic needs. As can be seen in Table 1, most residents in Langa and the surrounding communities of Delft and Khayelitsha struggle with food insecurity. To this day, many residents in these townships continue to grapple with issues of poverty, crime and unemployment, and have poor access to proper nutrition, healthcare and decent housing.
Table 1: Household health and food security. Source: Thompson et al. (2011)
The urgent need of people, particularly the unemployed, to meet their basic needs poses a challenge to meaningful community participation in Langa. The community development worker interviewed for this study revealed that "people give low priority to participation owing to the socio-economic hardships that they face in the community". As a result, public meetings convened by the ward committee in the community were often dominated by ward committee members, with few residents attending. A key highlight of the interviews with community residents was that the majority of respondents felt politicians do not care much about their needs and priorities, which underscores the urgency of addressing unemployment, inequality and poverty, as these factors negatively affect residents' ability to meet their basic needs. This has resulted in growing mistrust and dissent between communities and local government institutions and officials.
Decline in citizens' trust in local government institutions posed another challenge to community participation in the area. The literature has shown that local government, as the body closest to the people, struggles to achieve the mandate of participatory governance, quality service delivery, and improving quality of life as envisaged in the Government White Paper on Local Government (1998). Interviews with community residents indicated that many are dissatisfied with local government, partly owing to the municipality's slow response to the community's needs and priorities. To quote one respondent, "the municipality is taking us for a ride; they have made promises that they have not delivered, yet they expect us to remain calm and vote". This response suggests that the municipality has not lived up to expectations in terms of effectively responding to community concerns. The councillors suggested that bureaucracy slows the pace of government's response to communities and service delivery. It may be argued that this lack of trust in important municipal institutions and elected representatives affects the level of community participation in municipal processes. This situation fuels the sporadic violent service-delivery protests seen in various parts of the country in recent years.
Despite the opportunities associated with pocket meetings and mass/public meetings, their effectiveness depends largely on the capacity of the conveners (mostly ward councillors) to conduct them in a manner that is open, informative and encourages residents to take part. Among the obstacles to local participatory decision-making, political squabbles and conflicting interests among participants are perhaps the most pervasive.
Conclusion
As this research has shown, in Langa, in the absence of ward committees, residents' participation in municipal processes such as the IDP was made possible through sectional group (or pocket) meetings and mass/public meetings. While these approaches have proved partially effective in fostering meaningful community participation, the responsibility rests with ward councillors and active citizens to implement them in a manner that is participatory and mirrors the diversity of the community.
In the context of this research, these approaches enabled the councillors during the pre-election interregnum to glean information about the felt needs and aspirations of the community, which was critical for appropriately aligning IDP policies and services. Information emanating from these meetings provides a basis for deliberations at IDP forums and within specific portfolio committee meetings, as well as informing projects for the IDP and service-delivery plans of the City of Cape Town.
Notwithstanding the limitations of the pocket and mass meetings discussed in this paper, they represent a small step towards hearing communities' voices and meeting their needs. These approaches are nevertheless useful for community profiling, especially in identifying community needs, the priorities of relevant stakeholders, community development resources, and voices of dissent and divergent interests in the community. While the findings and conclusions drawn on the alternative participatory approaches in this paper cannot be generalised, they represent a practical account of participatory practices in South Africa's local government. Therefore, this paper stimulates discourse on participatory democracy, and its insights may be relevant to local government, research and academic communities, as well as civil society organisations concerned with issues of local participation, citizenship, democracy and local governance in South Africa and beyond.
| 2018-12-07T02:49:14.663Z | 2018-05-23T00:00:00.000 | {
"year": 2018,
"sha1": "34624985ddb84320df33dde19a494475fc5f5fb3",
"oa_license": "CCBY",
"oa_url": "https://epress.lib.uts.edu.au/journals/index.php/cjlg/article/download/6084/6408",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "34624985ddb84320df33dde19a494475fc5f5fb3",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
225230768 | pes2o/s2orc | v3-fos-license | Study of static and dynamic geometric characteristics of buildings
It is impossible to get a reliable picture of the stress-strain state of any object using the methods recommended by the current regulatory documents for determining deformations. This paper implements the proposed method of determining the spatial geometry of an object. Such a method allows obtaining a detailed picture of deformations, on the basis of which it is possible to develop a sound project to restore the object's operational reliability. The rolls were determined according to 38 vertical sections (9 measurement cycles in total) for a building with a classic facade, and according to 53 vertical sections (2 measurement cycles in total). In addition, five cycles of settlement determination were performed. Trigonometric levelling was used to control the horizontal position of building structures at different levels (the top of the window aperture of the 2nd floor). Elevations of horizontal building structures were determined first in different conditional systems of heights and then in a single conditional system of heights. The results of the method are presented in graphical and tabular form.
Introduction
The methods of studying building deformations recommended by the current regulatory documents require further improvement. These documents recommend using generalised roll values, settlements of the building foundation and, in some cases, deflections to determine deformations of buildings and structures [1][2][3][4][5][6], which cannot sufficiently reflect the real picture of the deformed state of buildings and structures. Based on such results, it is impossible to determine the real stress-strain state or to develop a viable project to restore the design geometry.
The proposed method, the method of determining the spatial geometry of an object, defines a larger number of geometric parameters (settlement, horizontality of horizontally oriented structures, and a set of rolls on separate vertical sections), characterising the deformed state of the building in full and in detail. This method makes it possible to detect stress zones caused by deformations of engineering structures, which is necessary to restore the spatial geometry of the building without additional deformations and faults [6][7][8][9][10][11].
Methods and materials
Static and dynamic deformation characteristics were studied using the example of a shopping center building with a built-in fitness club located at 133/177 Krasnoarmeyskaya Str., Moscow.
The work was carried out using modern geodetic measurement tools. The office processing of geodetic measurements was performed in the Credo_Dat program. Analysis and design of geometric parameters of objects was performed in ZWCAD software.
The purpose of the preliminary inspection was to establish the compliance of the layout and structural diagrams of the existing structures with the requirements of the technical documentation. During the inspection, the most damaged sections of the structures were identified, as well as the bearing elements under the most unfavorable operating conditions. The general condition of the structures was visually assessed: the presence of wetted concrete sections, the condition of the protective coating, the presence of corrosion, etc. Thus, the preliminary inspection made it possible to collect information that allowed clarifying the program and scope of the instrumental works.
The visual inspection revealed no defects in either the building as a whole or its individual elements.
Observations of the settlement points are made in a system of heights close to the Baltic system, from wall benchmarks.
Observations of settlement points are made in a conditional system of heights. The levelling line is laid in forward and reverse directions. The determination of settlement points in each cycle is made with an accuracy characterised by a mean square error of not more than 1 mm at the weakest point (the mark most remote from the original benchmarks), which is provided by the method of high-accuracy geometric levelling of class II in accordance with the current regulatory documents. The settlement points are determined in the Baltic system of heights through geometric levelling in forward and reverse directions. The average length of the directional ray was 25-30 m for the main traverse and 2-25 m for levelling of settlement points. This made it possible to reduce the time of observations at each station and to improve the reliability, quality and speed of monitoring the measurement process.
The height of the directional ray above the underlying surface of the earth was not less than 0.5 m, the shoulder inequality at a station not more than 1 m, and the accumulated shoulder inequality in a section not more than 2 m. According to Paragraph 3 of GOST 24846-2012 (Soils. Measurement methods of base deformations of buildings and structures), geometric levelling is used as the main method to measure vertical displacements (settlement). Basic technical characteristics and tolerances for geometric levelling were adopted in accordance with regulatory requirements.
The misclosures of the closed levelling loops did not exceed the permissible value determined by formula (1), where L is the levelling line length in km.
The permissible misclosure of the levelling line between settlement points did not exceed the value given by formula (2), where n is the number of levelling stations.
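The numerical right-hand sides of formulas (1) and (2) are not given above, so the minimal Python sketch below, with our own naming, takes the permissible-misclosure rule as a caller-supplied function; the coefficient 0.5 in the example call is a placeholder, not the regulatory value.

```python
import math

def loop_misclosure(height_differences_mm):
    """Misclosure of a closed levelling loop: the measured elevation
    differences around a closed circuit should sum to zero."""
    return sum(height_differences_mm)

def check_loop(height_differences_mm, line_length_km, tolerance_fn):
    """Compare the loop misclosure against the permissible value.
    tolerance_fn maps the levelling line length L (km) to the
    permissible misclosure in mm, i.e. the right-hand side of (1)."""
    f = loop_misclosure(height_differences_mm)
    f_perm = tolerance_fn(line_length_km)
    return abs(f) <= f_perm, f, f_perm

# Placeholder tolerance c*sqrt(L); c must be taken from the regulations.
ok, f, f_perm = check_loop([+1.213, -0.587, -0.630],
                           line_length_km=0.4,
                           tolerance_fn=lambda L: 0.5 * math.sqrt(L))
print(f"misclosure {f:+.3f} mm, permissible {f_perm:.3f} mm, ok={ok}")
```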
As a result of the levelling, the cycle-by-cycle and accumulated vertical movements of the settlement points were determined.
In the geometrical sense, settlement is expressed by the vertical segment between the initial and a subsequent position of a settlement point. In cases where these segments are the same for different marks, the settlements are called uniform; when they are not equal, they are called uneven. The principle of processing repeated measurements is that the elevations of the settlement points are differenced between measurement cycles. The following quantities are defined: the total settlement S_i,total = H_i,current − H_i,cycle1, equal to the difference between the elevation of a point in the current cycle and in the initial cycle of observations; the uniform settlement S_uniform = S_min,total, equal to the minimum of the total settlements over all points; and the uneven settlement S_uneven = S_i,total − S_uniform, defined as the difference between the total settlement and the uniform settlement.
The average settlement is calculated as the arithmetic mean of the total settlements of all points, S_avg = (Σ S_i,total)/n. On the basis of the calculated total settlements, the average monthly settlement rates are defined as the settlement accumulated between cycles divided by the inter-cycle time period, v_i = ΔS_i / t, where t is the inter-cycle time period (usually expressed in months). In the physical sense, the rates characterise the settlement dynamics. The acceleration (attenuation) of settlement is calculated as the change of rate between cycles, a_i = (v_i,current − v_i,previous) / t; this parameter characterises the change in the dynamics of settlement (attenuation or acceleration).
Roll measurement was performed according to the following procedure. The device on its tripod was installed at a distance approximately equal to the height of the building under control. The coordinate system of the electronic total station was oriented parallel to the controlled plane (building wall). Then, in a single coordinate system, the device coordinated the points under study located in characteristic places of the building. The measurement results were automatically recorded by the electronic total station in the selected tool file. This file was transferred to a computer after the measurements were made and imported into the Credo_Dat program, intended for office processing of field engineering and geodetic measurements. In this program, the polar coordinates of the electronic total station were converted into a rectangular Cartesian coordinate system. The resulting 3D point coordinates were then exported to the *.dxf (AutoCAD) format, and the resulting file was opened in AutoCAD, where drawing, measurement and analysis of the basic geometry of the building was performed.
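The settlement processing chain above can be sketched in a few lines of Python; the array layout, variable names and numbers are our own, and the sign convention (settlement positive downwards, i.e. initial minus current elevation) is an assumption.

```python
import numpy as np

# Elevations of settlement points in mm (rows: cycles, columns: points);
# purely illustrative numbers.
H = np.array([[100.0, 100.0, 100.0],   # cycle 1 (initial)
              [ 99.7,  99.6,  99.8],   # cycle 2
              [ 99.4,  99.3,  99.7]])  # cycle 3

S_total   = H[0] - H[-1]       # total settlement of each point
S_uniform = S_total.min()      # uniform settlement (minimum total)
S_uneven  = S_total - S_uniform
S_avg     = S_total.mean()     # average settlement

t = 2.0                        # inter-cycle period, months
v = (H[:-1] - H[1:]) / t       # settlement rate per cycle, mm/month
a = (v[1:] - v[:-1]) / t       # acceleration (attenuation) of settlement

print(S_total, S_uniform, S_uneven, S_avg)
```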
The rolls of a structure can be expressed in relative and absolute measures. There are partial rolls q_x, q_y (for a given coordinate plane) and the absolute (full) roll Q, defined both for individual elements and for the structure as a whole.
In determining the rolls of the studied building, the coordination method was used. This method determines the rectangular coordinates of the top point (x_t, y_t, z_t) and the bottom point (x_b, y_b, z_b) of a construction structure, which can be done in various ways; it is most rational to implement this method using a modern electronic total station.
The electronic total station measures a horizontal angle β, a vertical angle ν and an inclined distance s. The following working formulas are used to convert the polar coordinates to a rectangular Cartesian coordinate system:
x = s cos ν cos β, y = s cos ν sin β, z = s sin ν. (11)
Partial rolls were determined by the following formulas:
q_x = x_1 − x_2; q_y = y_1 − y_2. (12)
The absolute (total) roll Q is calculated by the following formula:
Q = √(q_x² + q_y²). (13)
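A minimal Python sketch of the coordination method, implementing formulas (11)-(13) as written above; the convention that the vertical angle ν is measured from the horizontal is our assumption, and the sample measurements are invented.

```python
import math

def polar_to_cartesian(s, beta, nu):
    """Formula (11): slope distance s, horizontal angle beta and
    vertical angle nu (radians) to rectangular coordinates."""
    x = s * math.cos(nu) * math.cos(beta)
    y = s * math.cos(nu) * math.sin(beta)
    z = s * math.sin(nu)
    return x, y, z

def rolls(top, bottom):
    """Partial rolls (12) and absolute roll (13) between the top and
    bottom points of a structural element."""
    qx = top[0] - bottom[0]
    qy = top[1] - bottom[1]
    return qx, qy, math.hypot(qx, qy)

top    = polar_to_cartesian(41.3, math.radians(12.1), math.radians(46.0))
bottom = polar_to_cartesian(28.6, math.radians(12.2), math.radians(2.5))
print(rolls(top, bottom))
```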
Conclusion
The study made it possible to conclude the following: 1. The visual inspection of the whole building and its individual elements did not reveal any deformation defects.
2. The recorded settlements are practically within the limits of the instrumental errors of observation. The maximum accumulated settlement is recorded at points No. 6 and No. 9 (0.6 mm), the minimum at point No. 12 (0.3 mm). None of the settlement points exceeds the permissible limit.
3. The full roll of the entire building equals 37 mm and is oriented in the southeast direction. None of the recorded rolls of the building structures exceeds the permissible limit of 125 mm established by the regulatory documents.
4. According to the results of the horizontal control of the building structures, on the northern facade of the building the top of the window aperture of the 2nd floor has bent by about 20 mm. On all other facades, the elevations of the 2nd-floor window aperture vary within the measurement errors.
5. The building is in the permissible design operating mode. 6. A more complete analysis of the geometric characteristics of the building structures will be carried out based on the results of the next measurement cycle. | 2020-08-27T09:14:27.488Z | 2020-08-26T00:00:00.000 | {
"year": 2020,
"sha1": "b6a8a29570924f1be6965f1584a7593ea7256b03",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/905/1/012024",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e922308b942c0776cd9c4002d32f04f0f420ad34",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
252989522 | pes2o/s2orc | v3-fos-license | Virtual simulations for health education: how are user skills assessed?
Introduction: A virtual simulator, or one based on virtual reality, can computationally recreate real contexts. Objective: To analyze works on virtual simulations for training clinical procedures, focusing on the assessment of user skills. Method: Integrative literature review, carried out between 2010 and 2020. A total of 56 studies were selected. Results: The selected studies showed that the variables and parameters of virtual simulators are usually obtained by consulting experts or through the medical literature. These simulators mainly focus on developing psychomotor skills and assessing the learner's performance through real-time alerts, progress indicators, and performance reports after the end of each training session. Conclusion: Considering the expert's knowledge exclusively to define the requirements of virtual simulators can limit their reliability and accuracy. The participation of experts in these projects does not follow standards regarding the selection and frequency with which they collaborate. Few simulators provide insightful and pertinent feedback on user performance.
INTRODUCTION
Medical education has changed, especially regarding the student's training method 1 . Due to the characteristics of the traditional training model, it is often impossible to experience greater variability and complexity of clinical cases. In addition, another reported difficulty is the students' insecurity when treating patients for the first time 2 . Thus, health education professionals have been looking for new techniques to improve students' clinical skills and ensure the patient's integrity.
Traditionally, in health education, simulations are carried out to train clinical procedures. For these simulations, instructors use mannequins, animals, and corpses. However, these practices have the disadvantages of requiring on-demand preparation and raising ethical issues 3 . In this sense, virtual simulators emerged as an alternative to traditional training.
They allow the user to interact with a virtual environment in a way that is similar to interaction in the real world, reducing costs and enabling the experience of greater variability of clinical cases.
There are minimum criteria for developing training with simulations, such as defining the expected objective, the target audience, the application usage scenario, the challenge difficulty, the main application subject, and the concepts related to it 4 . Thus, adequately designed simulation-based training can significantly reduce health professionals' errors and improve patient safety 2 .
RESEARCH METHODOLOGY
The integrative literature review was chosen as the methodological procedure for the selection and analysis of research related to the subject. Each step will be detailed below, as well as the obtained results.
Research questions
Virtual simulators for training involve both the skills to be developed and several ways to assess their acquisition. Therefore, the following research questions (RQ) were defined to contemplate the different scenarios on the topic:
Papers screening
After defining the research questions, the next step was to select the studies for analysis. Terms commonly present in studies on virtual simulators for health training were used in the search to cover the largest number of articles and obtain a broad state-of-the-art view. As a result, the following search string was obtained: ("augmented reality" OR "virtual reality" OR "simulation" OR "simulator" OR "haptic" OR "haptics") AND ("medical education" OR "medical training").
The following digital libraries were searched: ACM Digital Library, IEEE Xplore, and PubMed, considering that the first two are important publication vehicles for Computer Science and Medical Informatics. The studies were selected based on the title/abstract search using the search string defined above.
In addition to the cited databases, we also included articles that we deemed relevant or that appeared in the references of the selected studies. Based on the search string and after reading the titles of the studies, we initially selected 210 articles. Finally, all articles were read in full and the exclusion criteria were applied.
The defined exclusion criteria were: 1. not addressing health procedures; 2. not addressing skill development for health professionals; and 3. dealing with non-digital simulations. Most of the excluded studies involved non-digital simulations. In this last phase, a total of 56 studies relevant to our objective were selected.
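To make the screening step concrete, the following Python sketch reproduces it in simplified form; the keyword matcher is only a stand-in for the digital libraries' own query engines, and the record fields and helper names are ours, not part of the reviewed studies.

```python
# Simplified reproduction of the screening: a record passes the search
# if its title/abstract contains at least one term from each AND-group
# of the search string, and none of the exclusion criteria applies.
GROUP_A = ["augmented reality", "virtual reality", "simulation",
           "simulator", "haptic", "haptics"]
GROUP_B = ["medical education", "medical training"]

def matches_search(text: str) -> bool:
    t = text.lower()
    return any(k in t for k in GROUP_A) and any(k in t for k in GROUP_B)

def excluded(study: dict) -> bool:
    # Exclusion criteria 1-3, encoded as flags set during full-text reading
    return (not study["addresses_health_procedures"]
            or not study["addresses_skill_development"]
            or study["non_digital_simulation"])

record = {"title_abstract": "A haptic simulator for medical training of sutures",
          "addresses_health_procedures": True,
          "addresses_skill_development": True,
          "non_digital_simulation": False}
print(matches_search(record["title_abstract"]) and not excluded(record))
```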
RQ1: How are the variables and parameters of virtual simulators defined?
In this study, the variables of a clinical procedure mean the set of elements that constitute it, such as the instruments used and the professional's performance. The parameters mean the values assigned to the variables, which can be the presence of a step or clinical instrument, as well as a range of acceptable values, such as needle depth and angulation. Virtual simulators usually use the variables and parameters to assess user performance, and they influence the accuracy and reliability of the simulator.
Most of the analyzed studies defined the variables and parameters of the virtual simulators with the help of experts in the area, corresponding to more than 70% of the selected studies. The participation of these experts took place through multidisciplinary research groups or by inviting them to participate in specific stages of the virtual simulator production.
Their participation in the testing stage is also common. However, we observed that there is no defined standard regarding the participation of these professionals, and the publications do not provide clear indications about the methods and criteria for selecting experts. Furthermore, it is not clear how often their consulting activities take place. Considering that the analysis and definition of a system's requirements are cyclical, this step will require the involvement of experts at more than one point, for example. In turn, multidisciplinary teams theoretically allow more active and frequent participation of professionals from different fields, who can identify variables and parameters with precision, influencing the reliability and accuracy of the final product. Concerning quantity, two studies that consulted only one specialist were identified. However, this should not be a limitation if more professionals are involved in the validation step.
Another way that was identified to obtain the requirements is from the medical literature; around 13% of the studies did this.
Only approximately 11% of the studies combined the literature with the participation of experts. The literature is one of the resources traditionally present in professional training, so relying on its information is considered reliable, although hands-on experience can add observations that make the virtual simulation even more realistic.
The knowledge construction process consists of three stages: knowing, knowing how to do, and knowing how to be. Education enables the learner to acquire information, and the knowing stage operates in the sense of giving meaning to theory, practically transforming it into knowledge. Knowing how to do is related to putting knowledge into use, that is, to developing skills related to the studied field. Finally, knowing how to be is about the learner's attitude: it lies in deciding to put knowledge (knowing) and skills (knowing how to do) into motion. In this sense, the literature can assist in identifying variables, parameters, and the steps of clinical procedures. However, the granularity of the problem and the criteria for evaluating the identified variables and parameters are often not found in the books, making the participation of experts necessary to assist in modeling the parameters that will be inserted in the simulator and in defining the assessment criteria for the variables. Only experts can assist in obtaining this information with a focus on knowing how to do and knowing how to be. We consider that the exclusive use of the literature can limit the establishment of parameters for assessment, while relying exclusively on the expert can limit the reliability and accuracy of the simulation. The ideal situation is to consult both sources in a complementary way. We also consider it important that teams involved in developing virtual simulations use validated techniques to define criteria for selecting experts, such as the Fehring model 60 .
We also identified a considerable number of proposals with little or no detail on how they defined the virtual simulator variables and parameters. A total of 10 studies did not specify this information, which is more than the number of studies based on the literature. Although in half of these studies the text suggests the literature, mainly related scientific articles, they do not specify how the information in question was extracted. The other half concerns tools that provide tactile force feedback, whose later tests cover aspects of performance or usability validated by students and, in one case, by experts. One of the studies focuses exclusively on describing the proposed technique without detailing the source of information.
RQ3: How are the skills assessed?
In their study, Machado et al. 63
Finally, there are the progress indicators, which provide: 1. scores for correct answers; 2. retention on a given task until completion; and 3. the possibility of repeating a particular task to improve practice. These indicators are objective ways of verifying whether the user achieved the expected performance. In addition, they act as motivating agents, as they adopt game-like characteristics.
There was a considerable number of studies (40% of them) whose objective was to present a new simulation technique, such as the texture and cutting of human tissues, or an advancement of an existing computational technique. These studies focus almost exclusively on engineering aspects, leaving the assessment of skills as a future task or approaching it superficially. Such studies do not detail assessment information.
These virtual environments commonly provide only tactile-force feedback. Although tactile feedback plays an essential role in aiding the development of clinical skills, in some cases these studies do not take the online assessment process into account and condition the assessment of user skills on the presence of a supervisor (offline assessment). Approximately 43% of the virtual simulators need instructors to carry out user skill assessments. We also observed that, regardless of this aspect, the use of 56% of the analyzed simulators is conditioned on the instructor's presence. Another observation is that most studies use simulated data when performing tests 37 .
RQ4: How is the virtual simulator effectiveness verified for the acquisition of skills?
The verification of the skills acquired by students is a point of attention in the studies. In simulators that use alert mechanisms, it is assumed that the reduction of these messages indicates fewer errors, hence a behavioral change. However, we are aware of the importance of timely feedback. The same goes for progress indicators, as it is necessary to provide the student with a meaningful reflection on their performance to avoid resorting to trial and error. In performance reports, some studies do not clarify how the information is made available. In addition to being quantitative, this feedback must also be qualitative, in the sense of presenting clear textual (or visual) feedback; otherwise, the presence of an instructor will still be necessary to complete the training cycle.
Among the selected studies, there are 36 (64% of them) in which the presence of an instructor is necessary to provide some feedback on the user's performance in the virtual simulation. In 20 of these studies, it is assumed that an instructor is needed because, in addition to not providing feedback on student performance, these simulators do not have mechanisms capable of informing learners about the completion of their tasks. In two of these studies 10,17 , the instructor's presence was only necessary for some simulation modules. In these cases, the presence of instructors was due to network interaction that allowed discussions between students and professionals.
Bloom's taxonomy 66 classifies learning as a plural and interactive phenomenon that co-occurs in the cognitive, affective, and psychomotor domains. Virtual simulations that operate on motor skills can directly contribute to achieving educational goals related to the psychomotor domain. The psychomotor domain deals with behaviors that imply the development of neuromuscular coordination. It relates, therefore, to the acquisition of skills that combine muscle actions, cognition, and the ability to manipulate objects or perform a procedure 67 . Although Bloom and his team never defined a taxonomy for the psychomotor domain, others did. Dave's classification 68 for the psychomotor domain is the most often cited interpretation. It consists of five categories: imitation (observing the skill and trying to repeat it), manipulation (following instructions, memorizing a procedure and being able to reproduce it), precision (performing the skill accurately and without help), articulation (combining skills to achieve a non-standard goal) and naturalization (unconscious mastery of the activity, when one becomes an expert).
The naturalization category is not typically covered in virtual simulations, since to become an expert the apprentice will need extended practical experience with actual patients. Regarding the precision category, as already mentioned, although computer simulations have the potential for self-guided use, the absence of this functionality is still common. This implies challenges in advancing the goals of the psychomotor domain, since the presence of an instructor is still necessary. Regarding the articulation category, although efforts have been made in this direction, it remains a challenge for simulations to present greater variability in clinical cases, aiming to cover the wide range of possible clinical scenarios for the same procedure. This demand reinforces the need for constant and active participation of domain experts.
In addition to reducing costs and supporting the tactile and visual aspects, another differential of training mediated by virtual simulators is the possibility of using them at any time, without requiring supervision, while receiving accurate and prompt feedback about the user's performance. In this sense, studies should further explore this potential. We understand that virtual simulators can be efficient tools for skill acquisition. However, the feedback provided to the user about their actions during the simulation plays a fundamental role in the knowledge construction process. Therefore, these tools must provide personalized, relevant, and timely feedback on user performance. As noted, few studies have addressed the assessment of learning and, as it requires automatic assessment, few studies have implemented this functionality. Thus, we understand this area of the research landscape as a possible open problem.
Finally, when developing a virtual simulator for training, considering the assessment of the user's skills, the results of this study show that it is essential to observe the following steps: • Design: a step that deals with the survey and definition of the tool's requirements. The criteria for selecting specialists must follow standards already reported in the literature, and the timing and frequency of their participation must be specified. In addition, for the definition of variables and parameters, the knowledge established in the literature should be considered, complemented by the experts' knowledge. We recommend developing documents such as concept and navigation maps and use-case diagrams to help the interdisciplinary team communicate.
• Skill feedback: a step that verifies the correct operation of the simulator, its educational effectiveness, and the user's performance in the simulation. The presence of experts is necessary to validate the requirements defined in the design stage and to collaborate in modeling the skill assessment methods used by the simulator (online assessment). As for evaluating educational effectiveness, we observe that single-use effects may not reflect the reality of learning. Thus, it is essential to analyze how this construction of knowledge occurs over time and its effects on the work process. The acquisition of knowledge can take place in the short, medium, or long term, and what we observed in the studies are single-use effects. Virtual simulators must provide personalized, relevant, and timely feedback to the student about their performance.
CONCLUSION
This review aimed to analyze studies about simulations in virtual graphic environments for training and education in the health area, focusing on user skill assessment procedures. We conclude that the use of virtual simulators for training and assessment of clinical skills can be effective when reliable and well-established variables and parameters are used, supporting teaching and learning in the field of health education. Considering the primary question of this research, we found that most simulator evaluations are not related to user skills but to usability aspects. Thus, there is a significant research area to be explored in this direction, specifically related to user skill assessment, since this kind of evaluation has been little addressed in virtual simulators. It demands efforts to integrate research from the computer science, engineering, and health areas. | 2022-10-19T15:40:09.371Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "7556359dc5ef89a964fc666a8a04c3eea5008b1b",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbem/a/R6xV9nVj4BG3jf6CDTcM8Jj/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "30aa535bb24a0aa7590038b6245d1f09c263e180",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Computer Science"
],
"extfieldsofstudy": []
} |
51863023 | pes2o/s2orc | v3-fos-license | The Cryosphere Interaction between ice sheet dynamics and subglacial lake circulation : a coupled modelling approach
Subglacial lakes in Antarctica influence the flow of the ice sheet to a large extent. In this study we use an idealised lake geometry to study this impact. We employ (a) an improved three-dimensional full-Stokes ice flow model with a nonlinear rheology, (b) a three-dimensional fluid dynamics model with eddy diffusion to simulate the basal mass balance at the lake-ice interface, and (c) a newly developed coupler to exchange boundary conditions between the two individual models. Different boundary conditions are applied over grounded ice and floating ice. This results in significantly increased temperatures within the ice on top of the lake, compared to ice at the same depth outside the lake area. Basal melting of the ice sheet increases this lateral temperature gradient. Upstream of the lake the ice flow converges and accelerates by about 10% whenever basal melting at the ice-lake boundary is present. Above and downstream of the lake, where the ice flow diverges, a velocity decrease of about 10% is simulated.
Introduction
During the last decades our knowledge of subglacial lake systems has greatly increased. Since the discovery of the largest subglacial lake, Lake Vostok (Oswald and Robin, 1973; Robin et al., 1977), more than 270 other lakes have been identified so far (Siegert et al., 2005; Carter et al., 2007; Bell, 2008; Smith et al., 2009). Plenty of effort has been undertaken to reveal the partial secrets subglacial lakes may hold, mostly referring to Lake Vostok. For instance, the speculation that extremophiles may be encountered in subglacial lakes (e.g., Duxbury et al., 2001; Siegert et al., 2003) has been nurtured by microorganisms discovered in ice core samples (Karl et al., 1999; Lavire et al., 2006). These samples originate from the at least 200 m thick accreted ice drilled at the Russian research station Vostok (Jouzel et al., 1999). Discussions about the origin and history of the lake (Duxbury et al., 2001; Siegert, 2004; Pattyn, 2004; Siegert, 2005) are still ongoing. It is still unknown whether Subglacial Lake Vostok existed prior to the Antarctic glaciation and could have survived glaciation, or whether it formed subglacially after the onset of glaciation. The impact of subglacial lakes on the flow of the overlying ice sheet has been analysed (e.g., Kwok et al., 2000; Tikku et al., 2004) and modelled (Mayer and Siegert, 2000; Pattyn, 2003; Pattyn et al., 2004; Pattyn, 2008), but self-consistent numerical models, which include the modelling of the basal mass balance, are lacking at present. Several numerical estimates and models (West and Carmack, 2000; Williams, 2001; Walsh, 2002; Mayer et al., 2003; Thoma et al., 2007, 2008b; Filina et al., 2008) as well as laboratory analogues (Wells and Wettlaufer, 2008) of water flow within the lake have been carried out, permitting insights into the water circulation, energy budget and basal processes at the lake-ice interface. Finally, observations (Bell et al., 2002; Tikku et al., 2004) and numerical modelling (Thoma et al., 2008a) of accreted ice at the lake-ice interface gave insights about its thickness and distribution over subglacial lakes.
Early research assumed that subglacial lakes were isolated systems, but recently Gray et al. (2005), Wingham et al. (2006) and Fricker et al. (2007) found evidence that Antarctic subglacial lakes are connected, hence forming an extensive subglacial hydrological network. Several plans to unlock subglacial lakes exist (Siegert et al., 2004; Inman, 2005; Siegert et al., 2007; Schiermeier, 2008; Woodward et al., 2009), and valuable knowledge will become available as soon as direct samples of subglacial water and sediments are taken.
However, current knowledge about the interaction between subglacial lakes and the overlying ice sheet is lacking. The most important quantity exchanged between ice and water is heat. The exchange of latent heat associated with melting and freezing dominates heat conduction, but the latter process also has to be accounted for in subglacial lake modelling (Thoma et al., 2008b). Melting and freezing are closely related to the ice draft, which varies spatially over subglacial lakes (Siegert et al., 2000; Studinger et al., 2004; Tikku et al., 2005; Thoma et al., 2007, 2008a, 2009a). The ice draft, and hence the water circulation within the lake, is maintained by the ice flow across lakes. Without this flow, lake surfaces would even out (Lewis and Perkin, 1986). On the other hand, a spatially varying melting/freezing pattern will have an impact on the overlying ice sheet as well. In order to gain insight into the complex interaction processes between the Antarctic ice sheet and subglacial lakes, this study for the first time couples a numerical ice-flow and a lake-flow model with a simple asynchronous time-stepping scheme to overcome the problem of different adjustment time scales.
We describe the applied ice-flow model RIMBAY and the lake-flow model ROMBAX in the following two sections, respectively. Each section starts with a general description of the particular model, describes the applied boundary conditions, and presents the results of the uncoupled model runs. The newly developed RIMBAY-ROMBAX coupler RIROCO is introduced in Sect. 4, where we also discuss the impact of a coupled ice-lake system on the ice flow and lake geometry.
General description
The ice sheet model RIMBAY (Revised Ice sheet Model Based on frAnk pattYn) is based on the work of Pattyn (2003), Pattyn et al. (2004) and Pattyn (2008). Within this thermomechanically coupled, three-dimensional, full-Stokes ice model, a subglacial lake is represented numerically by a vanishing bottom-friction coefficient β² = 0, while high friction in grounded areas is represented by a large coefficient; in this study we use β² = 10⁶. The choice of the latter parameter has no influence on the model results as long as it is much larger than zero.
The constitutive equation governing the creep of polycrystalline ice and relating the deviatoric stresses τ to the strain rates ε̇ is given by Glen's flow law: ε̇ = Aτⁿ (e.g., Pattyn, 2003), with a temperature-dependent rate factor A = A(T). Here we apply the so-called Hooke rate factor (Hooke, 1981). Experimental values of the exponent n in Glen's flow law vary from 1.5 to 4.2 with a mean of about 3 (Weertman, 1973; Paterson, 1994); ice models traditionally assume n = 3. However, a simplified viscous rheology with n = 1 stabilises and accelerates the convergence behaviour of the implemented numerical solvers. Therefore, previous subglacial lake simulations within ice models were limited to viscous rheologies (Pattyn, 2008).
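As a minimal sketch of how Glen's flow law enters such a model's viscosity computation: from ε̇ = Aτⁿ the effective viscosity follows as η = ½ A^(−1/n) ε̇_e^((1−n)/n). RIMBAY uses the Hooke (1981) rate factor, for which we substitute a generic Arrhenius form with illustrative constants.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def rate_factor(T, A0=3.985e-13, Q=6.0e4):
    """Temperature-dependent rate factor A(T); a simple Arrhenius form
    standing in for the Hooke (1981) parameterisation used by RIMBAY.
    T in Kelvin; A0 and Q are illustrative values only."""
    return A0 * np.exp(-Q / (R * T))

def effective_viscosity(T, eps_e, n=3.0):
    """Effective viscosity eta = 0.5 * A^(-1/n) * eps_e^((1-n)/n),
    following from Glen's flow law eps = A * tau^n; eps_e is the
    effective strain rate (1/s)."""
    return 0.5 * rate_factor(T) ** (-1.0 / n) * eps_e ** ((1.0 - n) / n)

print(effective_viscosity(T=253.15, eps_e=1e-10))  # of order 1e14 Pa s
```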
Model setup and boundary conditions
The model domain used in this study is a slightly enlarged version of the one presented in Pattyn (2003): it consists of a rectangular, 168,100 km² large domain with a model resolution of 5 km (resolutions of 2.5 km and 10 km are also used for comparison) and 41 terrain-following vertical layers. The surface of the initially 4000 m thick ice sheet has an initial slope of 2% (similar to Lake Vostok, Tikku et al., 2004) from left (upstream) to right (downstream). An idealized circular lake with a radius of about 48 km and an area of about 7200 km² is located in the center of the domain, where a 1000 m deep cavity modulates the otherwise smoothly sloped bedrock (similar to Pattyn, 2008). The lake's maximum water depth is about 600 m, resulting in a volume of about 1840 km³ (the lake's geometry is also indicated in Figs. 3 and 4).
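A sketch of this idealised setup in Python, assuming the 168,100 km² domain is the square 410 km × 410 km; the cosine shape of the 1000 m cavity and the background bedrock slope are our own illustrative choices, not the paper's exact geometry.

```python
import numpy as np

dx = 5.0e3                      # horizontal resolution, m
L = 410.0e3                     # 410 km x 410 km = 168,100 km^2
x = y = np.arange(0.0, L + dx, dx)
X, Y = np.meshgrid(x, y, indexing="ij")

# Circular lake of radius 48 km (area ~7200 km^2) in the domain centre
r = np.hypot(X - L / 2, Y - L / 2)
lake = r <= 48.0e3

# Smoothly sloping bedrock with a 1000 m deep cavity under the lake
bed = -0.002 * X                                        # illustrative slope
bed[lake] -= 500.0 * (1.0 + np.cos(np.pi * r[lake] / 48.0e3))

# Basal friction coefficient: beta^2 = 0 over the lake, 1e6 elsewhere
beta2 = np.where(lake, 0.0, 1.0e6)
print(int(lake.sum()), "lake nodes")
```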
We apply a constant surface temperature of −50 °C at the ice surface, a typical value for central Antarctica (Comiso, 2000). The bottom-layer boundary temperature depends on the basal condition: above bedrock a Neumann boundary condition is applied, based on a geothermal heat flux of 54 mW/m² (a value suggested for the Lake Vostok region by Maule et al., 2005). Assuming isostasy above the subglacial lake, the bottom-layer temperature is at the pressure-dependent freezing point. This temperature, prescribing a Dirichlet boundary condition, depends on the local ice sheet thickness (T_b = −H × 8.7 × 10⁻⁴ °C/m; e.g., Paterson, 1994). Accumulation and basal melting/freezing are ignored during the initial experiments in this section, where no coupling is applied. In the subsequent coupling experiments, basal melting and freezing at the lake's interface, as modelled by the lake-flow model, is accounted for (Sect. 4).
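The mixed basal thermal boundary condition can be sketched as follows; the ice conductivity value and the sign convention of the Neumann gradient are illustrative assumptions, not RIMBAY's actual implementation.

```python
import numpy as np

def basal_temperature_bc(H, lake, G=0.054, k=2.1):
    """Basal boundary condition sketch.
    Lake nodes (Dirichlet): pressure-melting point
        T_b = -8.7e-4 degC/m * H, with H the ice thickness in m.
    Grounded nodes (Neumann): vertical temperature gradient from the
    geothermal heat flux G (W/m^2) and an assumed ice conductivity
    k (W/m/K), dT/dz = -G/k with z positive upward."""
    T_dirichlet = np.where(lake, -8.7e-4 * H, np.nan)
    dTdz_neumann = np.where(lake, np.nan, -G / k)
    return T_dirichlet, dTdz_neumann

H = np.full((3, 3), 4000.0)
lake = np.zeros((3, 3), bool); lake[1, 1] = True
Tb, dTdz = basal_temperature_bc(H, lake)
print(Tb[1, 1], dTdz[0, 0])  # -3.48 degC over the lake; -0.0257 K/m over rock
```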
The lateral boundary conditions are periodic; hence values of ice thickness, velocity, and stress are copied from the upstream (left) side to the downstream (right) side of the model domain and vice versa. The same applies to the lateral borders along the flow. The initial integration starts from an isothermal state. The basal melt rate is set to zero over bedrock as well as over the lake.
Model improvements
Compared to Pattyn (2008) we improved the model RIMBAY in two significant ways to allow a numerically stable and fast representation of the more realistic non-linear flow law with an exponent of n = 3. First, a gradual increase of the friction coefficient β² at the lake's boundaries is considered (Fig. 1a). Physically, this smoothing can be interpreted as a lubrication of the ice sheet base on the grounded side and as stiffening due to debris on the lake side of the lake's edge. Our experiments have shown that a slightly prescribed β²-smoothing coefficient of (1/0) is suited to decrease the integration time significantly and stabilises the numerical results. In this notation, (1) indicates one lubricated node on the bedrock side of the boundary and (0) indicates no stiffened (debris) node on the lake side. If the number of lubricated nodes is reduced to zero, the model's integration time increases without a significant impact on the velocity field. If the number of stiffened nodes over the lake is increased, lower velocities over the lake are achieved. However, for higher model resolutions than used in this study, higher β values should be considered to make the transition more realistic.
Second, we implement a three-dimensional Gaussian-type filter to smooth the viscosity as well as the vertical resistive stress with a variable filter width. Without this filter, the numerics become unstable. Figure 1b shows a one-dimensional equivalent of the implemented three-dimensional filter. Figure 2 compares the effect of three different smoothing parameters on the friction coefficient β², as well as the impact of the Gaussian-type filter on the viscosity for three different filter widths. Sensitivity experiments have shown that with a slight transient β² smoothing at the lake edges combined with a gentle (quadratic) Gaussian filter, as shown in Fig. 2b, numerical stability is achieved.
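The sketch below illustrates both stabilisation measures on an idealised horizontal slice: a "(1/0)" lubrication of the single ring of grounded nodes adjacent to the lake, and a Gaussian smoothing of the viscosity field. The intermediate β² value and the mapping of the paper's "filter width" onto a Gaussian sigma are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

ny, nx = 82, 82                     # ~410 x 410 km at 5 km resolution (assumed)
Y, X = np.mgrid[0:ny, 0:nx]
lake = np.hypot(X - nx // 2, Y - ny // 2) < 9.6   # ~48 km radius in cells

beta2 = np.where(lake, 0.0, 1.0e6)  # frictionless lake, stiff bed elsewhere

# "(1/0)" smoothing: lubricate the one ring of grounded nodes touching the
# lake; no stiffened nodes on the lake side. The value 0.5e6 is hypothetical.
ring = binary_dilation(lake) & ~lake
beta2[ring] = 0.5e6

# Gaussian-type smoothing of the viscosity field ("filter width of two" is
# represented here by sigma = 2 grid cells, an assumed correspondence).
eta = np.random.default_rng(0).lognormal(mean=30.0, sigma=0.3, size=(ny, nx))
eta_smooth = gaussian_filter(eta, sigma=2.0, mode="wrap")  # periodic domain
```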
Results
The standard experiment is a thermomechanically coupled full-Stokes (FS) model with a horizontal resolution of 5 km.
A quasi-steady state is reached after 300 000 years. Figure 3a shows the lake's position and depth in the center of the model domain. The frictionless boundary condition over the lake results in an increased velocity, not only above the lake, but also in its vicinity (Fig. 3b). From mass conservation it follows that the (vertically averaged) horizontal velocity converges towards the lake from upstream and diverges downstream. The flattened ice sheet surface over the lake is a consequence of the isostatic adjustment. This effect is also visible in the geometry of the profiles in Fig. 4 and in particular in Fig. 5. The inclined lake-ice interface is maintained by the constant ice flow over the lake and is hence independent of a possible basal mass exchange.
Figure 3b shows the vertically averaged horizontal velocity, and Fig. 4a a vertical cross section of the velocity along the flow at the lake's center at y = 200 km. Over the lake, the ice sheet behaves like an ice shelf, featuring a vertically constant velocity. Towards the lake, the surface velocity increases by more than 70%, from about 0.7 m/a to about 1.2 m/a. The largest velocity gradients occur in the vicinity of the grounding line (Figs. 3b, 4a, 5). Convergence of ice towards the lake results in an accelerated ice flow which undulates at the lake's edges because of significant changes in the ice thickness and ice sheet surface gradients (yellow and red lines in Fig. 5, respectively). The surface velocity (green line in Fig. 5) accelerates towards the lake until the ice sheet surface flattens. The velocity increase in deeper layers close to the bedrock (blue line in Fig. 5) is suspended just before the frictionless lake interface is reached, because of a strong increase of the ice thickness filling the trough. The maximum velocity is reached across the lake (Figs. 3b, 4a, 5), where the basal friction is zero. The vertical temperature profile (Fig. 4b) is nearly linear, as accumulation and basal melting are neglected. The geothermal heat flux is not sufficient to melt the bottom of the grounded ice, but over the lake the freezing point is maintained by the boundary condition. This results in submerging isotherms. At the downstream grounding line a slight overshoot of the upwelling isotherm is observed.
Robustness of the results
To investigate the impact of the horizontal resolution on the model results, simulations with a coarser 10 km as well as a finer 2.5 km grid resolution have been performed. With the coarser resolution most features are reproduced well, but there is a significant impact on ice velocities, in particular along the grounding line, where the differences reach about 10%. In general, the model with the higher spatial resolution has increased velocities along the flowlines across the lake and decreased velocities outside. In addition, the local velocity maximum at the grounding line (Fig. 5) cannot be resolved with the 10 km resolution. The finer 2.5 km model resolution needs a much longer integration time, without producing significant differences in the results compared to the intermediate 5 km grid resolution.
We also investigated whether a higher-order model, neglecting resistive stresses and vertical derivatives of the vertical velocity (see Pattyn, 2003; Saito et al., 2003; Marshall, 2005), is able to reproduce the results obtained with the full-Stokes model. All other aspects of the geometry as well as the parametrization are kept identical. Our experiments indicate a moderate impact on the ice flow: calculated velocities across the subglacial lake are about 5% lower compared to the full-Stokes model. In the vicinity of the lake, the impact of the full-Stokes terms decreased, but is still enhanced along the flowlines. Although the FS-model implements more mathematics in the numerical code, it converges faster than the higher-order model; hence all further studies are performed with the full-Stokes model.
Lake flow model ROMBAX
General description
To simulate the water flow in the prescribed subglacial lake we apply ROMBAX, a terrain-following, primitive-equation, three-dimensional fluid dynamics model (e.g., Griffies, 2004). ROMBAX simulates the interaction between ice and subjacent water in terms of melting and freezing, according to heat and salinity conservation and the pressure-dependent freezing point at the interface (Holland and Jenkins, 1999). The model uses spherical coordinates and has been applied successfully to ice-shelf cavities (e.g., Grosfeld et al., 1997; Williams et al., 2001; Thoma et al., 2006) as well as to subglacial lakes (Williams, 2001; Thoma et al., 2007, 2008a,b; Filina et al., 2008; Thoma et al., 2009a,b,c; Woodward et al., 2009).
Model setup and boundary conditions
The bedrock topography and the ice draft, needed for the lake-flow model ROMBAX, are obtained from the output (Sect. 2.4) of the ice-flow model RIMBAY (see Sect. 4 for further details). The horizontal resolution (0.025° × 0.0125°, about 0.7 × 1.4 km), the number of vertical layers (16), as well as the horizontal and vertical eddy diffusivities (5 m²/s and 0.025 cm²/s, respectively) are adopted from a model of subglacial Lake Concordia (Thoma et al., 2009a). In a model domain of about 170 × 88 × 16 grid cells, the circulation within the lake as well as the melting and freezing rates at the lake-ice interface are calculated. At the bottom of the lake a geothermal heat flux of 54 mW/m², consistent with the ice-flow model's boundary condition, is applied. Previous subglacial lake simulations of Lake Vostok (Thoma et al., 2007, 2008a; Filina et al., 2008), Lake Concordia (Thoma et al., 2009a), or Lake Ellsworth (Woodward et al., 2009) used a prescribed average heat conduction into the ice (Q_Ice = dT/dz × 2.1 W/(K m)), based on borehole temperature measurements and thickness temperature-gradient estimates. The availability of the results of the thermomechanical ice-sheet model RIMBAY permits a spatially varying Q_Ice (Fig. 6a). Across the lake's center, a general draft-following gradient from about 27 mW/m² on the upstream side of ice flow to about 24 mW/m² on the downstream side results from the modelled temperature distribution in the ice (Fig. 4b) and is used as input for the lake-flow model.
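A minimal sketch of this boundary condition follows: the heat conduction into the ice is the conductivity (2.1 W/(K m), as quoted above) times the basal vertical temperature gradient taken from the ice model. The quoted surface and basal temperatures of the uncoupled, nearly linear profile reproduce the stated magnitude.

```python
K_ICE = 2.1  # thermal conductivity of ice, W/(K m), as quoted in the text

def q_ice(T_base, T_above, dz):
    """Heat conducted upward into the ice (W/m^2) from a one-sided
    difference over the lowest part of the temperature profile."""
    return K_ICE * (T_base - T_above) / dz

# Order-of-magnitude check with the nearly linear uncoupled profile:
# surface at -50 degC, base at the freezing point of a ~4000 m column.
T_surface, T_base, H = -50.0, -3.48, 4000.0
print(q_ice(T_base, T_surface, H) * 1e3, "mW/m^2")  # ~24, cf. Fig. 6a
```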
Results
The initial model run starts with a lake at rest. After about 200 years a quasi-steady state is reached. The circulation within the lake is shown in Fig. 6b-c. The vertically integrated mass transport stream function (with a strength of about 1.3 mSv, 1 mSv = 1000 m³/s) as well as the zonal overturning (about 0.3 mSv) show a two-gyre structure, while the meridional overturning (about 1.3 mSv) indicates just one anticyclonic gyre. The strength of the mass transport is between those modelled for Lake Vostok and those for Lake Concordia (Thoma et al., 2009a), and hence reasonable for subglacial lakes. There is only a slight ice draft slope, from about 4006 m to 3927 m, across the 90 km of the lake (Figs. 4 and 5). This results in a decrease of melting from about 12 mm/a in the west to a negligible freezing along the eastern shoreline (Fig. 6d). A significant amount of freezing would only be modelled if the ice draft had a steeper slope.
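To put the transport strength in perspective, the back-of-the-envelope calculation below combines it with the lake volume quoted in Sect. 2; the implied flushing timescale is purely illustrative and not stated in the text.

```python
# Flushing timescale implied by the quoted numbers (volume from Sect. 2,
# transport from above): time for the gyre to recirculate one lake volume.
volume_m3 = 1840.0e9          # 1840 km^3
transport_m3s = 1.3 * 1000.0  # 1.3 mSv, 1 mSv = 1000 m^3/s

seconds_per_year = 365.25 * 24 * 3600
print(volume_m3 / transport_m3s / seconds_per_year, "years")  # ~45 years
```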
General description
The bedrock topography and the ice draft, necessary to set up the geometry for the lake-flow model ROMBAX (Sect. 3.2), were obtained from the modelled output geometry of the ice-flow model RIMBAY (Sect. 2.4). In addition, the thermodynamic boundary condition of the heat conduction into the ice was calculated from the temperature gradient at the ice sheet's bottom. The left part of Fig. 7 (gray lines) shows schematically the performed operations. A coordinate conversion is necessary, as RIMBAY uses Cartesian coordinates while ROMBAX is based on spherical coordinates.
The modelled melting and freezing rates (Fig. 6d) are used to replace the previously zero-constrained lower boundary condition in the ice-flow model RIMBAY (indicated by the central triangle within the yellow area in Fig. 7). Again a coordinate transformation is necessary. To speed up the integration, ice geometry, ice flow, and ice temperature from the initial model run are reused (indicated by the black-lined triangle within the blue area in Fig. 7). An additional process has to be considered, since the lake's area may change during an ice-model run. As the basal mass balance (melting/freezing) depends on the former lake-model results and cannot be calculated during a specific model run, extrapolation of neighbouring values may be necessary (indicated by the embedded yellow oval in the blue area of Fig. 7).
In each coupling cycle, successive initialisations of the lake-flow model ROMBAX are performed with the slightly changed ice draft and water column thickness (indicated by the right triangle in the yellow area of Fig. 7), as well as the temperature field of the previous model run (indicated by the central triangle in the orange area of Fig. 7). Because the lake-flow model does not permit dynamically changing geometry, temperatures of emerging nodes have to be extrapolated from neighbouring nodes (indicated by the embedded yellow oval in the orange area of Fig. 7). It is not necessary to reuse (and possibly extrapolate) the water circulation, as it is based on the tracer (density) distribution and converges quickly.
The coupling mechanism, including the initial starts of the individual models, parameter exchanges, coordinate transformations, and restart initialisations, is embedded into and controlled by the RIMBAY-ROMBAX coupler RIROCO.
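A schematic, runnable skeleton of one such coupling cycle is sketched below. The two model "runs" are trivial stubs (all function names, shapes, and numbers are hypothetical stand-ins for the non-public RIMBAY/ROMBAX interfaces); what the sketch does reflect is the order of operations and the neighbour-averaging used to fill nodes that emerge when the lake area changes.

```python
import numpy as np

rng = np.random.default_rng(1)

def rimbay_restart(basal_melt):
    """Stub ice-flow restart: returns ice draft (m) and basal dT/dz (K/m)."""
    draft = 3950.0 + rng.normal(0.0, 10.0, basal_melt.shape)
    return draft, np.full(basal_melt.shape, 0.0116)

def rombax_restart(draft, q_ice):
    """Stub lake-flow restart: returns a basal mass balance (m/a) with a
    few NaNs marking newly emerged lake nodes that need extrapolation."""
    melt = 0.012 * (draft - draft.min()) / (np.ptp(draft) + 1e-9)
    melt[rng.random(melt.shape) < 0.02] = np.nan
    return melt

def fill_new_nodes(melt):
    """Fill emerged nodes with the mean of their valid 3x3 neighbours."""
    out = melt.copy()
    for j, i in zip(*np.where(np.isnan(melt))):
        block = melt[max(j - 1, 0):j + 2, max(i - 1, 0):i + 2]
        vals = block[~np.isnan(block)]
        if vals.size:
            out[j, i] = vals.mean()
    return out

melt = np.zeros((88, 170))          # initial run: zero basal mass balance
for cycle in range(3):              # "just a few iteration cycles" suffice
    # (coordinate transforms Cartesian <-> spherical omitted in this stub)
    draft, dTdz = rimbay_restart(basal_melt=melt)
    melt = fill_new_nodes(rombax_restart(draft, q_ice=2.1 * dTdz))
    print(cycle, f"mean melt: {np.nanmean(melt) * 1e3:.2f} mm/a")
```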
In Sect. 2.4 and Sect. 3.3 the results of the initial runs of the individual models have been described. So far the only parameter exchange between both models was the unilateral initialisation of the lake-flow model ROMBAX with the modelled geometry and temperature gradient of the ice-flow model RIMBAY (indicated by the gray lines in Fig. 7). The real coupling procedure starts when the results of ROMBAX are reinserted into RIMBAY (indicated by the black lines in Fig. 7).
Results
From ROMBAX the basal mass balance at the ice sheet-lake interface is considered for subsequent initialisations of RIMBAY. Melting dominates freezing (which is negligible) and hence the ice sheet is losing mass. Note that the molten ice does not affect the lake's volume, neither in the ice-sheet model nor in the lake-flow model, as this is constant by definition. This mass imbalance can be interpreted as a virtual constant water flow out of the lake without any feedback to the ice sheet. The coupler RIROCO applies restarts from previous model runs. Consequently the models reach their new quasi-steady state after a significantly shorter integration time. Ice sheet volume in the model domain decreases by about 600 km³ per 100 000 years, equivalent to about 3.5 m of thickness. The ice draft reduction in the lake-flow model within 100 000 years is shown in Fig. 8a. Most ice is lost in the center of the lake (up to 8 m), and the area of maximum mass loss is slightly shifted to the upstream side of the lake. Consequently, the water column thickness increases where most ice is molten and decreases where less ice is molten (Fig. 8b). However, the ice-thickness reduction and slope adjustment are too small to change the surface pressure on the lake water, and hence the water flow, significantly. Just a few iteration cycles are needed to bring this coupled ice-lake system into a quasi-steady state.
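As a sanity check, the quoted volume loss and the model domain area from Sect. 2 are mutually consistent:

```python
# 600 km^3 lost per 100 000 years, spread over the 168 100 km^2 domain:
print(600.0 / 168100.0 * 1000.0, "m mean thickness loss")  # ~3.57 m
```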
Several impacts of bottom melting on the ice on top of the lake are observed. First, the temperature gradient at the ice sheet's bottom is increased. This results in an increase of heat conduction into the ice by about 22% (Fig. 8d), compared to a model run without basal melting (Fig. 6a). Consequently, more heat is extracted from the lake and the modelled average melting decreases by about 7% (Fig. 8c). Second, the ice sheet thickness above the lake is reduced. This increases the surface gradient towards the upstream part and decreases the surface gradient towards the downstream part. Hence, the ice flow accelerates upstream and decelerates downstream (Fig. 9a). The magnitude of the ice velocity change is about 10%. Third, bottom melting removes mass, and a vertical downward velocity follows from mass conservation. This advects colder ice from the surface towards the bottom, resulting in a relative cooling of up to −2.1 °C (Fig. 9b) above the lake. This negative temperature anomaly (compared to the model run without melting, Fig. 4b) is advected downstream by the horizontal velocity. The lake-ice interface itself is cooled by less than 0.007 °C, as it is maintained at the pressure-dependent freezing point. The cooling visible at the lateral boundaries in Fig. 9b results from the applied periodic boundary condition. It is an artifact of the numerical representation of the setup and is not discussed here.
Summary
Observations from space show that large subglacial lakes have a significant impact on the shape and dynamics of the Antarctic Ice Sheet, as they flatten the ice sheet's surface (Siegert et al., 2000; Kwok et al., 2000, 2004; Leonard et al., 2004; Tikku et al., 2004; Siegert, 2005). These regions depict a change in surface slope due to the isostatic adjustment of the ice sheet. This, combined with the lack of bottom friction, results in an observable redirection of ice flow. In this study, we apply a newly coupled ice sheet-lake flow model to an idealized ice sheet-lake configuration to investigate the feedbacks between the individual systems. The full-Stokes ice sheet model (Pattyn, 2008) is improved to handle a more realistic non-linear rheology. The lake-flow model is based on a three-dimensional fluid dynamics model with eddy diffusivity and simulates the lake flow as well as the mass balance at the lake-ice interface. This model, previously only applied to lake-exclusive studies with a prescribed ice thickness and bedrock, now receives its geometry directly from the ice-flow model. In addition, the heat flux from the lake into the ice sheet is an exchange parameter. Besides other external forcing fields, the ice-flow model receives the basal mass balance at the interface from the dynamic lake-flow model.
In order to stabilise the ice-flow model numerically with a nonlinear rheology, a slight Gaussian filtering of the ice viscosity is necessary.
The ice flow converges and accelerates towards the lake, where an ice-shelf-type flow structure establishes itself. Mass conservation requires a downstream divergence of flow, because of the deceleration when the flow reaches the bedrock again.
Melting at the lake-ice interface increases the ice flow acceleration upstream and decreases the ice flow on top of the lake as well as downstream, because of the reduced surface slope. This impact on the velocity can be traced about 75 km up- and downstream of the lake in this specific setting, which corresponds to about 20 times the ice sheet thickness. The temperature at the ice sheet bottom on top of the lake is at the pressure melting point, and hence warmer than the bedrock-based ice in the vicinity. According to our simulation, melting at the lake-ice interface reduces the temperature on top of the lake. Advection transports the relatively colder ice downstream, and hence the temperature-dependent rheology will have impacts on the ice flow beyond the lake.
Our idealised configuration of an ice-lake system indicates an important impact of the interaction between both systems on the ice sheet's dynamics. Next applications of this new coupled model will focus on realistic configurations, such as Lake Vostok and its glacial drainage system, in order to investigate the flow-dynamical impact, which can be compared with observations.
Fig. 1. (a) Friction coefficient β² (scaled to the maximum of 10⁶) along the central x-axis for an unsmoothed and two test-smoothed cases. The smoothing coefficient (x/y) represents the number of nodes smoothed outside and inside of the lake's border, respectively. (b) Weight factors for a Gaussian-type filter with widths from one (red) to five (cyan), depending on the distance to the central node located at zero.
Fig. 2. Logarithm of viscosity η of the surface layer and bedrock-friction parameter β² for an idealized model. (a) No β²-smoothing or Gaussian filtering. (b) Slight β²-smoothing (1/0) and Gaussian filtering with a filter width of two. (c) Strong β²-smoothing (3/2) and Gaussian filtering with a filter width of five. Note that the figures represent the initial conditions for a FS-model with the applied filter and β²-smoothing.
Fig. 3. Results for a full-Stokes ice dynamic model with a horizontal resolution of 5 km. (a) Lake depth (color), ice sheet surface elevation (dashed contours), and ice-flow velocity (black arrows). (b) Vertically averaged horizontal velocity of the standard full-Stokes experiment.
Fig. 6. (a) Heat conduction into the ice (Q_Ice). (b) Vertically integrated mass transport stream function; positive values indicate a clockwise circulation, negative values an anticlockwise circulation. (c) Zonal (from west to east) and meridional (from south to north) overturning. (d) Basal mass balance.
Fig. 7. Coupling scheme schematics. The upper part represents the ice-flow model RIMBAY, the lower part the lake-flow model ROMBAX. In the middle the parameters exchanged by the coupler RIROCO are shown. The gray lines in the left part of the figure represent the initial start-up sequence (the two initial model runs); dashed loops indicate cycles (successive model runs) that may repeat an a priori unspecified number of times. Two additional yellow ovals indicate the individual models' needs regarding successive restarts/coupling: The ice flow calculated by RIMBAY may result in new lake nodes. For these nodes the basal mass balance is calculated by averaging adjacent nodes. For ROMBAX a modified geometry requires the extrapolation of temperatures (and other tracers) from a previous model run to skip the time-consuming spin-up process.
Fig. 8. Impact of melting after 100 000 years on (a) the ice draft, (b) the water column thickness, and (c) the basal mass balance. Shown is the geometry difference between the lake-flow model restart and the initial geometry. (d) Indicates the heat conduction into the ice (Q_Ice). | 2019-04-24T13:10:50.076Z | 2009-09-29T00:00:00.000 | {
"year": 2009,
"sha1": "bad64a86e4325e1fe2288d7afd6a6d220bab157b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5194/tc-4-1-2010",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f14e762b9fac73b60a201d23b8b724a6c7746cf6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
245730406 | pes2o/s2orc | v3-fos-license
The Study on Mechanical Properties of HDPE Hybrid Polymer-Based Hybrid Composites
Polymers and their composites are used in many engineering applications as alternatives to metals because of their low cost, light weight, and durability. Hybridization is the process of combining two or more similar or dissimilar materials; it increases performance and efficiency. In this study, the mechanical properties of HDPE polymer-based hybrid composites were investigated. The composites were fabricated from various synthetic fibers (SGF, PTFE, SCF) and synthetic fillers (silica, hydroxyapatite, zirconia) by melt mixing in a twin-screw extruder, followed by injection molding. The mechanical properties of the samples, namely tensile and flexural behaviour, were measured on a universal testing machine, and the impact strength was measured on an Izod impact testing machine. The results show that the tensile strength of the hybrid composite sample TR-4 is higher than that of the other composites, while sample TR-6 shows the highest flexural strength. The impact strength increases when the filler materials, 2% zirconia and 2% hydroxyapatite, are added to the composite material, i.e., the TR-6 sample.
I. INTRODUCTION
Materials serve as the raw inputs for structures and manufactured products across a wide range of engineering applications. A composite material is a multiphase material: it is created artificially, and its constituent phases are chemically distinct and separated by distinct interfaces.
Composites, which combine light weight with high strength, have gained wide acceptance as engineering materials. Research is ongoing to obtain composites through optimal combinations of various materials that would significantly improve tribological and mechanical characteristics, which could lead to large-scale replacement of metallic materials.
In spite of all these advantages, composites are still at an early stage of their evolution and possess certain limitations, such as difficulties in fabrication and repair and a lack of standardized inspection and testing procedures. Composites also come in several forms, each with its own set of requirements, such as filled, flake, particle, and laminar composites.
Low density, an excellent strength-to-weight ratio, good abrasion resistance, and self-lubrication are all characteristics of polymeric composites, in which polymers are used as the base matrix.
Polyester, epoxy, low-density polyethylene (LDPE), high-density polyethylene (HDPE), polypropylene, and nylon acrylics are some of the most popular polymers utilised in these composites. Here, synthetic fibers (SGF, PTFE, SCF) are used as reinforcement, selected for their well-balanced properties. These fibres are typically sized to allow for effective matrix bonding, which improves mechanical characteristics.
Filler is a common substance used in the manufacture of plastic goods. The purpose of filler is to alter the characteristics of the original plastic.
Filler materials are particles that are added to resins or binders (plastics, composites) to improve specific properties, reduce costs, or a combination of the two.
A. MATERIALS
The materials used in the present study are listed in the accompanying table. To avoid plasticization and hydrolysis from humidity, and to achieve sufficient homogeneity, the polymer and fibres were dried at 80°C before mixing; compounding was carried out at 220°C. To achieve a feed rate of 5 kg/h, the screw speed of the extruder was set to 100 rpm. The extrudate produced was a cylindrical rod that was quenched in cold water before being pelletized with a pelletizing machine. Before the blended sample was collected, the initial extruded material was discarded to remove impurities from the previous extrusion stroke. All blended composite pellets were dried at 100°C before injection moulding. The pelletized polyblend material obtained from the co-rotating twin-screw extruder was used to injection-mould all of the test specimens. As shown in Figure 4.5, the temperatures in the two zones of the injection-moulding barrel were kept at 265°C and 290°C, respectively, while the mould temperature was kept at 65°C. The screw speed was set to 10-15 rpm and the injection pressure to 700-800 bar. During injection moulding, the injection, cooling, and ejection times were kept at 10, 35, and 2 seconds, respectively. All moulded specimens conform to ASTM standards.
C. IMPACT TEST
The increase in tensile strength is due to better bonding, adhesion of the fiber to the matrix surface, and good dispersion of the fiber in the matrix.
The TR-6 sample, however, shows a much lower tensile strength than the others because of the filler materials added to it.
Even so, TR-5 remains an exceptional case, whereas TR-4 shows the highest modulus. ➢ Impact test - The impact strength of TR-6 was found to be greater than that of all the other composites and the base material. In contrast, TR-5 was found to have very low impact strength. Hence, it is concluded that by adding the fillers and reinforcement in the proportions of TR-6, the impact strength can be increased.
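For reference, the sketch below shows how the reported quantities are typically computed from raw test readings. The specimen dimensions and loads are hypothetical placeholders (the paper only states that ASTM-standard specimens were used), and the standard formulas for tensile strength (F/A), three-point-bend flexural strength (3FL/2bd²), and Izod impact strength (energy per unit thickness) are assumed rather than taken from the paper.

```python
def tensile_strength_mpa(peak_load_n, width_mm, thickness_mm):
    """Ultimate tensile strength: peak load over gauge cross-section."""
    return peak_load_n / (width_mm * thickness_mm)  # N/mm^2 == MPa

def flexural_strength_mpa(peak_load_n, span_mm, width_mm, thickness_mm):
    """Three-point-bend flexural strength: 3*F*L / (2*b*d^2)."""
    return 3.0 * peak_load_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

def izod_impact_j_per_m(absorbed_energy_j, thickness_mm):
    """Izod impact strength: absorbed energy per unit specimen thickness."""
    return absorbed_energy_j / (thickness_mm * 1.0e-3)

# Hypothetical readings, for illustration only:
print(tensile_strength_mpa(1500.0, 13.0, 3.2))        # ~36 MPa
print(flexural_strength_mpa(120.0, 64.0, 12.7, 3.2))  # ~89 MPa
print(izod_impact_j_per_m(0.5, 3.2))                  # ~156 J/m
```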
V. SCOPE OF FUTURE WORK
Plastic is set to become a material of the future as a metal substitute. Here, composite materials are made using thermoplastics. Compared with traditional materials, composites can meet a wide range of design requirements while saving significant weight and providing a high strength-to-weight ratio.
In this work, the effect of a notch on mechanical properties was investigated at room temperature.
Further, the effect of temperature rise on the mechanical properties of composite materials can be studied.
• The effect of a V-notch is studied here. The effect of other stress concentrators, such as circular or elliptical holes, can be studied in the future.
• Wear and abrasion test analyses can be carried out.
• The results obtained here can serve as a database for future work on these concepts. | 2022-01-06T16:04:39.808Z | 2021-11-05T00:00:00.000 | {
"year": 2021,
"sha1": "ed37903e4d564bb878942a3511bdec4235cdf581",
"oa_license": null,
"oa_url": "https://doi.org/10.32628/ijsrst21861",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ae8b6b9f7a3f8ded6a11277187b6f3f6b827ab88",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
24122798 | pes2o/s2orc | v3-fos-license | Agroterrorism: Where Are We in the Ongoing War on Terrorism?
The U.S. agricultural infrastructure is one of the most productive and efficient food-producing systems in the world. Many of the characteristics that contribute to its high productivity and efficiency also make this infrastructure extremely vulnerable to a terrorist attack by a biological weapon. Several experts have repeatedly stated that taking advantage of these vulnerabilities would not require a significant undertaking and that the nation's agricultural infrastructure remains highly vulnerable. As a result of continuing criticism, many initiatives at all levels of government and within the private sector have been undertaken to improve our ability to detect and respond to an agroterrorist attack. However, outbreaks, such as the 1999 West Nile outbreak, the 2001 anthrax attacks, the 2003 monkeypox outbreak, and the 2004 Escherichia coli O157:H7 outbreak, have demonstrated the need for improvements in the areas of communication, emergency response and surveillance efforts, and education for all levels of government, the agricultural community, and the private sector. We recommend establishing an interdisciplinary advisory group that consists of experts from public health, human health, and animal health communities to prioritize improvement efforts in these areas. The primary objective of this group would include establishing communication, surveillance, and education benchmarks to determine current weaknesses in preparedness and activities designed to mitigate weaknesses. We also recommend broader utilization of current food and agricultural preparedness guidelines, such as those developed by the U.S. Department of Agriculture and the U.S. Food and Drug Administration.
The U.S. agricultural infrastructure is one of the most productive and efficient food-producing systems in the world (11,15,43,62,66). In 2001, food production generated cash receipts in excess of $900 billion, about 10% of the U.S. gross domestic product (9,11,62). The U.S. agricultural system also contributes about $50 billion annually to the national trade balance; the share of U.S. agricultural commodities sold overseas is more than double that of exports sold by other U.S. industries (11,15,43,62). And, while the agricultural community employs less than 3% of the U.S. population directly, it employs approximately 15% indirectly (9,11). It is this increased productivity and efficiency of the U.S. farming system that allows Americans to spend less than 11% of their disposable income on food, compared with the global average of around 20 to 30% (15,43). Jon Wefald, President of Kansas State University, said it best when he stated that ''Our ability to produce safe, plentiful, and inexpensive food creates the discretionary spending that drives the American standard of living.'' The 11 September 2001 Al-Qaeda terrorist attack and the subsequent anthrax attacks forced Americans to acknowledge their vulnerability to terrorism (35,43). As a result, the U.S. Government has made substantial investments to improve counterterrorism capabilities, including enhancements in the ability to detect, prevent, and respond to terrorist threats and attacks (15,35,43,47). However, according to several experts, the agriculture and food industries remain highly vulnerable to intentional disruption (11,15,35,43). U.S. Department of Agriculture (USDA) officials estimate that a single agroterrorist attack on the livestock industry with a highly infectious agent, such as foot-and-mouth disease (FMD), could cost the U.S. economy between $10 billion and $30 billion (50). This level of impact would have a significant effect on domestic and international livestock markets. Furthermore, by interrupting the physical supply chain, the terrorist can cause not only economic harm but also fear (57). This kind of fear could generate a profound loss of consumer confidence, not unlike that caused in the airline and tourist industries as a result of the 11 September terrorist attack.
VULNERABILITIES IN THE AGRICULTURAL INFRASTRUCTURE
Peters (44) states that an attack on the agricultural infrastructure is more than a passing interest to the Al-Qaeda terrorist network. Hundreds of U.S. agricultural documents that had been translated into Arabic were seized in Afghanistan following the U.S. invasion of that country. Many experts agree that the threat of bioterrorism against the U.S. agricultural and food infrastructure is growing and that the nation is not adequately prepared to handle such an attack (9,10,16,43).
TABLE 1. Key vulnerabilities in agriculture and the exploitation of these vulnerabilities (11,35,42)
There are several key vulnerabilities (Table 1) within the agricultural infrastructure, which make it the likely target of a bioterrorist attack. First, the U.S. agricultural market is largely dependent on large populations of domestic livestock and poultry (10,43). An average-sized dairy farm in the United States houses at least 1,500 lactating cows at any one time, with some of the larger farms containing about 10,000 animals (11). These herds are usually bred and reared in close proximity to one another in highly crowded populations (10,11,43). Furthermore, the size and scale of contemporary agricultural facilities have largely precluded the option of farmers attending to their animals on an individual basis (12). Instead, producers are forced to monitor and regulate their livestock populations by referring to aggregate statistics, such as total milk yields (12). This factor alone would make it difficult to contain a contagious disease, especially if the disease were airborne, as many infectious disease agents are easily spread in crowded populations (1,13). Second, the United States has not experienced a major foreign animal disease outbreak in livestock or poultry over the past 20 years; therefore, our animals have little or no innate resistance to these foreign pathogens (9,11,15,43). By policy, we do not vaccinate our livestock and poultry against these diseases (10,43). Third, changes in husbandry practices and biotechnology innovations designed to increase the quality and quantity of livestock and crop production have led to an increased susceptibility to pathogenic agents (11,43). Modifications, such as sterilization programs, dehorning, branding, hormone injections, and use of antibiotics, have led to an increase in the stress levels and lowered the natural tolerance of animals to disease from contagious organisms (11,43). Lastly, the problem is made worse by the rapid movement of vast amounts of product over broad geographies and through many hands from farm to fork (11,15). According to one survey of U.S. barn auctions, 20 to 30% of cattle are regularly dispatched to locations at least 48.28 km from their original point of purchase and, in many cases, cross several states within 36 to 48 h of leaving the sales yard (11). This rapid transfer of livestock only helps to increase the possibility that pathogenic agents will spread well beyond the original site of a specific outbreak before health officials become aware that a problem exists (12,15). Clearly, the methods used by the U.S. agricultural system to increase productivity and efficiency also contribute to the increase in vulnerability to a biological attack (11,15).
A general lack of physical security and robust surveillance systems further exacerbates our vulnerability to a biological attack (11,15,16,35,43). According to Chalk (11), the majority of the agricultural community has simply not thought about, much less physically sought to guard itself against, a deliberate act of sabotage. Most U.S. farms tend to operate in a relatively open manner, usually lacking physical security, especially in outlying fields and feedlots, and seldom with vigorous means to prevent unauthorized access (11,43). Food processing and packing plants similarly tend to lack physical security and safety preparedness measures (12). Further complicating matters, the current U.S. disease-reporting system does little to promote early warning and identification of pathogenic outbreaks (12). Responsibility for reporting occurrences of livestock diseases lies with the agricultural producers; however, producers are reluctant to report outbreak occurrences for several reasons (12). First, channels of communication with the appropriate regulatory agencies or primary or secondary personnel are often confusing and rudimentary (12). Second, there are no standardized and consistent programs to compensate producers affected by a pathogenic outbreak, as indemnity payments are usually determined on a case-by-case basis (11,12). For example, the 1999 Emergency Supplemental Appropriations Act provides less than 1% of the budgeted amount for livestock indemnity payments (13). Programs that provide compensation to farmers for depopulation of herds are usually for more common diseases, such as tuberculosis. For example, the Animal and Plant Health Inspection Service has implemented changes to the tuberculosis eradication program in the United States to include payment of indemnity for the depopulation of herds affected by tuberculosis (43). Furthermore, farmers may not want to invite quarantine and disease management officials onto their premises because of the perceived message it could send to the surrounding community (11).
Veterinary training in the area of foreign animal diseases has also declined because of the dwindling number of students pursuing large animal husbandry (7,11,29,35). Only 25% of veterinarians who belong to the American Veterinary Medical Association work in large animal husbandry, and the literature suggests that the declining number of veterinarians who specialize in livestock treatment is attributable to a lack of educational support and career financial incentives (7,11,29). As a result of this decline, college curricula in many veterinary schools have de-emphasized foreign animal disease education, instead focusing attention on diseases that are endemic to the United States, primarily diseases found in ''pet'' animals (11,29). Consequently, many accredited state and local veterinarians may lack the necessary tools to detect and respond to a foreign animal disease outbreak.
Lastly, a much overlooked aspect of food safety is the mind-set of many Americans (11,15,35). Americans have developed a false sense of security that is fueled by the agricultural sector's relative ''invisibility'' (11). Additionally, according to Parker (42), there is limited appreciation for the economic and social importance of agriculture in the United States. This is partly because the agricultural community (10,11,35) directly employs less than 3% of the U.S. workforce. As a result, it is easy for Americans to equate food with supermarkets and restaurants, not farms. Many Americans take for granted that their food is safe and readily available, and since agricultural safety is not a priority for many Americans, few demands are placed on the government to develop defense strategies (11,15). Americans, for the most part, have not been directly affected by a crop or livestock disaster, such as the FMD outbreak in the United Kingdom (UK), and therefore fail to realize that the downstream effect of a deliberate act of sabotage would be multidimensional, affecting many sectors of the economy and, ultimately, affecting them directly (11,15).
Taking advantage of these vulnerabilities in the agricultural infrastructure would not require a significant undertaking (10,12). There is a large menu of environmentally hardy pathogenic agents (Table 2), many of which are typically not the focus of concentrated livestock vaccinations (10,11). Of most concern are the List A animal pathogens. According to the Office International des Epizooties, an intergovernmental organization created by the International Agreement of January 1924 signed by 28 countries (each member country works to report animal diseases that it detects on its territory and then shares this information with other countries), there are 15 ''List A'' animal pathogens (12,15,42). The List A animal pathogens are agents that can be easily disseminated; these have the potential to cause high mortality, public panic, and social disruption and require special action in terms of preparedness and response planning (15,42). Since many of these pathogens are highly transmissible, there is a reduction in the number of obstacles to weaponization (11). This is important because the costs and technical difficulties associated with manufacturing disease agents for offensive purposes are frequently cited as one of the most significant deterrents to the use of biological agents (8,11). Also, since many of these disease agents cannot be passed to humans, they pose no risk of infection to the perpetrator, thus eliminating the necessity for sophisticated containment procedures and personal protective equipment (11). Furthermore, because of the scale and openness of the food infrastructure and an overall lack of security, terrorists have a number of entry points to choose from when implementing an attack (11,12).
POTENTIAL IMPACT OF AGROTERRORISM
The deliberate introduction of a disease agent, either against livestock or into the food chain, would have substantial economic, political, social, and public health repercussions (12,15,35,45). The effect of a major biological attack would result in immediate economic disruption (11,35). According to Chalk (11), there are at least three expected levels of cost associated with an agroterrorist attack. The first expected level is the economic cost associated with containment measures and the eradication of disease-ridden animals (11,50). For example, in 1983, the U.S. poultry industry suffered an outbreak of a particularly virulent strain of avian influenza that resulted in about $60 million in eradication costs and nearly $250 million in increased consumer costs (71). The second level is the ''indirect multiplier effects'' associated with both the compensation paid to farmers for the destruction of agricultural commodities and the revenue deficits suffered by both directly and indirectly related industries (11,50). During the 2001 outbreak of FMD in the UK, the British Government paid more than $1.6 billion U.S. dollars in compensation to farmers affected by the mass culling operations (2,11). The third level is the costs associated with international trade embargoes imposed by major external export partners (11). Washington State experienced a loss of about $1 million per week because of embargoes placed by Japan on imports of U.S. beef tongues after one case of bovine spongiform encephalopathy was discovered (21). Even the threat of intentional contamination could have an impact on the export market economy. For example, the Chilean grape scare of 1989, in which anti-Pinochet extremists threatened to lace fruit bound for the United States with sodium cyanide, resulted in the loss of more than $300 million U.S. dollars in revenue earnings (11,32).
A successful agroterrorist attack would have a significant political impact as well (11,43). For instance, consumer confidence in the government's ability to maintain a safe food supply would weaken significantly (6,11,14,35,43,57). Graphic images of diseased animals appearing in the media would further demonstrate our extreme vulnerability to agroterrorism and further weaken the consumer's confidence in the government's ability to protect the food supply (11). Such an attack could also elicit fear and anxiety among the public, especially if the event resulted in foodborne outbreaks or the spread of animal pathogens contagious to humans (7). The combination of these factors could potentially initiate a chain reaction of sociopolitical events that have the potential to undermine the public's trust in both state and federal governments (12). Public criticism would no doubt result from the required containment procedures, as hundreds of animals would need to be slaughtered and disposed of promptly and properly (11,15). The culling and disposal of diseased animals would receive vigorous opposition from farmers, animal rights advocates, and environmental advocates. This cohort would especially oppose the slaughter of susceptible but asymptomatic animals (11). The limited news and television coverage of the U.S. eradication of livestock has precluded the American public from seeing firsthand the effects of mass animal depopulation (11,15). According to Breeze (6), most Americans have no visual point of reference for the massive slaughter of animals that would be required for the containment of a major disease outbreak; consequently, such measures would create a major public relations challenge. Furthermore, the actual process required to adequately dispose of diseased carcasses is an important issue (11,15,35). There is no ecologically friendly manner to dispose of the large number of animals that would perish in an outbreak (11). During the 2001 FMD outbreak in the UK, more than three million animals were slaughtered. Disposal of these carcasses has the potential for long-term effects, such as groundwater contamination and the rendering of large areas of land unusable for many years (2,11,15).
WHAT ARE THE ISSUES?
Agricultural bioterrorism will have major consequences both nationally and internationally (7,11,15,35,43). The consequences of such an event could include human casualties, disruption of markets, difficulties in sustaining an adequate food supply, and loss of income and jobs for many. Experience with naturally occurring outbreaks of infectious diseases has demonstrated that no existing preventive or response system is 100% effective (18,20,22,31), and yet steps can be taken to improve the speed of detection of a naturally occurring or bioterrorism event that will, in turn, help minimize the impact. To improve our prevention and response capabilities, the United States requires better communication, enhanced surveillance capabilities, and adequate training at all levels of government and within the private sector.
Effective communication is critical in the management of an animal disease outbreak (11,25,50,53). However, several factors can often impede communication needs and efforts during times of emergency. For example, communication between the agricultural producers and regulatory agencies is important, because agricultural producers are among the first group to respond in the event of an infectious disease outbreak among animal populations (11,12). However, communications between agricultural producers and the state emergency management regulators are sometimes confusing and rudimentary and lack guidelines that clearly designate the appropriate regulatory agencies or primary or secondary personnel that need to be contacted in the event of a serious infectious disease outbreak (11).
There is also uncertainty about when to report disease-related information and to whom such reports should be made (11,21). Communication is further delayed by the unreliable, passive disease-reporting system (11,12). This system does little to promote early warning in the event of an animal disease outbreak, mainly because of fear on the part of the agricultural producers that they will be forced to carry out uncompensated depopulation measures. Since the United States has not experienced a major animal disease outbreak in almost 20 years, there have been few incentives to modify the passive disease-reporting system currently in place (11,15,43). However, the development of government indemnity plans before a crisis occurs and the use of outreach programs designed to inform farmers of whom they should contact and when they should contact them in the event of a suspected infectious disease outbreak would help improve the overall effectiveness of disease reporting on the part of agricultural personnel.
Another factor that hinders communication is the barrier between the public health and animal health communities (20,21,31,33,53,55). The 1999 outbreak of West Nile virus in New York City is a good example of how a lack of communication and organization between federal, state, and local agencies and the private sector impedes response efforts during the early stages of an infectious disease outbreak (20,31,53,55). Officials indicated that the lack of leadership in the initial stages of the outbreak and the lack of sufficient channels for communication were primarily responsible for the delay in the initial response to this outbreak (53,55). Communication and coordinated surveillance efforts between the public and animal health communities were lacking, as key public health officials were not aware of the similarities in the clinical symptoms occurring in the birds and humans until many days or weeks after the human outbreak began (20,31,53,55). Communication was further hindered because state and federal agencies within the animal health community are segregated; that is, domestic animals, such as cats and dogs, are usually the responsibility of state and local health departments, whereas state agricultural agencies are responsible for livestock, and state environmental agencies are responsible for wildlife (20). These various agencies conducted their own investigations with little or no interagency communication or sharing of information (20). Delays in diagnosis were also caused by an inability to quickly access a laboratory to test animal samples. Many veterinary laboratories will test samples only on a fee basis, and public health laboratories usually lack the capacity to test animal samples (20). As a result, veterinarians were forced to pursue many different channels to find a laboratory willing to perform additional tests on the bird samples (20). Although public health agencies are now tracking sentinel flocks down the East Coast and there is real-time reporting of human infections, several communication problems between public and animal health officials remain. For instance, many states lack a centralized communication center as well as the funding needed to staff these centers with competent professionals, such as epidemiologists, veterinarians, wildlife experts, and biologists. Additionally, the communication systems available to most rural areas, the areas of most concern during an agroterrorist attack, are outdated and do not permit real-time communication.
Another important issue is the inconsistency of messages from the media during an infectious disease outbreak. For example, during the 2001 anthrax attacks, media outlets sought statements from any potential expert, including uninformed spokespersons, and disseminated this information, some of which was inaccurate, to the general public (43,54). The lack of a coordinated media strategy and the dissemination of inaccurate information by the media, combined with a scared public that was poorly educated about anthrax, resulted in the inappropriate use of prophylaxis. This factor, combined with other factors mentioned previously, further strained public and animal health resources. Inconsistent and sometimes inaccurate messages from the media were also an issue during the 2004 to 2005 influenza vaccine shortage (71). State health officials reported several instances in which messages from the media created confusion. Health officials in California reported that local radio stations in the state were running two public service announcements simultaneously: one from the Centers for Disease Control and Prevention (CDC) that advised those aged 65 years and older to be vaccinated and one from the state that advised those aged 50 years and older to be vaccinated. The reality of bioterrorism risks means that authorities should place higher priority on risk communication to achieve an optimal response. Studies of public opinion and perception of biological weapons suggest that many Americans lack basic knowledge about pathogens and how they cause disease (43). One study found that 47% of the population surveyed did not know that anthrax does not spread person to person (43). This suggests that, despite all of the media attention on anthrax from the 2001 attacks, the public still needs basic education about anthrax. Efforts to train members of the media to communicate effectively with the public will help, but identifying spokespeople and developing messages before they are needed are important priorities. Governmental organizations in conjunction with private-sector health care clinicians should engage media and risk communication experts to proactively develop effective communication strategies that consider the message, the messenger, and the recipient.
The term surveillance is used to denote the ongoing efforts to collect, analyze, and interpret health-related data. These data can be used to rapidly detect an infectious disease outbreak, thereby improving public and animal health preparedness. Since early detection and response are critical to the rapid containment of any infectious disease, surveillance is a critical component of bioterrorist response planning (11,20,46,53). The effect of poor surveillance was demonstrated during the 1999 West Nile outbreak (20,30,31,53). For the West Nile outbreak, reporting by an alert physician was crucial to the early detection of the outbreak; however, assessments of the response infrastructure suggested that surveillance networks in many locations were not as well prepared. The lack of a common data set between human and animal health communities that would link the surveillance systems led to duplication in efforts and greatly slowed the investigation (20,31). To further compound the problem, there was also a lack of overlap between surveillance networks within the domestic animal health community. As mentioned previously, domestic animals (e.g., dogs, cats) are regulated by state and local health departments, livestock (e.g., cattle, swine) are regulated by state agricultural agencies, and wildlife (e.g., birds, deer) are regulated by state environmental or wildlife agencies (20). The key to identifying the correct source of this outbreak was a consensus that the bird and human outbreaks were related (20,31,53). However, since there was no overlap between the public and animal health surveillance networks and the surveillance networks within the animal community, public and animal health officials were not aware of the similarities in the clinical symptoms occurring in the birds and humans until many days or weeks after the human outbreak began. Once this association was made apparent, the cause of the outbreak was correctly identified as a zoonotic disease (i.e., West Nile fever). Clearly, this outbreak demonstrated the need for a collaborative outbreak surveillance network that would enable local, state, and federal agencies in both human and animal health communities to view the same data in real time. Similarly, a common data set between the human and animal health communities was also lacking during the 2003 outbreak of monkeypox (20,53). Furthermore, animal tracking, an important component of surveillance, was difficult because of poor or absent records of sale (20,53). The combination of these factors during the monkeypox outbreak also led to a duplication in efforts and greatly slowed the investigation. Surveillance was also an issue during the 2001 anthrax attacks (54). Surveillance during the anthrax attacks relied primarily on medical practitioners, some of whom lacked adequate training to respond appropriately. For example, while astute physicians in Florida identified the probable threat and acted quickly to contain and warn other citizens, thereby preventing a second death, a Washington postal worker, who later died of inhalation anthrax, was sent home from a Maryland hospital with flu-like symptoms after a coworker was admitted to another hospital with inhalation anthrax (54). Furthermore, insufficient laboratory capacity and a lack of coordination within the laboratory network led to long delays in sample processing (54).
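To make concrete what a "common data set" linking human and animal health surveillance might involve, the following is a minimal, purely illustrative sketch. The record fields, field names, and the linking rule are assumptions introduced for illustration only; they do not reflect the schema of any actual surveillance system.

```python
# Purely illustrative sketch: a minimal shared case record of the kind a
# joint human/animal surveillance network could exchange. All field names
# are assumptions for illustration, not the schema of any real system.
from dataclasses import dataclass
from datetime import date
from typing import Literal

@dataclass
class SentinelCaseReport:
    case_id: str
    reported_on: date
    jurisdiction: str                # e.g., "NY-NYC"
    host: Literal["human", "domestic_animal", "livestock", "wildlife"]
    species: str                     # e.g., "Homo sapiens", "Corvus brachyrhynchos"
    syndrome: str                    # coarse clinical picture, e.g., "encephalitis"
    lab_status: Literal["pending", "confirmed", "ruled_out"]
    reporting_agency: str            # health department, agricultural or wildlife agency

def possibly_linked(a: SentinelCaseReport, b: SentinelCaseReport) -> bool:
    """Flag human and animal cases sharing a syndrome and jurisdiction,
    the kind of cross-species signal that went unnoticed for weeks in 1999."""
    return (a.host != b.host
            and a.syndrome == b.syndrome
            and a.jurisdiction == b.jurisdiction)
```

The point of the sketch is simply that a shared record format, with agreed host, syndrome, and jurisdiction fields, is what allows agencies with separate mandates to notice that they are looking at the same event.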
Although many programs have been implemented to improve surveillance, there remain several gaps. For example, many states and cities currently develop their surveillance programs almost completely independently, which generally means that they do not learn from the efforts and experiences of others, which ultimately leads to wasted resources (54). The strain on resources is particularly important because many health departments in the United States initiate nearly all the investigations that lead to recognition of infectious disease outbreaks (54). A strain on resources or a lack of resources can lead to understaffing and other problems. Surveillance efforts are also challenged by the passive disease-reporting system for animal diseases and by the continued use of paper-based disease-reporting systems in many locations where surveillance is sporadic and inadequate, resulting in underreporting or delays in reporting disease outbreaks (12,53). And, most importantly, many of the state and local surveillance systems are designed with either human or animal health in mind, but with little or no overlap. Since approximately three of every four emerging infectious diseases that reach humans occur via transmission from animals, the animal health community should not be overlooked when designing and implementing surveillance systems. Many of the zoonotic pathogens of concern become established in wildlife before they are transmitted to humans and domestic animals.
Education and training are also important issues with regard to emergency planning and response. Since the initial detection of an infectious disease outbreak will likely occur when a physician, veterinarian, or other first responder (e.g., public health worker, extension agent, plant pathologist, biologist, laboratory technician) notices an unusual case or cluster of cases, it is important that these professionals receive adequate bioterrorism-related training to recognize the characteristic features of diseases that could represent novel infections or acts of bioterrorism (15,27,28,35,43). The priority should be the education of first responders regarding the clinical presentation of the most important disease threat agents. According to Franz (19), the most important disease threat agents are those that are highly contagious, are very stable in the environment, have the ability to rapidly evolve, or have the ability to cause enormous economic damage, such as Yersinia pestis (plague), Bacillus anthracis (anthrax), Influenzavirus A (highly pathogenic avian influenza or "bird flu"), and Aphthovirus (the agent of FMD). Education should also focus on developing an understanding, among those who work most closely with animals, of when to report a suspicious case and to whom such a report should be made. First responders from all areas of health and emergency response should receive similar training related to these issues as well as similar training regarding the importance of the timely sharing of information with other groups.
There are several outbreaks that illustrate how the lack of knowledge among first responders regarding infectious diseases can lead to a delay in diagnosis and containment. For instance, about 40 people in six Midwestern states contracted monkeypox, a life-threatening illness related to smallpox, in 2003 (53). This was the first outbreak of monkeypox in the United States, and as such, there was a lack of awareness of this nonindigenous pathogen among public and animal health care providers. Because of the lack of awareness on the part of the health care providers, testing to determine the cause of the illness took about 2 months. In such cases, early detection is critical to prevent a nonindigenous disease from becoming entrenched in our country. Another example is the 2001 anthrax attacks, which resulted in 22 cases and five deaths and cost billions of dollars to contain, decontaminate, and investigate (54). During these attacks, a lack of knowledge related to detection, surveillance, and response on the part of medical practitioners, other first responders, and private citizens made the outbreak difficult to contain. The 2004 outbreak of Escherichia coli O157:H7 at a farm animal exhibit in North Carolina also demonstrated the need for increased knowledge of infectious diseases (26,33,49). During this outbreak, several children and adults came in contact with farm animals at an animal exhibit at a county fair. Because of an inadequate understanding of disease transmission, several of the visitors contracted E. coli O157:H7, which can lead to serious, lifelong complications, such as hemolytic uremic syndrome. As a result of this outbreak, the state enacted Aedin's law, which stipulates that educational outreach programs be implemented to inform agricultural fair operators, exhibitors, agritourism business operators, and the general public about the health risks associated with diseases transmitted by physical contact with animals (26,49). Furthermore, this law requires the posting of signage informing the public of health and safety issues related to contact with animals at all petting zoos and animal exhibitions at state-sanctioned fairs.
WHERE ARE WE NOW?
Tommy Thompson, former Secretary of Health and Human Services (HHS), expressed concern about the possibility of a terrorist attack on the nation's food supply during his resignation speech by saying, "For the life of me, I cannot understand why terrorists have not attacked our food supply because it is so easy to do!" (44). Others agree that the threat is a serious problem for both imported food products and domestically produced food (8,11,15,35,43). In response to this growing threat, governmental agencies have implemented several initiatives to protect our food and agriculture infrastructures (3,5,13,14,21). One of the primary issues addressed by the government is the organization and distribution of regulatory responsibilities.
Several bills have been introduced that address some aspect of terrorism in agriculture. The following is a list of some of those bills: (i) The Agroterrorism Prevention Act of 2001 was sponsored by Republican George Nethercutt during two separate sessions of Congress but never became law. Had this bill been passed, it would have amended the federal criminal code to establish and enhance penalties for animal and plant enterprise terrorism (23). Under its guidelines, animal and plant terrorism would have been a predicate offense under the Racketeer Influenced and Corrupt Organizations Act, and the Director of the National Science Foundation would have been required to establish and maintain a national clearinghouse for information on incidents of crime and terrorism committed against or directed at any (i) animal or plant enterprise; (ii) commercial activity, because of the perceived impact of such activity on the environment; or (iii) person, because of such person's perceived connection with or support of any enterprise or activity. This act would also have authorized appropriations of $5 million to the National Science Foundation for animal and plant research security programs and grants.
(ii) The Agricultural Terrorism Prevention Response Act of 2001 was proposed during several sessions of Congress and introduced in the House of Representatives; however, it never became law. The act would have established an Interagency Agricultural Terrorism Committee to coordinate counterterrorism efforts to protect the U.S. agricultural production and food supply system (24). It would also have directed the Secretary of Agriculture to (i) strengthen cooperation with other agencies; (ii) appoint an agricultural liaison to the Homeland Security Office; (iii) establish an Industry Working Group on agricultural terrorism to develop counterterrorism measures to protect the U.S. agricultural production and food supply system; and (iv) establish related training and information programs for agricultural producers.
(iii) The Public Health Security and Bioterrorism Preparedness and Response Act (Bioterrorism Act) of 2002 was signed into law by President Bush on 12 June 2002. The Bioterrorism Act directs the Secretary of HHS to develop and implement a coordinated strategy, building upon core public health capabilities established under provisions of the act, for carrying out health-related activities to prepare for and respond effectively to bioterrorism and other public health emergencies (63). Under these new guidelines, the Secretary of HHS is supplied with new authorities to protect the nation's food supply against the threat of intentional contamination as well as other food-related emergencies. For example, Title III of the Bioterrorism Act deals specifically with agricultural security by increasing inspection capacity at points of origin, improving surveillance at ports of entry, and enhancing methods of protecting against bioterrorism. As part of this provision, the Secretary is authorized to (i) set forth reporting requirements and authorize appropriations (Section 302); (ii) permit an officer or qualified employee of the U.S. Food and Drug Administration (FDA) to order the temporary detention of any article of food if the officer or qualified employee finds, during an inspection, examination, or investigation, credible evidence or information indicating that such article presents a threat of serious adverse health consequences or death to humans or animals (Section 303); (iii) provide for the debarment of importers for repeated or serious food import violations (Section 304); (iv) require that any facility (domestic and foreign) engaged in manufacturing, processing, packing, or holding food for consumption in the United States be registered with the Secretary (Section 305); (v) require that access be permitted to all records needed to assist the Secretary in determining whether food is adulterated and presents a threat of serious adverse health consequences or death to humans or animals (Section 306); (vi) require that all food importers give the Secretary specified prior notice (including specified information about the source of food) of the importation of any food for the purpose of enabling the food to be inspected (Section 307); (vii) require that the owner or consignee of food refused admission into the United States, but not ordered destroyed, affix to the container of the food a label that clearly and conspicuously bears the statement: United States: Refused Entry (Section 308); (viii) prohibit an importer from port shopping with respect to food that has previously been denied entry (Section 309); (ix) provide notice regarding threats associated with a shipment of imported food to the appropriate states (Section 310); (x) allocate funds to states, territories, and Indian tribes to assist with the costs of enhancing food safety efforts as well as costs associated with examinations, inspections, and investigations where a credible threat of adulterated food is present (Sections 311 and 312); (xi) coordinate zoonotic disease surveillance (Section 313); and (xii) commission officers and qualified employees of other federal departments or federal agencies to conduct examinations and inspections for the Secretary under the Federal Food, Drug, and Cosmetic Act. 
Subtitle B of Title II directs the Secretary of Agriculture to establish and maintain a list of each biological agent and each toxin that the Secretary determines has the potential to pose a severe threat to animal or plant health or to animal or plant products. Subtitle D of Title II amends the federal criminal code provisions concerning the possession of listed biological agents and toxins to provide that whoever (i) transfers a select agent to a person whom the transferor knows or has reasonable cause to believe is not registered as required shall be fined, imprisoned for not more than 5 years, or both; and (ii) knowingly possesses a biological agent or toxin where such agent or toxin is a select agent for which such person has not obtained a required registration shall be fined, imprisoned for not more than 5 years, or both.
(iv) The Homeland Security Act of 2002 was enacted as a direct result of the 11 September 2001 terrorist attacks (66,73). The primary purpose of the Homeland Security Act was to create the Department of Homeland Security (DHS). The DHS was established under Section 101 of the guidelines as an executive department of the United States, headed by a Secretary of Homeland Security appointed by the President, to (i) prevent terrorist attacks within the United States; (ii) reduce the vulnerability of the United States to terrorism; (iii) minimize the damage, and assist in the recovery, from terrorist attacks within the United States; (iv) carry out all functions of entities transferred to the DHS; (v) ensure that the functions of the agencies and subdivisions within the DHS that are not related directly to securing the homeland are not diminished or neglected except by a specific Act of Congress; (vi) ensure that the overall economic security of the United States is not diminished by efforts, activities, and programs aimed at securing the homeland; and (vii) monitor connections between illegal drug trafficking and terrorism, coordinate efforts to sever such connections, and otherwise contribute to efforts to interdict illegal drug trafficking. Under Section 202 of the guidelines, all federal agencies must promptly supply the Secretary of the DHS with (i) all reports, assessments, and analytical information related to threats of terrorism and to other responsibilities assigned to the Secretary; (ii) all information concerning the vulnerability of the U.S. infrastructure or other U.S. vulnerabilities to terrorism, whether or not it has been analyzed; (iii) all other information related to significant and credible threats of terrorism, whether or not it has been analyzed; and (iv) such other information or material as the President may direct. And, as part of the largest governmental reorganization in 50 years, the DHS assumed a number of government functions previously conducted by other departments, such as agricultural border inspection, functions under the U.S. Customs Service, and functions under the Transportation Security Administration. Possession of the Plum Island Animal Disease Center in New York was also transferred to the DHS. As a result of the reorganization, the DHS is now the third largest cabinet department in the U.S. Federal Government, after the Department of Defense and the Department of Veterans Affairs.
(v) Homeland Security Presidential Directive-9 was signed by President Bush in 2004 to establish a national policy to defend the agriculture and food system against terrorist attacks, major disasters, and other emergencies by requiring actions in the following areas: (i) awareness and warning; (ii) vulnerability assessments; (iii) mitigation strategies; (iv) response planning and recovery; (v) outreach and professional development; and (vi) research and development (40,68). Homeland Security Presidential Directive-7 had earlier made the DHS responsible for coordinating the overall national effort to enhance the protection of the critical infrastructure and key resources of the United States (67). An important issue in Homeland Security Presidential Directive-9 is the focus on veterinary medicine as a critical component of food security (41,68). For example, the policy calls for the creation of a national stockpile of animal drugs and vaccines to better respond to serious animal diseases; grants to veterinary colleges for expanding training in exotic animal diseases, epidemiology, and public health; and inclusion of veterinary diagnostic laboratories in national networks of federal and state laboratories. Ultimately, the goal of implementing this directive is the development of a robust, comprehensive, and fully coordinated surveillance and monitoring system for public and animal health. The policy also directs the Departments of Agriculture, Health and Human Services, and Homeland Security to establish internships, fellowships, and other postgraduate opportunities for professional development and specialized training in agriculture and food protection that provide for homeland security professional workforce needs. If implemented in its entirety, this document will have a long-standing and positive influence on animal health and animal agriculture.
As a result of the above-mentioned legislation, federal and state agencies have implemented initiatives to address agricultural and food defense issues. The following is a list of some of the agencies and their initiatives: (i) Since its inception, the DHS has focused its efforts on a six-point agenda: (i) to increase overall preparedness; (ii) to create better transportation security systems; (iii) to strengthen border security; (iv) to enhance information sharing; (v) to improve financial management; and (vi) to realign the DHS organization to maximize mission performance (64,72). The National Preparedness System was developed within the DHS as part of this agenda (14). This system was designed to provide a comprehensive assessment of national preparedness and has six basic components: (i) the National Preparedness Goal, which sets a general goal for national preparedness, identifies the means of measuring such preparedness, and establishes national preparedness priorities; (ii) 15 planning scenarios set forth as examples of catastrophic situations to which nonfederal agencies are expected to be able to respond; (iii) the Universal Task List, which identifies specific tasks that federal agencies and nonfederal agencies would be expected to undertake; (iv) the Target Capabilities List, which identifies 36 areas in which responding agencies are expected to be proficient in order to meet the expectations set out in the Universal Task List; (v) the framework through which federal agencies operate when a catastrophe occurs; and (vi) the National Incident Management System, which identifies standard operating procedures and approaches to be used by respondent agencies as they work to manage the consequences of a catastrophe. According to Bea (4), the National Preparedness System represents the most comprehensive effort taken to develop an emergency preparedness and response system. Ultimately, the National Preparedness System is intended to increase federal involvement in emergency preparedness and response by providing policy makers and practitioners with the ability to track and improve readiness, locate additional resources as needed, and make informed decisions regarding risk management. The National Biosecurity Integration System is being updated by the DHS to enhance information sharing (14). These updates, referred to as the National Biosecurity Integration System Lite, will integrate data from the CDC, FDA, USDA, and DHS Science and Technology Directorate. National Biosecurity Integration System Lite is intended to enable early detection and characterization of biological trends, provide situational understanding to guide response, and enable the sharing of information among its partners. The Vulnerability Identification Self-Assessment Tool and the National Asset Database were also developed by the DHS as a way of improving overall preparedness (14). The Vulnerability Identification Self-Assessment Tool will allow a self-assessment of vulnerabilities by various sector participants. The National Asset Database is a secure, Web-based application for the exchange of unclassified asset information and is designed to integrate with other, related data repositories. Vulnerability Identification Self-Assessment Tool assessments will be linked to the National Asset Database and will provide a means to assess baseline security system effectiveness against a base set of threat scenarios.
Ultimately, the goal of this system is to provide data and tools to assess and quantify risk according to the variables of threat, vulnerability, and consequence.
(ii) HHS is the U.S. Government's principal agency for protecting the health of all Americans and providing essential human services. Eleven operating divisions, including the National Institutes of Health, the FDA, and the CDC, administer its programs. The Bioterrorism Act of 2002 supplied the FDA with more authority to protect the nation's food supply against the threat of intentional contamination as well as other food-related emergencies (48,63,70). FDA rules issued in accordance with the provisions in the Bioterrorism Act can be found at http://www.fda.gov/oc/bioterrorism/bioact.html. For example, Section 305 of the Bioterrorism Act outlines the rules for the registration of companies involved in the food system. According to the new rule, all domestic and foreign facilities that manufacture, process, pack, or hold food for human or animal consumption in the United States were required to register with the FDA no later than 12 December 2003. Registration consists of providing information, such as the firm name, address, product brands, and categories. Farms, restaurants, retail food establishments, nonprofit establishments that prepare or serve food, and fishing vessels not engaged in processing are exempt from this requirement. Section 306 outlines the rules for record keeping and maintenance. This section gives the FDA authority to require that all food processors keep production and distribution records. Facilities will be required to make these records available within 24 h in the event of a suspected food safety problem. Farmers, retailers, restaurants, and other businesses dealing directly with the public do not have to keep records; however, they are required to maintain records on where retail products were obtained. Section 307 deals specifically with prior notice of food imports. Under the new guidelines, all food importers must give advance electronic notification to the FDA prior to the importation of food. The notice must include a description of the article, the manufacturer, the shipper, the grower, the country of origin, the country from which the article is shipped, and the anticipated port of entry. Other issues outlined in the Bioterrorism Act include administrative detention, debarment for persons convicted of conduct related to the adulteration of imported food, and allocation of grant funds to assist with the costs of enhancing food safety efforts and the costs associated with taking action when a credible threat of adulterated food is present. As mentioned previously, the overall purpose of these new guidelines is to give the FDA the authority and information needed to protect our food supply. Access to records is intended to better facilitate the tracking and control of food products suspected of being contaminated. The intended purpose of prior notification is to ensure that all imports comply with U.S. regulations and that suspect shipments are identified and inspected. Additionally, through the use of food facility registration, the FDA will have a more accurate inventory of its regulatory domain, which will further enhance its ability to trace intentionally and unintentionally contaminated food.
(iii) Federal responsibilities to protect the agricultural infrastructure against acts of terrorism fall primarily to the USDA. Within the USDA, the Animal and Plant Health Inspection Service and the Food Safety and Inspection Service have the primary authority to protect agriculture and ensure the safety of meat, poultry, and egg products, while the Agricultural Research Service conducts research and development of countermeasures and diagnostic tools. Under the guidance of the USDA agencies, several important agrosecurity initiatives have been implemented. For example, the National Animal Identification System, a collaborative state-federal-industry partnership, was implemented as a means to standardize and expand animal identification programs and practices to include all livestock species and poultry (61). The National Animal Identification System is currently being developed, under the guidance of the USDA, for all animals that will benefit from rapid trace backs in the event of a potential disease outbreak. Many species can already be identified through some sort of identification system, but these systems are not consistently used across the United States. Under such a system, all states will operate under national standards to eliminate inconsistencies and overlap. The National Animal Identification System will integrate systems currently in place, such as premise identification, animal identification, and animal tracking systems. Eventually, it will provide health officials with the capability of identifying all livestock and premises that have had direct contact with the disease of concern within 48 h after the discovery of a potential outbreak. The USDA hopes to make animal identification mandatory in 2008; currently, the program is voluntary. The National Center for Animal Health Surveillance was created by the USDA in 2004 as a result of the restructuring of the Center for Animal Health Monitoring in an effort to strengthen the U.S. animal health surveillance system (59). The National Center for Animal Health Surveillance is one of three centers within the Veterinary Services' Center for Epidemiology and Animal Health and is organized into two units, the National Surveillance Unit and the National Animal Health Monitoring System. The National Surveillance Unit coordinates activities related to U.S. animal health surveillance and addresses recommendations regarding surveillance. The National Animal Health Monitoring System is responsible for collecting, analyzing, and disseminating data on animal health, management, and productivity across the United States. Information sharing was enhanced with the development of a USDA liaison position in the National Counterterrorism Center (14). The National Counterterrorism Center was established by the President in August 2004 to serve as the primary organization in the U.S. Government for integrating and analyzing all intelligence pertaining to terrorism and counterterrorism and for conducting strategic operational planning by integrating all instruments of national power. The USDA has also implemented several programs designed to increase education and skills among first responders (58,60). For example, the USDA is working with other federal and state agencies to outline a new education and outreach plan, which includes implementation of hazard analysis and critical control point (HACCP) principles and the development of a new Website for bovine spongiform encephalopathy information (58,60).
The goal of these outreach efforts is to inform producers and affiliated industries of the surveillance goals and to encourage the reporting of suspect or targeted cattle on farms and elsewhere.
Other initiatives include the following: (i) The Memorandum of Agreement for an Integrated Consortium of Laboratory Networks, a collaborative effort between the FDA, CDC, USDA, DHS, and U.S. Environmental Protection Agency, was initiated to integrate human and animal laboratory networks (4). The Integrated Consortium of Laboratory Networks initially grew from the collaborative work of the U.S. Environmental Protection Agency and the CDC and was expanded under this agreement to include other laboratory networks, such as the USDA's National Animal Health Laboratory Network and the Food Emergency Response Network, a collaborative effort between the FDA and the USDA's Food Safety and Inspection Service. The long-term goal of this collaborative effort is to provide early detection and effective consequence management of acts of terrorism and other events that involve a variety of agents or more than one segment of the nation (e.g., humans, wildlife, domestic pets, plants, food, the environment).
(ii) The Strategic Partnership Program Agroterrorism Initiative was initiated in 2005 (17). Under this initiative, the DHS, USDA, FDA, and Federal Bureau of Investigation will collaborate with private industry and the states to help identify sector-wide vulnerabilities and develop mitigation strategies to reduce the threat of an agroterrorist attack. To facilitate the implementation of this initiative, a series of site visits at multiple food and agriculture production facilities will be conducted to validate or identify vulnerabilities at each specific site and in the sector as a whole. The information gathered from these site visits will be used to develop mitigation strategies and to compile lessons learned.
(iii) A cooperative agreement between the National Association of State Departments of Agriculture, the USDA, the FDA, and the DHS was drawn up to integrate federal and state response plans for food and agricultural emergencies (64). Implementation of this cooperative agreement will occur in the following three phases. (i) A workgroup consisting of federal, state, and local officials will gather information about existing state emergency response systems and how food and agricultural safety and security emergencies will be handled within the various states. (ii) The information gathered during phase 1, which will include state and local participation, will be used to develop an interagency response plan, to conduct tabletop exercises, to pilot test the functionality of the emergency response plan, and to refine the plan on the basis of the lessons learned and other input. (iii) The information that is gathered during phase 2 will be used to develop guidelines for federal food and agricultural regulatory agencies to cooperate with state and local emergency response efforts. Ultimately, the goal of this initiative is to deliver federal assistance more quickly and appropriately in support of local response and recovery efforts. The Food Emergency Response Plan Template is available on the National Association of State Departments of Agriculture Website (37).
Universities have taken a lead role in the area of outreach and professional development. Numerous training programs and educational Websites have been developed to address bioterrorism education. As mentioned previously, the shortage of trained personnel in state and local public and animal health departments and laboratories is a major issue. Barriers to finding and hiring adequately trained personnel include noncompetitive salaries and a general shortage of people with the necessary skills to respond to a catastrophic event. The National Center for Food Protection and Defense, based at the University of Minnesota, was established as a Homeland Security Center of Excellence in 2004 (38). The National Center for Food Protection and Defense, a multidisciplinary and action-oriented consortium, addresses vulnerabilities in the nation's food system by (i) making significant improvements in supply chain security, preparedness, and resiliency; (ii) developing rapid and accurate methods to detect incidents of contamination and to identify the specific agent(s) involved; (iii) applying strategies to reduce the risk of foodborne illness due to intentional contamination in the food supply chain; (iv) developing tools to facilitate recovery from contamination incidents and resumption of safe food system operations; (v) rapidly mobilizing and delivering appropriate and credible risk communication messages to the public; and (vi) delivering high-quality education and training programs to develop a cadre of professionals equipped to deal with future threats to the food system. The National Center for Foreign Animal and Zoonotic Disease Defense, based at Texas A&M University, was also established as a Homeland Security Center of Excellence in 2004 (39). Other core members of the National Center for Foreign Animal and Zoonotic Disease Defense include the University of California at Davis, the University of Southern California, and the University of Texas Medical Branch. The National Center for Foreign Animal and Zoonotic Disease Defense was designed to produce four general products: (i) specific biological research products and outcomes; (ii) a robust database and models that can be used to assist in making decisions, predicting needs, and testing outcomes; (iii) application of the models to specific needs of the Department; and (iv) expanded professional resources directed to foreign animal and zoonotic diseases, all of which are directly relevant to countering the threat of agricultural bioterrorism. The National Agricultural Biosecurity Center was established by Kansas State University in 1999 (36). The goal of the National Agricultural Biosecurity Center is to coordinate academic agricultural biosecurity activities with federal, state, and local agencies and the public health community through the following methods: (i) response planning and exercises; (ii) education and awareness; (iii) a syndromic surveillance program; (iv) international initiatives; and (v) efforts on the part of the Biosecurity Research Institute. The Florida Department of Agriculture and Consumer Services at Florida State University has implemented the State Agricultural Response Team, which is designed to coordinate disaster response for animals and agriculture (52). The State Agricultural Response Team utilizes the skills and resources of various agencies to support the county, regional, and state emergency management efforts and incident management teams.
Its mission is to provide Floridians with the necessary training and resources to enhance all-hazard disaster planning and response for animal and agricultural issues. The South Central Center for Public Health Preparedness at the University of Alabama at Birmingham, in partnership with the CDC, the Health Resources and Services Administration, and the universities and public health departments in other states, also provides public and animal health training related to agroterrorism (51). The South Central Center for Public Health Preparedness held its first Agricultural Security conference in June 2005. The focus of this conference was to educate public health, agricultural, regulatory, and law enforcement communities about zoonotic diseases and to foster a multiagency, multidisciplinary dialogue to ensure the safety of the food supply for all Americans. The South Central Center for Public Health Preparedness also offers several training and preparedness center courses, online courses, Webcast courses, and CD-ROMs that focus on agriculturally related issues (34). To expand the pool of health care professionals able to respond to infectious disease outbreaks, the University of Alabama at Birmingham is providing online continuing education and information for rare and emerging infections and potential category A bioterrorist agents (56). And, to help expand the veterinary workforce, the Association of American Veterinary Colleges has lobbied for support of the Veterinary Workforce Expansion Act, which has not yet been implemented (14). The Veterinary Workforce Expansion Act is a loan repayment program for veterinarians who agree to work in "shortage situations." Such shortage situations are determined in a competitive process that is requested by state veterinarians, Area Veterinarians in Charge, and Food Safety and Inspection Service District Managers. The goal of this program is to expand capacity and services at existing schools, including teaching laboratories, research facilities, classrooms, and administrative space.
FINAL THOUGHTS
These programs, along with many others, help increase emergency preparedness and response. However, there has been little or no evaluation of the effectiveness of these initiatives. Response efforts during Hurricane Katrina would appear to indicate that there are still major gaps in the emergency response system. According to the White House report on response efforts, the lack of communication and situational awareness has had a debilitating effect on the federal response to emergency situations (69). The response to Hurricane Katrina illustrated greater systemic weaknesses inherent in the nation's preparedness system, such as a lack of expertise in the area of response and recovery as well as insufficient planning, training, and interagency coordination. After the response to Hurricane Katrina was reviewed and analyzed, 17 specific areas of weakness were identified: (i) unified management of the national response; (ii) integrated use of military capabilities; (iii) communication; (iv) logistics and evacuations; (v) search and rescue; (vi) public safety and security; (vii) public health and medical support; (viii) human services; (ix) mass care and housing; (x) public communications; (xi) critical infrastructure and impact assessment; (xii) environmental hazards and debris removal; (xiii) foreign assistance; (xiv) nongovernmental aid; (xv) training, exercises, and lessons learned; (xvi) homeland security professional development and education; and (xvii) citizen and community preparedness. All of the areas identified in this report are critical to any emergency response plan and were the target of many of the above-mentioned initiatives. However, as indicated by this report, several gaps remain. In the area of agricultural security, these issues are also important, and, as such, there remains a need to improve many areas. Education and communication seem to be the areas of greatest need. For example, there is a need for increased educational support and career financial incentives in order to attract suitable people to positions within the agricultural community. There is also a need for better coordination between federal and state agencies and the private sector. To improve coordinated efforts with the private sector, educational campaigns need to be developed to increase public awareness of potential catastrophic events, such as major animal disease outbreaks, as well as response protocols to such events. A better understanding of response efforts on the part of the private sector is essential to ensure adequate preparedness for major disasters. Since relationships are critical during a crisis situation, it is also important that federal and state agencies communicate effectively with the private sector. To ensure that communication efforts are effective, it is important that these agencies work together with the private sector to develop strategic plans, revise existing plans, and evaluate these plans through the use of tabletop exercises. And, finally, there are poorly developed educational strategies for allaying public fears concerning catastrophic events. Terrorists can use the resulting fear and anxiety to their advantage without having to carry out indiscriminate civilian-directed attacks.
"year": 2007,
"sha1": "286acab4c8982ad32e88cd5501890db3d8add130",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://meridian.allenpress.com/jfp/article-pdf/70/3/791/1679544/0362-028x-70_3_791.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "20187f312c99a626760807fca4cbacd385f1005c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
Socio-Cultural Aspect of Design and Construction of Modern Orthodox Churches
The article analyzes socio-cultural aspects of the design and construction of modern churches in Russia. For many residents of the country today, religion is a freely manifested spiritual, moral and normative framework that shapes how society acts, and most residents identify with one of various faiths. The professional community continues to debate the search for a new image of the modern church and the continuous development of the system of Orthodox Church buildings. Because design as a field of professional activity is always mediated by a specific type of consumer and by the features of production, the problem can be viewed in several planes. The historical approach, reflected in conferences and seminars held in the capital and the regions, discusses the church building's relation to traditions and historical models without an active attitude to the social environment. The theological approach focuses on the spiritual dimension of church activity. The spatial approach addresses creative activities related to the design and construction of Orthodox religious sites; here the spatial characteristics of the church building and its surroundings are central, and architects, designers and planners solve the problem of integrating Orthodox aesthetics into the urban environment.
Introduction
The relevance of the research stems from the processes of renewal and significant change in the Russian Federation regarding the construction of temple architecture. As the needs of society grow and its social and cultural life intensifies, the infrastructure of the urban environment is dynamically transformed [1]. Architecture and design provide the material embodiment of these changes. The city, as a special form of social organization, acts and grows in accordance with the regularly changing needs of its existence. One of these needs is the religious need. A significant proportion of the population performs religious rituals, visits the temple with varying frequency, celebrates religious holidays, and uses church symbols and objects in everyday life [2]. The revival of spirituality since the 1990s has been accompanied by an active process of reconstruction, revival and construction of churches, temples and monasteries.
In our multi-confessional country, an important place is given to Orthodox church architecture. This leads to the gradual introduction of temples and temple complexes into the structural composition of urban space. This process does not always go smoothly. In the socio-cultural environment, there are problems with the allocation of land for construction and with the location of temples in relation to residential buildings and urban recreational areas. As a result, various kinds of public protests arise. Researchers name a number of reasons for these processes, one of which is the underdevelopment of the environmental approach to the organization of urban space. According to many experts in Russia, the practice of architectural design and the environmental organization of cities remain separate processes. Fifteen years ago, S. A. Stepanova (2006) noted that environmental architectural design was in its infancy, had no methodology of its own and, most importantly, was not mandatory. Little has changed since then [13].
The Orthodox Church is expanding and improving, demonstrating its responsiveness to urban development trends. This is confirmed by statistics on the number of churches, chapels and temples built in the country over the past thirty years.
Religious buildings and complexes are the objects of in-depth and multilateral research, and a number of works consider Orthodox architecture from different angles. Already in the nineteenth century, Russian scholars (E. E. Golubinsky, I. A. Blagoveshchenskiy, V. V. Zverinsky, and others) identified the essential content parameters of objects of religious architecture by temporal and spatial criteria and traced the differences and similarities among them. The works of P. A. Florensky, N. F. Krasnoseltsev, and I. D. Mansvetov consider church architecture from the point of view of the process of worship.
Viewed through the lens of the consumer, the problem reveals its heterogeneity: permanent members of parishes who regularly visit the church; believers who do not attend services or who attend only at significant events in their lives; and non-believers who may live in close proximity to a temple or encounter it as an object of historical and cultural heritage. In any case, the temple as an object of architecture is ultimately connected with a person, the consumer of architecture. Design research here concerns the sensory reactions to the aesthetic influence of the architectural form of religious objects, their social properties, and their convenience.
Viewed through the features of production, one can turn to modern religious and design practice. On the one hand, any activity related to the design and construction of religious objects is controlled in accordance with the Urban Planning Code of the Russian Federation [19]. This includes the preparation of project documentation based on the customer's brief, the results of engineering surveys and the urban development plan of the land plot, in accordance with the requirements of technical regulations and technical conditions, including any permission to deviate from the maximum parameters of permitted construction or reconstruction. It also includes approval of the project documentation by the customer in the presence of a positive conclusion of the state expert review, as well as assessment of the conformity of the design documentation and the engineering survey results to the requirements of technical regulations, including sanitary-epidemiological and ecological requirements, requirements for the state protection of objects of cultural heritage, and fire and other safety requirements.
On the other hand, only people or companies with special powers and competencies can carry out this activity. Dioceses try either to attract them into their own diocesan divisions or to separate them into a distinct legal entity. This structure concludes contracts, prepares estimates, orders and evaluates projects, organizes construction support and technical supervision, and cooperates with a number of design bureaus.
Methods and materials
Information analysis of sources included searching for primary sources of information in combination with a preliminary study of their content, which is reflected in the literature review section of the study. Having obtained objective data on the problem under study, we were able to determine the goals and directions of our research work and to adopt other scientific methods, such as design research. David de Vaus (2006) points out that the purpose of a study is to collect data that can convincingly answer research questions [18]. Environmental design rests on construction, art and architecture, and it is impossible without design research and without knowledge and methods from other fields: psychology, sociology, marketing, etc. [12]. Turning to such a sensitive topic as religious activity, we are confronted with its closed nature. According to many experts, mass surveys and statistical analyses of data reflecting the volume of religious offerings in the Russian Orthodox Church are either not conducted or are completely closed. We therefore decided to analyze publicly shared content.
Results
Today, both state structures and private architects design churches. The most authoritative, however, are several firms that specialize in church architecture: the "Association of Restorers. Workshops of Andrey Anisimov", the workshops of St. Daniel's Monastery, the Patriarchal Architectural and Restoration Center in the Trinity-Sergius Lavra, and the Architectural and Art Center of the Moscow Patriarchate (ARCHCHRAM). Each of these firms has its own conceptual approach to the style of the modern temple.
Some turn for inspiration to examples of ancient Russian architecture: the Pskov-Novgorod and Vladimir-Suzdal schools, the Orthodox tradition of tent churches and the neo-Russian style of the late XIX to early XX centuries. Others are adherents of all stylistic trends of ancient Russian architecture, from the regional schools of pre-Mongol Russia to the all-Russian artistic style of the XVI-XVII centuries. Still others are guided by the traditional five-domed composition and hipped bell towers or tend toward Russian-Byzantine motifs, taking into account the wishes of the customer as well as regional and national traditions [7].
Comparing the statistics of events shows that construction activities far exceed the others in scope and intensity. According to information posted on the official website of the Russian Orthodox Church, there are today 38,649 churches or other prayer rooms where the Divine Liturgy is celebrated. Metropolitan Hilarion of Volokolamsk notes the dynamics of church construction over the past 32 years, from 1988 to 2020: from 6.5 thousand temples to almost 40 thousand. These figures apply to the entire Russian Orthodox Church, including its parishes abroad.
Discussion
The socio-cultural aspect of the design and construction of Orthodox churches can be traced in certain types of the producer's activities, each of which requires research into its own issues and resources. Today, design is also singled out among production resources (human resources, natural resources and the materials produced from them, capital resources), and design research is recognized worldwide as a modern research method.
Design is currently a highly professional service, carried out through the research, formation and development of concepts, devices and requirements for improving the features and appearance of products, to the mutual benefit of consumers and manufacturers [11].
The mission of design research is the analysis and study of man-made objects, the scientific study of the artificial [17], and of how this is expressed in the academic sciences or in industrial organizations.
The interaction of practical and research activities in design has become a topic of discussion in both academic and industrial communities [10]. In addition, in the 1990s the concept of a religious market was proposed: Stark and Iannaccone introduced the terms "religious firm", "religious economy", "religious product", and so on. The religious economy, like the commercial economy, is a market of interaction between consumers and the various firms offering religious products. Like other products, they are produced, selected, and consumed.
Design is directly related to the culture of production and consumption and is not realized outside their space. Within our problem, it acts as an intermediary between the sacred and the ordinary. It is known that organizing production requires resources, both material and non-material, and we have included organizational measures in the latter category. During this period (1990-2020), design and production workshops were organized whose work was aimed at the design, construction and decoration of temple buildings. Within them, departments of architectural design, design and art workshops, public relations and media departments, etc. began to function [9,20].
From the standpoint of our design research, in analyzing scientific events we noted the importance of the concept of the consumer value of the product produced. In marketing theory the concept of consumer value is quite broad. It includes properties of the product (its quality level and reliability, the functions it performs, its useful lifetime, its aesthetic characteristics, the quality and duration of service, brand recognition, and the qualifications of employees when services are provided). Almquist, Senior and Bloch characterize values as reflecting certain needs and divide them into four basic categories: functional, emotional, improving the quality of life, and influencing the social environment [16].
If we regard such a product, a temple structure, as a desired object meeting certain needs, we can identify the values that representatives of the ROC would like to see in it. Functional factors reflect such indicators of consumer value as the presence of one church for every thousand Orthodox believers, as well as an orientation toward the economic feasibility of construction. Emotional factors reflect the embodiment of the high ideals of Russian ecclesiastical art in stone and wood, respect for the appearance of the buildings being erected, and rejection both of modernist innovation and of blind copying of old models. Factors that improve the quality of life and affect the social environment appear in the possibility of building churches on an accelerated schedule using modular structures, on the principle of walking distance (15 minutes) to the church building. At the same time, church capacity is calculated from the number of believers who regularly attend services, according to sociological research. This approach resembles the principle of siting primary schools, gas distribution stations or bank branches [6,8].
Creative events are presented in the form of competitions, exhibitions, and presentations. Almost 30 years ago, a competition was announced for the design of a temple in Moscow in honor of the 1000th anniversary of the Baptism of Rus (1999). Among the more recent events we singled out the architectural competition "Project of an Orthodox Church with a capacity of 300, 600 and 900 people with a parish complex" (2015-2016), the review competition for projects and buildings of modern Orthodox churches (2014), the competition for the image of a modern Orthodox church (2013), and others. Temple architecture projects by individual authors or groups have been regularly displayed at exhibitions within the international festival "Architecture" since 1994, as well as at architectural festivals and days of architecture still held in Russian cities: Moscow, Vologda, Nizhny Novgorod, Yekaterinburg, Rostov-on-Don, Krasnodar. Creative competitions were designed to attract new forces and to find creative, conceptual, functional and typological solutions. Active construction activity indicates that these ideas and concepts are approved and successfully implemented [5,14].
As a result, designers solve the problem of integrating Orthodox aesthetics into the urban environment. The perception of a modern temple in urban space is tied, on the one hand, to a certain utilitarian function and, on the other, to its role as a carrier of meaning: a symbol or sign with a particular socio-cultural content. We have already noted that satisfying religious needs is one of the functions of the city, and its fulfillment matters not only for believers but for the city as a whole. After all, temples are not only objects of pilgrimage or religious tourism but also part of the national identity of the country, and they affect the formation of cultural identity [15].
Designers solve not only aesthetic but also functional problems. If we regard a temple structure as a special product, a desired object meeting certain needs, we can identify the values that architects and builders would like to see in it. Functional factors reflect such indicators of consumer value as the presence of functional areas not traditional for temple buildings: a confessional, changing rooms, a refectory, a kitchen, a meeting room, offices, etc. This includes the versatility of the temple complex, a form that is new in itself and unknown to the history of Russian architecture, and architectural and theological solutions that place volumes not horizontally on the plane but vertically, in the same volume as the temple, on a small plot. It also includes the application of new design solutions and modern finishing materials, for example in the standardized construction of religious objects. There is no consensus in the architectural community as to whether standard construction is applicable to temple architecture at all. Many believe that standardized architecture, competently approached, can be interesting, because it depends on many factors: the natural landscape, the urban planning situation, planning solutions, materials, decorative design within the sign and symbolic system, and site selection such that the religious building remains the traditional central urban element, with streets planned so that their perspective is closed by a view of the temple [3,21].
Emotional factors reflect aesthetic concepts, ways of organizing internal space, and the organization of space as an event. Factors that improve the quality of life and influence the social environment are manifested in the integration of Orthodox aesthetics into the urban environment and the entry of old architectural forms into the modern city. The products of scientific and creative activity include the introduction of the normative document SP 31-103-99 "Buildings, structures and complexes of Orthodox churches" (2000). Kesler developed a detailed guide to this document, analyzing the common theoretical foundations and practice of church construction, and project documentation with respect to technical solutions, materials and features of interior decoration. The magazines "Temple", "Lamp" and "Church Art and Archeology" (2001) belong among these products as well.
Conclusions
Thus, the socio-cultural aspect of the design and construction of Orthodox churches in modern Russia, and the conferences and seminars held in the capital and the regions, reflect the historical approach. In the historical approach to church building, questions of its relation to traditions and historical models are discussed, without an active stance toward the social environment. The organizational measures we have outlined reflect sociological and theological approaches. In the sociological approach, the social processes around the Church become central. Beyond the traditional ones, our research links them to the restoration and construction of iconic architectural objects, and to the creation and development of documentation regulating the activities of the art-criticism commission and expert advisory councils. The theological approach focuses on the spiritual dimension of Church activity: on the processes of creating an Orthodox politics, ideology, economy, culture, legislation, and statehood.
Creative activities related to the design and construction of Orthodox places of worship that solve problems of spatial perception are an important aspect of shaping the modern urban environment. The spatial characteristics of the church building and its surroundings are the central compositional object in the formation of certain territories. Architects, designers and planners solve the problem of | 2021-05-07T00:04:29.598Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "9f49fc2f5bf4b71fb6ff0e5ffe742cd7cddb938f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1079/4/042090",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ffdf59983818d60fde977e5bf3ca583da101b0d5",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Physics",
"Sociology"
]
} |
44617209 | pes2o/s2orc | v3-fos-license | Rapid communications Observed association between the HA1 mutation
Infection with the recently emerged pandemic influenza A(H1N1) virus causes mild disease in the vast majority of cases, but sporadically also very severe disease. A specific mutation in the viral haemagglutinin (D222G) was found with considerable frequency in fatal and severe cases in Norway, but was virtually absent among clinically mild cases. This difference was statistically significant and our data are consistent with a possible causal relationship between this mutation and the clinical outcome.
The 2009 influenza A(H1N1) pandemic has been characterised by mild and self-limiting disease in the overwhelming majority of cases. However, severe and fatal cases, many of them with primary viral pneumonia, have been occurring in age groups where such clinical outcomes are very rarely seen in seasonal influenza [1,2]. It is important to better understand what viral and host-related factors determine this dichotomy.
Genetic characterisation of clinical specimens
As part of the intensified surveillance carried out during the current influenza pandemic, the national reference laboratory for human influenza at the Norwegian Institute of Public Health collected a large number of respiratory specimens from verified and possible cases of pandemic influenza. In the present study we analysed 61 respiratory specimens from severe and fatal cases that occurred between July and December 2009, as well as from 205 cases with mild clinical outcomes collected between May 2009 and January 2010. Genetic characterisation was performed using conventional sequencing, or with a pyrosequencing assay subsequently developed to detect the particular mutations described below, which facilitated investigation of a large number of specimens.
Here we report the occurrence of an amino acid substitution, aspartic acid to glycine in position 222 (D222G) in the HA1 subunit of the viral haemagglutinin, in clinical specimens from 11 out of 61 cases analysed in Norway with severe outcome. Such mutants were not observed in any of the 205 mild cases investigated (Table), thus the frequency of this mutation was significantly higher in severe (including fatal) cases (p<0.001, Fisher's exact test, two-sided) than in mild cases. D222G mutants were detected throughout the sampling period, from the first recorded severe cases in July until early December. The frequency of another substitution in the same position, D222E, did not differ significantly between mild and severe cases (p=0.772). Yet another substitution, D222N, was observed in a very few cases (n=4), and at a higher rate than expected among severe cases (three of four cases, p=0.039). The wild type 222D was, not surprisingly, significantly less frequent in severe than in mild cases (p<0.001).
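The headline comparison can be checked directly from the reported counts (11 of 61 severe/fatal cases carrying D222G versus 0 of 205 mild cases). Below is a minimal sketch in Python using SciPy's implementation of Fisher's exact test; the 2x2 table is assembled from the counts above, and the variable names are our own:

```python
from scipy.stats import fisher_exact

# Rows: severe/fatal cases, mild cases
# Columns: D222G present, D222G absent
table = [[11, 61 - 11],
         [0, 205]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher's exact p = {p_value:.1e}")  # well below 0.001, as reported
```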
In several of the patients where D222G mutant viruses were found, they coexisted with wildtype 222D viruses. Further analysis of this phenomenon is ongoing.
The cases infected with the D222G-mutated virus were not epidemiologically related to each other, and the mutated viruses do not cluster together in phylogenetic analysis (data not shown).
Validity and limitations of the analysis
Cases with severe clinical outcomes were much more likely to be included in our study for several reasons: they are more likely to seek healthcare, they are more likely to be prioritised for virological testing, and their specimens are more likely to be forwarded to the national reference laboratory, where they have a higher chance of being selected for detailed analysis than viruses from mild cases. Because of this, we chose to record the frequency of a given genotype in each severity group and compare it with the corresponding frequency in other severity groups. This approach is not expected to have a selection bias.
Cases were classified as mild, severe non-fatal and fatal based on the patient information that was available to us. Some seemingly mild cases may later have progressed to severe outcomes without our knowledge, or the presented patient information may have been incomplete, but we think such cases must be few. On the other hand, all severe and fatal cases were confirmed as non-mild. Thus, the fact remains that only cases with confirmed severe outcomes exhibited the D222G mutation in our investigation.
The sampling period for the cases analysed spans from the initial detections of the pandemic H1N1 virus in early May 2009 until early January 2010. The first severe and fatal cases occurred in July. By the end of December, the epidemic in Norway had largely passed, and a large proportion of cases in our data set is from the peak period in October and November. At all times an effort was made to include a reasonable number of non-severe cases in our analyses, and such cases were well represented throughout the pandemic. The fractions of severe/fatal cases among all analysed cases during the two-month periods July/August (n=21), September/October (n=84), and November/December (n=149), were within the range of 23% to 26%. Severe outcomes were not recorded among the few cases in May and June (n=11) and in January (n=1). We thus do not see a trend over time in the composition of severe versus mild cases in our dataset that could lead to an artificial difference in the frequency of the D222G substitution. Furthermore, the D222G substitution was represented also among the earliest fatal and severe cases in July and August. Specimens from both the lower and upper respiratory tract were analysed. Lower respiratory tract specimens were available from severe/fatal cases only, and in some cases they were the only materials available. However, in all cases where we had paired upper and lower airway specimens (five cases with 222D and four cases with 222G), the wildtype-versus-D222G pattern was matching between the locations. We have therefore no reason to believe that this difference in proportion of lower airway specimens distorted the analysis.
Discussion
Amino acid position 222 resides in the receptor binding site of the HA protein and may possibly influence the binding specificity and thus the cellular tropism of the virus. The corresponding difference between two viruses from the 1918 Spanish influenza pandemic correlates to a shift in receptor preference [3], which conceivably could make the virus prone to infect a wider range of cells in the lower respiratory tract [4,5]. However, the effect of a mutation depends on the molecular context and it is unclear whether the binding properties are affected likewise in the present pandemic virus as they were in the 1918 influenza virus.
Our observations are consistent with an epidemiological pattern where the D222G substitution is absent or infrequent in circulating viruses, with the mutation arising sporadically in single cases where it may have contributed to severity of infection. This may aid in filling some knowledge gaps identified in a recent preliminary review of this and other mutations in the pandemic virus [6]. The correlation between presence of the D222G substitution and a severe clinical outcome may reflect an increase in pathogenicity caused by the mutation, possibly related to a change in cellular tropism rendering the virus more pneumotropic. Conversely, it is possible that the likelihood of such mutations arising is higher in patients who fail to fight off the virus rapidly and have virus already colonising the lower respiratory tract. These two possibilities are not mutually exclusive. A large proportion of the fatal and severe cases had underlying risk conditions. However, some of the D222G cases manifested themselves as a rapid unexpected deterioration after a period of mild symptoms in previously healthy subjects, and we consider it likely that there is a causal relationship between the occurrence of the D222G mutation in this virus and severe disease.
It should be borne in mind, however, that the majority of severe and fatal cases investigated did not carry the D222G substitution and, clearly, this mutation is not required for a severe outcome.
Conclusions
To our knowledge, this is the first identification of a change in the pandemic virus that correlates with a severe clinical outcome. However, whereas our data lend statistically significant support to an association between the D222G mutation and severity, the number of mild cases would need to be larger to determine whether mutant viruses are indeed circulating at a very low frequency also in non-severe cases. Provided that D222G mutant viruses are not circulating, i.e. that they are less transmissible, the immediate public health impact of this finding is limited. However, it may have implications for the management of severe cases where the virus, if transmitted through massive exposure, may be more virulent than the commonly circulating variant. Furthermore, it may serve as a reminder that the generally very low virulence of the current pandemic virus is not a fixed characteristic, and that there is no reason for complacency in carrying out measures that limit infection with this virus at individual and population level.
Further virological, clinical and epidemiological investigations are needed to ascertain the role of this and other mutations that may alter the virulence and transmissibility of the pandemic influenza A(H1N1) virus. | 2014-10-01T00:00:00.000Z | 2010-03-04T00:00:00.000 | {
"year": 2010,
"sha1": "2e0e9e89828613c77f3f70e77a1fa9957ebbe503",
"oa_license": "CCBY",
"oa_url": "https://www.eurosurveillance.org/deliver/fulltext/eurosurveillance/15/9/art19498-en.pdf?containerItemId=content/eurosurveillance&itemId=/content/10.2807/ese.15.09.19498-en&mimeType=pdf",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "2e0e9e89828613c77f3f70e77a1fa9957ebbe503",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249899537 | pes2o/s2orc | v3-fos-license | The Profile of Land Carrying Capacity and Food Security in Gunungkidul Regency, Yogyakarta
The relationship between land carrying capacity and food security in Gunungkidul Regency has not been widely studied and has the potential to become an important problem in social agriculture in the coming years. This study determined the level of carrying capacity of rice and corn production land and the level of food security in Gunungkidul Regency, Special Region of Yogyakarta Province. Administratively, the study area consists of 18 sub-districts. The research used secondary data from the Central Statistics Agency, the Department of Agriculture, Bappeda, and the Department of Health, published in 2020. Land carrying capacity profiles and food security were analysed quantitatively and descriptively. The results show that the land carrying capacity based on the production of rice, corn, soybeans, and cassava in Gunungkidul Regency is in class II, meaning the sub-districts of Gunungkidul Regency have fairly optimal land carrying capacity and are able to meet food demand. The food security value of each sub-district in Gunungkidul differs according to aspects of food availability, access, and utilization. Gunungkidul Regency as a whole shows food security in priority category 4, meaning its sub-districts are already quite resilient in terms of food security. Intensification of agricultural land, diversification of food consumption, priority infrastructure, and strengthened social support are needed to improve regional food security in Gunungkidul Regency.
Introduction
Food availability and security are crucial issues for Indonesia. Therefore, one of the main indicators of successful development and governance is often measured by, and linked to, the government's ability to provide food for its people. This is in line with the pillars of food security stated by FAO in 1992, namely the availability, accessibility, stability, and utilization of food. FAO includes utilization among the pillars of food security, whereas Indonesia has not yet included this element in its food security framework (Nurhemi et al., 2014).
According to Arsyad (2008), the availability of sustainable agricultural land resources is a requirement for national food security. The availability of agricultural land for food is closely related to several things, namely: 1) the potential of food agricultural land resources, 2) land productivity, 3) fragmentation of agricultural land, 4) the scale of agricultural land tenure, 5) the irrigation system, 6) agricultural land use, 7) conversion, 8) farmers' income, 9) agricultural human resource capacity, and 10) agricultural policies. Research by Riptanti et al. (2020) in East Nusa Tenggara, analysing land carrying capacity and the capacity of dryland farmers facing food insecurity, shows that dry or extreme geographical conditions affect the level of land carrying capacity and food insecurity among dryland farmers. Under such land carrying capacity conditions, the main determinant of food vulnerability is the management of income and food reserves for daily needs.
In addition, a research report from Sinaga & Dewata (2020) found that assessing the carrying capacity of agricultural land is a determining factor in, and related to, an area's ability to be self-sufficient in food based on the calorie needs of the population; their research in Tanah Datar district identified three classes of food security level. Research by Putri et al. (2019) on land carrying capacity in West Kalimantan Province found that Pontianak has a relatively safe level of population pressure but low land carrying capacity compared to other areas. Their results also show a negative correlation between land pressure and food sufficiency in the studied area, so innovation is needed to develop sustainable agricultural practices.
Another study, conducted in Thailand by Bunyasiri & Isvilanonda (2009), also stated that land carrying capacity in the form of well-managed rice farming practices, especially good water resources, increases food sufficiency and food sovereignty among farming families. This study seeks to examine these determinant factors more deeply, and against the conditions of the three cases above, Gunungkidul Regency becomes very interesting to study. Gunungkidul Regency is the southernmost regency of the Special Region of Yogyakarta Province and covers an area of 1,485.36 km2, or 46.63% of the province's total area.
The area's topography is limestone mountains stretching from west to east, and this topography affects land use types in Gunungkidul Regency. Infertile soil, exacerbated by problems of water availability, makes this a poor area with a per capita income of 3.2 million rupiah. This is because 70% of the population of Gunungkidul Regency are smallholder farmers with various limitations in nature, technology, and capital. These conditions mean that the carrying capacity of agricultural land in this region has a major influence on food insecurity in Gunungkidul.
BPS data for 2020 show that the wetland (rice field) area in Gunungkidul Regency is 7,875 ha. The dry land area in Gunungkidul Regency in 2018 was 117,332 ha, a decrease of 503 ha from the 117,829 ha recorded in 2016. The largest area of non-rice-field land in the province is in Gunungkidul Regency, at 117,332 ha. Agricultural land in Gunungkidul Regency is predominantly dry land, with only 5% in the form of rice fields. Extreme natural conditions falling into the category of marginal land have caused several areas in Gunungkidul Regency to be categorized as food insecure.
Methods
This study used quantitative methods with secondary data analysis. The secondary data are from the Central Statistics Agency of Gunungkidul Regency (2020), the Agriculture Service of Gunungkidul Regency (2020), and the Bappeda RISPAM (2020). In general, the method in this research is descriptive quantitative with secondary data. Data were obtained directly from the relevant agencies and processed by tabulating land carrying capacity and food security data, with each map and regional profile visualized using a scoring system (quantitative descriptive).
Land Carrying Capacity
This study determines the level of carrying capacity of agricultural land for food crops using a ratio of the agricultural land available per capita to the land required per capita for food self-sufficiency:

Φ = (Lp / Pd) / (KFM / Prd)

where Lp is the harvested area of food crops (ha), Pd is the population, Prd is the average land productivity (kg of rice equivalent per ha), and KFM is the minimum physical requirement, equivalent to 2,600 Calories per capita per day or 265 kilograms of rice per person per year. Based on these values, the classification determined is:
Level I: Φ > 2.47: an area capable of food self-sufficiency.
Level II: 1 < Φ < 2.47: an area capable of food self-sufficiency but not yet able to provide a decent life for its inhabitants.
Level III: Φ < 1: an area not yet capable of food self-sufficiency.
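As a minimal illustration of this classification, the sketch below computes Φ and assigns the class level. The function names are our own, and the input figures in the example are hypothetical rather than taken from the Gunungkidul data:

```python
KFM_KG_RICE_PER_PERSON_YEAR = 265.0  # minimum physical requirement (rice equivalent)

def carrying_capacity_ratio(harvested_area_ha, population, productivity_kg_per_ha):
    """Phi: agricultural land available per capita divided by land required per capita."""
    land_per_capita = harvested_area_ha / population
    land_required_per_capita = KFM_KG_RICE_PER_PERSON_YEAR / productivity_kg_per_ha
    return land_per_capita / land_required_per_capita

def classify(phi):
    if phi > 2.47:
        return "Level I: capable of food self-sufficiency"
    if phi > 1:
        return "Level II: self-sufficient, but not yet a decent living"
    return "Level III: not yet capable of food self-sufficiency"

# Hypothetical sub-district: 4,000 ha harvested, 60,000 people, 5,000 kg/ha yield
phi = carrying_capacity_ratio(4_000, 60_000, 5_000)
print(round(phi, 2), "->", classify(phi))  # 1.26 -> Level II
```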
Food Security Index Analysis
The food security indicators are food availability (share of food expenditure), food accessibility, and food utilization (adequacy of energy consumption) (Jonsson and Toole, 1991, in Maxwell et al., 2000). The analysis in this study follows that of the Food Security and Vulnerability Atlas of Indonesia (FSVA) 2020 and its classification criteria.
Land Carrying Capacity of Food Crop Commodities (Rice, Corn, Soybeans and Cassava) in Gunungkidul Regency
Calculation of the land carrying capacity of food crop commodities (rice, corn, soybeans and cassava) for each sub-district of Gunungkidul Regency in 2020 yields varying levels of land carrying capacity, reflecting whether each area is food self-sufficient according to the class criteria described in the Methods. This study uses four types of food crop commodities in accordance with the data available for Gunungkidul Regency. Based on the secondary data, the results are as follows. The carrying capacity of land based on rice production in Gunungkidul Regency shows that none of the 18 sub-districts falls into class I; none of them is fully capable of food self-sufficiency in rice. This can be attributed to the differing topographic conditions of Gunungkidul Regency in each sub-district. The area tends to have limestone mountain topography, so planting rice everywhere is not possible, because rice also needs sufficient water, and many areas in Gunungkidul Regency still lack water. Among the 18 sub-districts, those in class II are Panggang, Saptosari, Girisubo, Ponjong, and Gedangsari.
This means that the carrying capacity of the land there is optimal and these areas are quite capable of food self-sufficiency. Thirteen sub-districts fall into class III: Purwosari, Paliyan, Tepus, Tanjungsari, Rongkop, Semanu, Karangmojo, Wonosari, Playen, Pathuk, Nglipar, Ngawen, and Semin. This means that the carrying capacity of their agricultural land is low and these areas have not been able to achieve food self-sufficiency. Of the 18 sub-districts in Gunungkidul Regency, most therefore fall into the criteria of unsupportive land carrying capacity, and most have not yet become self-sufficient in rice. This is also seen in the rice production values and rice field areas of the sub-districts included in the unsupportive category.
The carrying capacity of land based on corn production in Gunungkidul Regency shows that none of the 18 sub-districts falls into class I; none of them is fully capable of food self-sufficiency in corn. This can be attributed to the differing topographic conditions of each sub-district, and the food crops planted also vary depending on the land conditions of each sub-district. Class II land carrying capacity for corn production is found in 11 sub-districts: Panggang, Paliyan, Saptosari, Tepus, Tanjungsari, Rongkop, Girisubo, Ponjong, Karangmojo, Playen, and Gedangsari. Class III is found in 7 sub-districts: Purwosari, Semanu, Wonosari, Pathuk, Nglipar, Ngawen, and Semin. Most of the 18 sub-districts in Gunungkidul Regency thus have fairly supportive land carrying capacity and are quite capable of food self-sufficiency based on corn production, which is also related to the regency's topography. Land carrying capacity based on soybean production shows that none of the 18 sub-districts falls into class I or class II. No sub-district is fully, or even moderately, capable of food self-sufficiency in soybeans; all sub-districts still rely on other regions to meet their soybean needs. This, too, can be attributed to the differing topographic and land conditions of each sub-district: only a few areas grow soybeans, so soybeans are not a staple food in Gunungkidul Regency. All 18 sub-districts fall into class III for soybean production; most of the regency is in the unsupportive land carrying capacity category and unable to be self-sufficient based on soybean production, which is also related to the regency's topography.
The carrying capacity of land based on cassava production in Gunungkidul Regency shows that 8 sub-districts fall into class I: Purwosari, Saptosari, Girisubo, Ponjong, Karangmojo, Playen, Gedangsari, and Nglipar. These 8 sub-districts have been able to achieve food self-sufficiency with the cassava commodity. Most of the land in Gunungkidul Regency is dry land, on which cassava grows very well. Cassava is a local carbohydrate source in Indonesia, ranking third after rice and corn (Prabawati et al., 2011); other sources even place cassava second after rice (Koswara et al., 2009). Class II land carrying capacity for cassava production is found in 10 sub-districts: Panggang, Paliyan, Tepus, Tanjungsari, Rongkop, Semanu, Wonosari, Pathuk, Ngawen, and Semin. These regions are capable of food self-sufficiency in cassava production. No sub-district falls into class III for cassava. Most of the 18 sub-districts in Gunungkidul Regency are thus in the fairly supportive land carrying capacity category and quite capable of food self-sufficiency based on cassava production, which is also related to the regency's topography. Based on Table 3, the average land carrying capacity index in Gunungkidul Regency is in class II. This means that the land carrying capacity across all food crop commodities is optimal and sufficient to meet the community's food needs. Kumar et al. (2007) explain that measuring the carrying capacity of an ecosystem's land is a recent approach that describes how ecosystem productivity provides food security for the population, and how the carrying capacity of agroecosystems changes over time, using production, productivity, food security, and employment indicators. Factors affecting the carrying capacity of agroecosystems are also identified implicitly, so that carrying capacity development policies can be adopted in regional agro-ecosystems. Prabowo (2010) states that ensuring the sustainability of food security through increased availability of national food, especially rice, and improved farmer welfare requires both long-term and short-term policies. The results above show that sustainable food agriculture is very important to realize, especially strategies addressing the conversion of agricultural land so that development can proceed well. Food availability in an area can be seen from the net production of staple carbohydrate foods, namely rice, corn, and cassava; this study uses the three such commodities produced by Gunungkidul Regency. Soybean is not included in the calculation of food availability, because the commodities used in this aspect are food crops that are main carbohydrate sources. The calculation of food availability (Table 4.8) shows that the sub-districts in Gunungkidul Regency have values below 0.5.
Based on the 2009 Food Security and Vulnerability Atlas (FSVA), a value below 0.5 represents vulnerability to food insecurity in the low category, or priority 6. In other words, Gunungkidul Regency has met food sufficiency at the sub-district level, because the food availability ratio is already at an adequate level. The highest food availability ratio is in Wonosari sub-district, and the lowest is in Saptosari sub-district. The sub-districts of Gunungkidul Regency have the ability to obtain sufficient and varied nutritious food. The food access aspect uses three indicators: the percentage of the population with low welfare (related to poverty), the percentage of households without access to electricity, and the percentage of households without access to clean water. Based on Table 4.9, the sub-district with the lowest percentage of low-welfare population is Wonosari, meaning the welfare of its population tends to be good. Wonosari has a large population and high population density; as the capital district of Gunungkidul Regency and the centre of the city, with a variety of industries, it has the lowest value on the low-welfare indicator compared with other sub-districts.
Food Security in Every Sub-district of Gunungkidul Regency
The next indicator, the percentage of households without access to electricity, shows that in Nglipar and Rongkop sub-districts, residents' households already have electricity access evenly. A very low percentage of the population without electricity access supports economic activity, which rises with accessible electricity. Higher economic activity opens greater opportunities for job access.
Greater job access in turn raises the welfare of the people of Gunungkidul Regency. The highest percentage of households without electricity access is in Semanu sub-district. The population without access to electricity is one of the parameters of food security in terms of access to food and livelihoods.
For the indicator of households without access to clean water, Purwosari sub-district has the highest percentage, meaning many households there still find clean water quite difficult to access. Conversely, a low share of the population without access to clean water helps safeguard public health: high consumption of clean water supports the community's nutritional status being properly met.
Better community nutrition in turn raises the level of food security. Playen sub-district has the smallest percentage compared to other sub-districts, although several are relatively small. Manesa et al. (2008) state that food security is basically defined as everyone's access, at all times, to the food they need to live a healthy life. From the various concepts of food security, household food security is determined not only by availability and purchasing power but also by access to food itself, whether obtained directly or through other networks. This study considers two indicators for the food utilization aspect across all sub-districts of Gunungkidul Regency: the ratio of population per health worker relative to population density, and the percentage of under-fives with undernutrition. Table 5 shows that the sub-district with the highest ratio of population per health worker relative to population density is Girisubo, which has adequate health personnel relative to its population density. The smallest value is in Wonosari sub-district, where the number of health workers is insufficient for the population density; Wonosari's dense population means it should be provided with a correspondingly large number of health workers.
The second indicator, the percentage of under-fives with undernutrition, is highest in Rongkop sub-district and lowest in Purwosari sub-district. This means Purwosari is still secure in providing nutritious food for its residents. A toddler of good/standard weight reflects good absorption of food, which supports the nutritional status of under-fives being properly met; well-nourished under-fives in turn contribute to a good food security situation. The food security index combines three measurable aspects: availability, access, and utilization. The best values, in the resilient range or priority 5 under the Food Security and Vulnerability Atlas (FSVA) 2020 criteria, are found in two sub-districts, Patuk and Playen. The lowest average value, in the vulnerable range or priority 2, is found in Tepus sub-district. For Gunungkidul Regency as a whole, the food security index is in the moderately resilient range, or priority 4, under the FSVA 2020 criteria. The spatial distribution of average food security values represented on the map shows that the sub-district in red (priority 2) lies on the south side, in just one sub-district, Tepus. The pink areas (priority 3) lie at the north-west and south-west ends, in 2 sub-districts. The yellow areas (priority 4) are spread from north to south, and the light-green areas (priority 5) are spread on the north-west side, in 2 sub-districts.
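To make the combination of the three aspects concrete, here is a purely illustrative sketch of mapping aspect scores to an FSVA-style priority class. The equal weighting, the [0, 1] scale, and the uniform cut-offs are assumptions for illustration only; the actual FSVA 2020 weights and thresholds are not reproduced in the text:

```python
def fsva_priority(availability, access, utilization):
    """Map three aspect scores on [0, 1] (higher = more food-secure)
    to priority classes 1 (most vulnerable) .. 6 (most resilient).
    Weights and cut-offs here are illustrative assumptions."""
    composite = (availability + access + utilization) / 3.0
    return min(6, int(composite * 6) + 1)  # six equal-width bands

# A sub-district scoring moderately on all three aspects lands in priority 4,
# the "moderately resilient" category reported for Gunungkidul as a whole.
print(fsva_priority(0.55, 0.60, 0.50))  # -> 4
```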
Conclusions
The land carrying capacity values based on rice, corn, soybean and cassava production show class II, meaning the sub-districts in Gunungkidul Regency are optimal and capable enough to meet their food needs, i.e., food self-sufficiency. Moreover, the food security value of each sub-district in Gunungkidul differs according to the aspects of food availability, access and utilization. Gunungkidul Regency as a whole shows food security at priority 4 under the classification of the Food Security and Vulnerability Atlas (FSVA) 2020.
For the government, several strategies can be applied to maintain agricultural land, increase its carrying capacity, and implement sustainable agricultural land use, namely by mapping sustainable food agricultural land down to the village level, so that there are no discrepancies between agencies in rice and corn production data. Sustainable food security also requires diversification of food consumption and the building of priority infrastructure related to sustainable agricultural land and food security. Further research can develop this study by examining issues related to food security and its influencing factors with more in-depth variable data.
"year": 2022,
"sha1": "14b5a15e12b3df042b51cfae68c6b338476e73ef",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.undiksha.ac.id/index.php/MKG/article/download/41911/22046",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "05a13b4d66015395aac04c8b893756feb331245b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
226698848 | pes2o/s2orc | v3-fos-license | Organizational climate, organizational citizenship behaviour and turnover intention: Evidence from Jordan
Introduction
It is common for organisations to desire employees who are collaborative, proactive, and demonstrate high commitment and high performance standards. Such employees are effective and efficient, and they contribute to the achievement of organizational excellence. Individual employee behaviour is vital to such excellence, and accordingly, Organizational Citizenship Behaviours (OCBs) constitute a distinctive type of work behaviour: behaviours of individuals that are deemed constructive to the organization. Notably, these behaviours appear to be discretionary and are not acknowledged by the formal reward system, either directly or explicitly. Expressed differently, the behaviours are motivated by personal choice, and thus their absence usually brings no punishment. OCB is believed to significantly affect the effectiveness and efficiency of work teams and organizations; for this reason, Ummah and Athambawa (2018) reported that OCB contributes to the organization's overall productivity. Bateman and Organ (1983) perceived OCB as an extra-role behaviour, denoting the presentation of constructive propositions leading to organisational development as well as efforts made to altruistically assist others. Rotundo and Sackett (2002) regard OCB as an essential constituent of job performance. However, OCB goes beyond conventional measures of job performance; it manifests a behaviour type referring to the unexpected constructive contributions of employees. Over the past three decades, OCB has been regarded as a key construct in the domains of accounting, management, and psychology. Indeed, the literature shows a significant number of studies exploring OCB (e.g., Bateman & Organ, 1983; Boiral et al., 2015; Hammer et al., 2016; Niehoff & Moorman, 1993; Organ & Ryan, 1995; Pitaloka & Sofia, 2014; Podsakoff et al., 2000; Teimouri et al., 2015), implying its importance, particularly in the service domain. OCB has been implemented in many organizations, including restaurants and healthcare facilities; in the context of universities, however, Farooqui (2012) noted that this concept has been neglected. Organ (1988) regarded OCB as a critical factor contributing to the continued existence of an organisation. It is therefore important to be aware of the variables which significantly and positively assist in the formation of this sought-after behaviour inside the organisation. As found in various studies (e.g., Alotaibi, 2001; Bateman & Organ, 1983; Chahal & Mehta, 2010; Darto et al., 2015; Foote & Li-Ping Tang, 2008; Jahangir et al., 2004; Khokhar & Zia-ur-Rehman, 2017; Organ, 1990, 1997; Organ & Lingl, 1995; Organ & Moorman, 1993; Penner et al., 1997; Shahin et al., 2014; Tang & Ibrahim, 1998), citizenship behaviour in an organization is impacted by variables including employee satisfaction, leadership, leadership behaviour, motivation, organisational commitment, organisational justice, organizational climate, organizational culture, and personality.
As explained in Denison (1996), organizational climate is a lasting quality experienced by employees that affects their behaviour; it can be regarded as part of the organization's environment. Relevantly, in Hughes et al. (2008) a supportive climate is the degree of perceived cooperation, coordination and direct-supervisor support, which constructively impacts employees' organizational commitment. Indeed, supportive climate has been found to be strongly linked to outcomes including innovation, employee diligence, employee performance, organizational commitment, and job satisfaction (Chory & Hubbell, 2008; Huang et al., 2010; Lambert et al., 2012). Supportive climate has also been linked to decreases in problems including hostility, interpersonal aggression, employee burnout, obstructionism, absenteeism and deception (Chory & Hubbell, 2008; Huang et al., 2010; Lambert et al., 2012). Furthermore, climate has a strong linkage to citizenship behaviour (Qadeer & Jaffery, 2014). Hence, organizational climate is examined as an OCB predictor in the present study.
The reasons underpinning OCB are sufficiently understood. Nonetheless, the impacts of OCB on organizational effectiveness, or on other measures of organizational effectiveness such as employee turnover, have not been sufficiently examined, creating an empirical gap in the OCB literature. Also, of the many studies identifying OCB's impacting factors, only a handful actually examined whether these behaviours indeed contribute to quitting intention. Furthermore, the available literature shows a dearth of studies addressing the impacts of OCB on variables such as employee turnover; in fact, only a handful have examined the linkage between OCB and employee turnover (e.g., Chen et al., 1998). As such, little generalization can be made on the issue at hand, which calls for further probing and validation of the research findings.
According to Angle and Perry (1981), turnover is among the criteria of organizational accomplishment, or a substitute measure of organizational effectiveness, and for this reason turnover outcomes are worth scrutiny. Turnover is the permanent removal of an employee from the organization, whether voluntary or involuntary. As indicated in Boshoff and Mels (2000), voluntary turnover is the most injurious to the organization because it generally takes the organization by surprise. A significant number of findings prove that employee turnover impacts organizational performance; as indicated in Koys (2003), turnover escalates the costs associated with separation, replacement, and training. The factors contributing to turnover have been explored by many studies (Shaw et al., 1998). However, most of these studies concentrated on antecedents such as demographic factors, cognitive processes, commitment, and job satisfaction, while the role of behavioural antecedents has not been adequately examined.
In the context of internal auditors, their retention is of exceptional concern because, as indicated in Ahmed and Shil (2015), internal auditors add value to the decisions made by management. Accordingly, the notion of value-added internal audit has been directly linked to the effectiveness of internal audit (Lenz & Hahn, 2015). Furthermore, as indicated in Ahmed and Shil (2015), internal auditors advise on the usefulness and implementation of internal controls. Chambers (2015) further added that internal audit assists the organization in accomplishing its missions and visions while also providing a procedure for assessing risk management, control, and governance processes. Accordingly, the merit of an internal audit function has been acknowledged by the New York Stock Exchange, and because of this, Chambers (2015) reported that all listed companies are required to have an internal audit function. In this regard, a successful company without an internal audit function should be a warning signal.
For any organization, internal auditors play a crucial role, and for this reason it is important for an organization to be aware of the factors impacting the OCB of its internal auditors to assure its financial success. Internal auditors work within the organization to monitor and assess how well risks are being handled, how the business is being run, and how well internal processes are functioning. Hence, internal auditors are obliged to perform optimally to fulfil the needs of the organization. On the other hand, the organization is obliged to have appropriate strategies for retaining its employees (in this context, the internal auditors) and thereby achieve better performance. Hence, this study attempts to find out how organizational climate, as an antecedent, leads to the OCB of internal auditors, and how OCB affects internal auditors' turnover intention.
In the context of Jordanian private universities, establishing OCB among employees is now highly crucial for improving these universities' competitiveness. The OCB literature shows a gap concerning studies of this variable in the country's higher education sector; indeed, most past work on OCB was carried out by industrial-organizational and occupational psychologists. Hence, by examining the factors impacting OCB among university internal auditors, this study hopes to offer constructive suggestions to the management of higher educational institutions, allowing them to construct strategies that could assist in attracting and retaining employees in the long term.
Organizational climate and OCB
Organizational climate has been evidenced as a factor affecting the behaviour of the members of an organization (Alipour, 2011; Öz et al., 2010). Essentially, organizational climate is a set of measurable characteristics of the working atmosphere that is perceived, directly or indirectly, by those who work within it, and that impacts both their motivation and behaviour. Hence, scrutinizing organizational climate, determining its type, and attempting to improve it can improve other organizational variables as well. Meanwhile, Podsakoff et al. (2000) mentioned that organizational citizenship behaviour, being a non-compulsory behaviour, can be more strongly affected by numerous factors associated with attitudes and personalities. Hence, taking into account the effect of organizational climate on employee behaviour, organizations should always pursue the identification, change, and improvement of organizational climate. This will affect employees' personal behaviour while also easing the accomplishment of organizational goals. As today's environment is highly competitive, organizational effectiveness and survival are impacted by the attitudes and behaviours of employees, increasing the importance of organizational climate among scholars of organizational behaviour. Organizational climate has in fact been examined in numerous organizational contexts and has been linked to several individual, group, and organizational outcomes. Specifically, in past studies (e.g., Ahmad et al., 2012; Bellou & Andronikidis, 2009; Dickson et al., 2006; Rahimic, 2013; Zhang & Liu, 2010), organizational climate has been linked to variables including productivity, job satisfaction, employee performance, organizational effectiveness, organizational justice, organizational commitment, work motivation, organizational alienation, predisposition to leave, and anxiety. Organizational climate has, moreover, been suggested to promote favourable behaviours in organizations, including OCB as well as creative, proactive, and innovative behaviour (Lin & Lin, 2011; Moghimi & Subramaniam, 2013; Patterson et al., 2004; Peterson, 2002; Randhawa & Kaur, 2015). Considering these findings, the hypothesis below is presented: H1: There is a positive relationship between the organizational climate and the levels of OCB.
OCB and turnover intention
OCB is a concept that has been debated by scholars. For instance, Danayifar et al. (2010) viewed OCB as a reflection of how employees voluntarily act towards their work in ways that are clearly not part of their job descriptions. Some employees willingly do good deeds, while others do not. In this regard, OCB can be described as the performance of good deeds towards the organization without anticipating any reward in return. Relevantly, Khalid et al. (2013) describe OCB as voluntary, discretionary and unexpected behaviours aimed at helping peers achieve success. OCB also strengthens a cooperative work culture and incidentally forms an element of organizational life classified as spontaneous behaviour; it therefore affects the image and reputation of the organization. The linkage between OCB and turnover intention is currently a subject of interest among scholars (Chen, 2005; Chen et al., 1998; Mossholder et al., 2005; Paillé, 2013; Saraih et al., 2017). Chen et al. (1998) and Chen (2005) found behavioural antecedents to be crucial predictors of turnover intention and actual turnover; accordingly, OCB can justifiably be employed in predicting turnover intention. Several past works (e.g., Chen et al., 1998; Chen, 2005; Podsakoff et al., 2009) have also scrutinized the linkage between OCB and turnover intention. Furthermore, Chen et al. (1998) found that the intensity of OCB denotes the true willingness and inclination of employees: how involved they would like to be with their organization, or how much they want to steer clear of it. The key argument here is that a lower OCB level signals a stronger reluctance of the employee to be part of the organization, which translates into a higher likelihood of leaving. This linkage has been examined in past studies. Oren et al. (2012), for instance, reported a negative linkage between the two variables, regarding OCB as a behaviour that is advantageous to the organization while classing turnover intention as an unfavourable withdrawal reaction towards it. Also, employees with a high level of OCB are less likely to leave their present workplace than those with a low level of OCB (Sharma et al., 2010). Interestingly, in the context of hotel industry employees, Khalid et al. (2009) reported a positive linkage between OCB and turnover intention. Notably, a high level of OCB signifies low intentions of employees to leave the organization and, according to Khalid et al. (2013), it consistently fosters other constructive attitudes and behaviours at the workplace. Considering these findings, the hypothesis below is presented: H2: There is a negative relationship between OCB and the turnover intention of internal auditors.
Research Respondents
Internal auditors employed at Jordanian private universities were chosen as respondents in this study. As of 2019, Jordan had 20 private universities (Ministry of Higher Education & Scientific Research, 2019), and these universities were chosen as this study's population. The sample size was determined following the recommendation of selecting a sample at least 10-20 times larger than the number of required variables, so that it would be sufficient for the analysis. As such, a total of 130 respondents were chosen, and a questionnaire was the method employed for data gathering. The 130 sets of questionnaires were personally handed to the respondents to assure that a sufficient number of responses would be obtained. The study obtained a 74% response rate (96 returned questionnaires), but because 18 of the returned questionnaires were incomplete, only 78 questionnaires underwent further analysis. The study period started in July 2019 and ended in September 2019. Among the respondents, most were aged 30 and below (67%), male (88.3%), married (53%), holders of a Master's degree (12%), and had been employed as internal auditors for 1-3 years (71%).
Measuring Instrument
Organizational climate: This construct was measured using the modified scales of Kao (2015, 2017). In particular, Kao's (2015) scales were grounded in the Litwin and Stringer Organizational Climate Questionnaire (LSOCQ) created by Litwin and Stringer (1968). In the questionnaire, this construct is represented by 9 items, detailed as follows: interpersonal relationship (4 items), structure climate (3 items), and responsibility climate (2 items) (Kao, 2015). Cronbach's alpha was used to determine the instrument's reliability; in this study, the value obtained was 0.834.
Organizational Citizenship Behaviour:
This construct was measured using the modified scales of the "Organizational Citizenship Behaviour (Individual)" instrument by Lee and Allen (2002), which contains statements relating to interpersonal facilitation (Van Scotter & Motowidlo, 1996), interpersonal harmony (Farh et al., 1997; Okeke & Nwankpa, 2018) and interpersonal helping (Graham, 1991). In responding to the items of this construct, participants were asked to indicate how often they engage in the identified behaviours on a 7-point scale (1 = never, 7 = always). Cronbach's alpha was used to determine the instrument's reliability; in this study, the value obtained was 0.799. Turnover Intention: This construct was measured using 6 items obtained from Bothma and Roodt (2013). These items evaluate the respondents' intent to leave their current jobs. This measure has been previously employed in the literature on auditors and turnover (e.g., Al-Shbiel et al., 2018). Responses were given on a 7-point Likert scale. For the items of this construct, the previously achieved Cronbach's alpha was 0.911, which, according to Bothma and Roodt (2013), denotes acceptable internal reliability and construct validity of the items.
Analytical method
The partial least squares (PLS) structural equation modeling (SEM) technique, run with SmartPLS version 3.0, was used in this study for data analysis. As a SEM method, PLS supports the analysis of measurement and structural models and the associated paths. The method also yields factor loadings comparable to those of principal component analysis (see Sosik et al., 2009). Hence, the use of PLS enables examination of the research model's validity and analysis of the hypothesized relationships in the empirical model along with their significance. As mentioned in past studies (e.g., Hair et al., 2013), PLS has advantages over other structural modeling methods. First, PLS can handle variables with non-normal distributions (e.g., measures of turnover intention). Second, PLS is well suited to path model estimation and is appropriate for sample sizes that are small relative to the complexity of the research model (n = 78). Finally, PLS is appropriate for exploratory studies such as this one.
In the context of this study, previous knowledge regarding the relationship between OCB and turnover intention in the auditing domain is very limited.
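To make the estimation logic concrete, the following is a minimal sketch of the two-step idea behind a PLS-style analysis, not the SmartPLS algorithm itself: construct scores are approximated by the first principal component of each construct's standardized indicators, path coefficients are estimated by regression on those scores, and significance is assessed by bootstrapping. The indicator data here are synthetic stand-ins, and all variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (the real study analysed 78 questionnaires)
n = 78
climate_items = rng.normal(size=(n, 7))
ocb_items = rng.normal(size=(n, 7)) + climate_items.mean(axis=1, keepdims=True)
ti_items = rng.normal(size=(n, 6)) - ocb_items.mean(axis=1, keepdims=True)

def construct_score(X):
    # First principal component of standardized indicators (proxy for a PLS composite)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    s = Z @ vt[0]
    return (s - s.mean()) / s.std(ddof=1)

def path_coef(x, y):
    # Standardized path coefficient from a simple regression of y on x
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / (x @ x))

climate, ocb, ti = (construct_score(m) for m in (climate_items, ocb_items, ti_items))

# Bootstrap the two hypothesized paths (H1: climate -> OCB, H2: OCB -> TI)
boot = np.empty((5000, 2))
for b in range(5000):
    idx = rng.integers(0, n, n)
    boot[b] = (path_coef(climate[idx], ocb[idx]), path_coef(ocb[idx], ti[idx]))
for name, draws in zip(["climate -> OCB", "OCB -> TI"], boot.T):
    lo, hi = np.quantile(draws, [0.025, 0.975])
    print(f"{name}: beta = {draws.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")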
Measurement model
Construct reliability is affirmed by checking the item loadings and composite reliability values. For individual item reliability, the loadings are checked; in general, loadings should be at least 0.7 (Hair et al., 2017). Still, Hulland (1999) noted that some items in an estimated model may have lower loadings, particularly for newly constructed scales or existing scales transferred to a different context. Meanwhile, Hair et al. (2017) stated that items with loadings between 0.40 and 0.70 should only be removed if their removal increases the composite reliability or average variance extracted (AVE). Hence, 2 items from organizational climate, 2 from OCB, and 1 from turnover intention were eliminated, as their loadings were lower than 0.4 and their removal was appropriate; the elimination increased the composite reliability and AVE values, and the remaining items were retained. For PLS-SEM, Hair et al. (2017) recommend the composite reliability score for assessing construct reliability, even though the more common measure of internal consistency is Cronbach's alpha. Unlike Cronbach's alpha, composite reliability does not presuppose equal reliability of all indicators; Cronbach's alpha is also sensitive to the number of items in the scale (Hair et al., 2017). Based on the outcomes shown in Table 1, the composite reliability of all latent variables is higher than the recommended threshold of 0.7. As the convergent validity measure, this study employs the AVE, which should exceed 0.5 as suggested by Hair et al. (2017); the lowest AVE in this study is 0.63, so the requirement is fulfilled for all constructs, and the internal consistency of each construct is deemed sufficient. To ensure discriminant validity, each construct must share more variance with its own measures than with any other construct, which is signified by the square root of the AVE of each construct exceeding its correlations with the other constructs (Fornell & Larcker, 1981; Hair et al., 2017). Furthermore, the square root of the AVE should be no less than 0.7 (Chin, 1998). All constructs in this study met these criteria (see Table 2), implying discriminant validity.
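For reference, the standard definitions of the two statistics used above, for a construct with $k$ standardized indicators with loadings $\lambda_i$, are

$$\mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k}\left(1 - \lambda_i^2\right)}, \qquad \mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k} \lambda_i^2,$$

where the error variance of each indicator is taken as $1 - \lambda_i^2$. The Fornell-Larcker criterion then requires $\sqrt{\mathrm{AVE}_j} > r_{jl}$ for every pair of constructs $j \neq l$, with $r_{jl}$ the inter-construct correlation.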
Structural model
In this study, the structural model is evaluated using the R² values of the dependent variables: 0.38 for OCB and 0.49 for turnover intention. These values exceed the commonly tolerated thresholds mentioned in Falk and Miller (1992) and Chin (1998), and correspond with values reported in PLS research in accounting and auditing (e.g., Al Shbail et al., 2018a, 2018b; Obeid et al., 2017). The significance of the coefficients is decided based on 5000 bootstrap samples. Accordingly, the hypothesis testing results are displayed in Table 3 below. Consistent with hypothesis H1 established in this study, organizational climate has a positive impact on OCB (β = 0.385, p < 0.05). This result is also in agreement with past research models positing organizational climate as an antecedent of OCB. Also, as posited in H2, OCB has a significant negative impact on the turnover intention of internal auditors (β = -0.288, p < 0.05).
Conclusions
Among internal auditors, the attrition rate is high; it is now a global issue and poses great challenges for business leaders. Turnover among internal auditors has both direct and indirect costs, implying a high cost of turnover to the organization. As found in this study, a high level of OCB was the general reason why internal auditors remain in their jobs, and organizational climate appears to be an antecedent of OCB. The study thus contributes to the evaluation of factors known to increase OCB among internal auditors. In the context of private universities, internal auditor empowerment should accordingly be improved through the work environment: systematic assessment of the work environment should be made to monitor and evaluate the physical work conditions, the communication climate, and how well the rules and procedures correspond with the strategies of the organization, so that internal auditors' psychological empowerment and performance can be enhanced. Furthermore, competence-based performance appraisals should be implemented, and employees should be made aware of their roles, their competences, and the performance output expected of them. As shown in this study, fostering OCB gives the university loyal employees, which reduces the turnover problem; at the same time, the quality of work displayed by employees increases organizational performance and competitive advantage. Nonetheless, several limitations of this study should be pointed out. Firstly, this is a cross-sectional study, since conducting longitudinal studies on this subject is difficult. Another limitation is the uniqueness of the sample, which raises the question of whether public universities have similar characteristics; notably, the proposed model may be tested in other economic professions and sectors as well. The measurement of turnover through future intentions is another limitation: even though the scale employed here has been used in many past works, it is essentially a subjective measure of behaviour. As a final point, the model proposed in this work does not consider the linkage between organizational climate and the turnover intention of internal auditors. | 2020-08-06T09:08:41.042Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "2b023a8e93be82230a0f8d5bce5a6ae4c9f77dc9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.msl.2020.7.037",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "261d22c96900234ed98933eb16ea1b635205abc5",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": [
"Business"
]
} |
120270034 | pes2o/s2orc | v3-fos-license | Spline Curve Modeling Based Gait Recognition
Gait recognition has become an active research area with the increasing demand for effective video surveillance systems. This paper presents an innovative method of modelling human gait with spline curves. The proposed method involves locating several human joints, namely the coxal joint, a pair of knee joints and a pair of ankle joints. The five joints located are used as control points to construct a spline curve. Instead of comparing the constructed gait models directly, which has high time complexity, we take the area under the constructed spline curve, a linear metric, as our gait feature, and build a feature vector containing the area signals of the sequence of images considered. The DCT (Discrete Cosine Transform) is applied to the feature vector to obtain the feature matrix, whose dimensionality is then reduced with MSPCA (Multi-scale Principal Component Analysis). Classification of the feature vectors is done using K-NN and Neuro-Fuzzy classifiers for the subjects in CASIA datasets A and B, and using DTW (dynamic time warping) for the subjects in CASIA dataset C.
Introduction
Human identification at a distance has gained a lot of attention recently due to the increasing need for video surveillance systems. Gait is an attractive feature for human identification at a distance and has attracted considerable interest from computer-vision researchers in the recent past. The genesis of the idea of human tracking can be traced back to Cutting and Kozlowski's perception experiments based on point-light displays [1][2]. In stark contrast with conventional biometric features such as face, iris, palm print and fingerprint, gait has unique characteristics: it is non-contact, non-invasive and perceivable at a distance.
The pragmatic implementation of gait recognition faces several challenges. For instance, gait analysis is very sensitive to deficient or incomplete segmentation of the subject silhouette. Variations in clothing and footwear, and distortions in the gait pattern produced by carrying objects or by walking speed, can make analysis an arduous task. These complexities lead to low recognition rates in the algorithms proposed so far. Existing gait recognition methods can be classified into model-based ones and holistic ones. Model-based methods model the human body with appropriate geometric curves; holistic methods extract spatio-temporal and statistical features.
A. Model based method: Model-based approaches describe the topology of human body parts using geometrical curves. One of the first attempts at modelling can be seen in [3] and Cunado et al. [4], in which the legs were considered as interlinked pendula. There, the phase-weighted Fourier magnitude spectrum was used to recognize gait signatures derived from the frequency components of variations in human thigh inclination. Lee et al. [5] fit ellipses to seven regions of the human body and derived the magnitude and phase of these moment-based region features. Furthermore, statistical methods such as Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) were used to analyse effective features.
B. Holistic/Model-free methods: The holistic methods characterize spatial variation in dynamic variables such as stride length and the width vector. They analyse the variations in shape and distance vectors in the sequence of images to characterize the gait features. Early efforts at gait recognition adopting the holistic approach can be traced back to Niyogi and Adelson [2], who distinguished different subjects from their spatio-temporal gait patterns obtained from a curve-fitted snake. Little and Boyd [6] used frequency and phase features from the optical flow information of walking figures to differentiate individuals. Chai et al. [7] introduced a perceptual shape descriptor to analyse human gait.
R. Tanawongsuwan and A. Bobick [8] used time-normalized joint angle trajectories to create gait signatures.
Though a lot of progress has been achieved using the above-stated approaches, no foolproof method has been established, which is why the scope of research in this area is diverse. Although many well-established gait recognition methods exist, they are either sensitive to variations in silhouette shape or to covariate features such as the walking speed or clothing of the subject. This was the prime motivation for our proposed method.
In this paper a novel feature, adopting a model-based approach to model the lower limbs of the silhouette, is proposed. The motive behind this approach is to mitigate the sensitivity of recognition to variations in silhouette shape due to variations in clothing or carrying conditions. The method is also applied to cases where the walking speed of the subject is variable.
The rest of the paper is organized as follows. Sections 2 and 3 deal with the approach overview and preprocessing. Section 4 deals with gait feature extraction and MSPCA dimensional reduction. Section 5 deals with experimental results and comparison with recent methods. Section 6 concludes the paper.
Approach Overview
The proposed approach can be implemented in the following steps: 1) The background subtraction technique is used to extract the silhouette from the background, which is then preprocessed to remove the noise components introduced.
2) The silhouette is resized by cropping to create an image template [2]. A gait cycle is then extracted by exploiting the variation in the width vector as a feature.
3) The proposed feature, namely the area under the limbs of the subject, is computed after modelling the lower limbs with spline curves; DCT is applied to the feature matrix created, and MSPCA is adopted for dimensional reduction of the extracted area signals.
4) After dimensional reduction, the feature matrix is fed to Neuro-fuzzy and K-NN classifiers for evaluation.
Silhouette Extraction
Silhouette extraction holds great importance in effective gait analysis, as the value of each pixel in every frame of the video sequence must be analysed. The method of background subtraction is adopted to acquire the subject of interest; here, the subject should be the only object in motion in the sequence of frames.
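The paper does not specify the exact background model used; the following is a minimal sketch of silhouette extraction with OpenCV's MOG2 background subtractor, where the video path and parameter values are chosen purely for illustration.

import cv2

cap = cv2.VideoCapture("walk_sequence.avi")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0                 # discard pixels marked as shadow
    # morphological opening/closing to remove speckle noise and fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    silhouettes.append(mask)
cap.release()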
Image Template
After background subtraction, it is apparent that the subject occupies only a small area of the image. To eliminate the redundant boundary around the subject, which occupies the larger portion of the image, we resize the image by cropping the extra portion and fit the subject into a smaller image template, choosing an appropriate width and height so that the image is not corrupted. Firstly, the height of the human silhouette is chosen as the height of the template; secondly, a fixed width is chosen so as to avoid most computational ambiguities. This scaling not only reduces computational complexity but also corrects scale changes due to variation in the subject's distance from the camera. Similar work can be seen in [9].
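A minimal sketch of this template construction follows; the fixed template size (128 x 88 pixels here) is an assumption, not a value given in the paper.

import numpy as np
import cv2

def to_template(mask, out_h=128, out_w=88):
    # Crop a binary silhouette to its bounding box and scale it to a fixed template
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.zeros((out_h, out_w), dtype=np.uint8)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # scale so the silhouette height fills the template height, preserving aspect ratio
    scale = out_h / crop.shape[0]
    resized = cv2.resize(crop, (max(1, int(crop.shape[1] * scale)), out_h),
                         interpolation=cv2.INTER_NEAREST)
    # center horizontally on a fixed-width canvas (crop if wider than the canvas)
    canvas = np.zeros((out_h, out_w), dtype=np.uint8)
    w = min(out_w, resized.shape[1])
    x0 = (out_w - w) // 2
    xs0 = (resized.shape[1] - w) // 2
    canvas[:, x0:x0 + w] = resized[:, xs0:xs0 + w]
    return canvas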
Gait Feature Extraction
From the gait silhouette sequence obtained, the only cue for identifying the gait signature is the temporal change in the silhouette. We propose a novel silhouette modelling method that uses spline curves to model the limbs. The procedure involves finding the coordinates of the coxal joint, the two knee joints and the two ankle joints of each silhouette. The five joints thus found are used as interpolating points to construct a cubic spline curve. The procedure for finding the joints and constructing the spline curve is described in the following sections.
Joint Positioning
The novel feature extracted in this paper, the area under the limbs, requires the silhouette's joints as interpolating points. The control points of the curve are the coxal, knee and ankle joints, which are obtained by the process below, with H denoting the silhouette height (a sketch of this procedure is given after this list).
a) Coxal point - The y coordinate is at 0.72H from the top of the image. Horizontal scanning at this height leads to the following cases. One region: the center of the region is taken as the coxal point. Two regions: this happens if the scanning position is below the actual coxal joint, so the scanning position is adjusted within a band of 0.165H to find the coxal point.
b) Knee points - A circle with radius 0.245H is drawn with the coxal point as the center, and two cases arise. Two regions: this is the condition of the left and right legs bracing apart; the center of each region is the corresponding knee joint. One region: this occurs when the left and right knees overlap. Since the human knee is about 0.1H wide, we choose the point 0.05H left/right of the rightmost/leftmost point as the right/left knee joint.
c) Ankle joints - These are found analogously to the knee joints: the left and right knees are chosen as the centers of the circles, with the shank length 0.246H as the radius.
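Below is a simplified sketch of the joint-positioning logic for the coxal and knee joints, using the body proportions quoted above; region handling is reduced to the basic cases, and the fallbacks and the 15-degree clustering gap are assumptions rather than values tuned in the paper.

import numpy as np

def foreground_runs(row):
    # Return (start, end) index pairs of contiguous foreground runs in a binary row
    idx = np.flatnonzero(row)
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > 1)
    starts = np.r_[idx[0], idx[breaks + 1]]
    ends = np.r_[idx[breaks], idx[-1]]
    return list(zip(starts, ends))

def coxal_point(mask):
    H = mask.shape[0]
    y = int(0.72 * H)
    # move the scan line up within a 0.165H band until a single region is found
    for dy in range(int(0.165 * H)):
        runs = foreground_runs(mask[y - dy] > 0)
        if len(runs) == 1:
            s, e = runs[0]
            return (y - dy, (s + e) // 2)
    return (y, mask.shape[1] // 2)  # fallback: image centre column

def knee_points(mask, coxal):
    H = mask.shape[0]
    cy, cx = coxal
    r = 0.245 * H
    # sample the lower half of the circle; keep angles falling on the silhouette
    thetas = np.linspace(np.deg2rad(20), np.deg2rad(160), 180)
    hits = []
    for t in thetas:
        y, x = int(cy + r * np.sin(t)), int(cx + r * np.cos(t))
        if 0 <= y < H and 0 <= x < mask.shape[1] and mask[y, x]:
            hits.append((y, x, t))
    if not hits:
        return None
    # split hits into angular clusters; two clusters -> two knees, one -> overlap
    gaps = [i for i in range(1, len(hits))
            if hits[i][2] - hits[i - 1][2] > np.deg2rad(15)]
    clusters = np.split(np.array(hits), gaps) if gaps else [np.array(hits)]
    return [tuple(c[:, :2].mean(axis=0).astype(int)) for c in clusters]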
Area under the spline curve
As observed over a sequence of frames, the area under the limbs has a periodic temporal variation, just like the width vector of the silhouette. This area is found by constructing a spline curve and computing the area under the limbs enclosed by the curve. For constructing an interpolating curve from a set of points, there are three main possibilities: polynomial interpolation, Bézier curves and spline curves. All three methods produce polynomial curves as a linear combination of a set of basis polynomials. Our choice of spline curves is based on their properties, which allow us to design complex shapes with lower-degree polynomials compared to the other two methods. In Fig. 3, a B-spline curve of degree 3 and a Bézier curve of degree 10 are constructed for the same set of control points, and it is evident that the Bézier curve still cannot follow the polyline.
Since the degree of the interpolating curve constructed with splines is lower, the computational time, which is O(n²) with n the degree, is reduced considerably.
The interpolating spline curve has the human body joints as its control points, namely the coxal joint and the pairs of knee and ankle coordinates. A polygon is first constructed with the body joints as its vertices, and then a cubic spline curve is constructed with the joints as control points.
A spline is a piecewise-polynomial real function $S : [a,b] \to \mathbb{R}$, defined on an interval $[a,b]$ partitioned by knots $a = t_0 < t_1 < \cdots < t_k = b$. The restriction of $S$ to an interval $[t_i, t_{i+1}]$ is a polynomial $P_i$, so that

$$S(t) = P_i(t), \qquad t \in [t_i, t_{i+1}], \quad i = 0, 1, \ldots, k-1.$$

The highest order of the $P_i$ is known as the order of the spline curve, which in our case is 3. For a spline of order $n$, $S$ is required to be continuously differentiable to order $n-1$ at the points $t_i$, i.e. the derivatives $S^{(j)}$ must agree at the knots for all $i = 1, 2, \cdots, k-1$ and all $j \in [0, n-1]$. In our method of spline curve interpolation of the joints, we use the B-form of spline curves, which is a weighted sum with B-spline functions as the weights. The spline $f(t)$ is given by

$$f(t) = \sum_i c_i \, B_{i,d}(t),$$

where the function $B_{i,d}$ is called a B-spline of degree $d$, given by the recursive (Cox-de Boor) formula

$$B_{i,0}(t) = \begin{cases} 1, & t_i \le t < t_{i+1} \\ 0, & \text{otherwise} \end{cases} \qquad B_{i,d}(t) = \frac{t - t_i}{t_{i+d} - t_i}\, B_{i,d-1}(t) + \frac{t_{i+d+1} - t}{t_{i+d+1} - t_{i+1}}\, B_{i+1,d-1}(t).$$

Thus for each silhouette image we obtain the area under the constructed spline curve, and for $N$ training samples with $M$ images each, we create a feature matrix $A \in \mathbb{R}^{N \times M}$ whose entry $A_{n,m}$ is the area signal of the $m$-th image of the $n$-th sample. This matrix is considered for further processing, using the Discrete Cosine Transform (DCT) to describe the area feature better, followed by dimensional reduction using MSPCA.
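As an illustration, the following sketch computes the area feature for one silhouette with SciPy; it uses an interpolating cubic spline through the five joints (ordered by x-coordinate, assumed strictly increasing), which is a stand-in for the B-form construction described above. The example joint coordinates are made up.

import numpy as np
from scipy.interpolate import CubicSpline

def limb_area(joints):
    # joints: (x, y) image coordinates of left ankle, left knee, coxal,
    # right knee and right ankle (hypothetical ordering)
    pts = np.array(sorted(joints, key=lambda p: p[0]), dtype=float)
    x, y = pts[:, 0], pts[:, 1]          # x must be strictly increasing
    spline = CubicSpline(x, y)
    # area under the spline over the horizontal extent of the limbs
    return float(spline.integrate(x[0], x[-1]))

joints = [(10, 120), (25, 80), (45, 40), (65, 80), (80, 120)]
print(limb_area(joints))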
Multi-scale principal component analysis
The dimensionality of the feature matrix containing the area signals is very large and contains redundant information, so we adopt multi-scale principal component analysis (MSPCA) to find a transformation for dimensionality reduction. MSPCA was first proposed by Bakshi [10] for statistical process monitoring. MSPCA combines the ability of PCA to decorrelate the variables by extracting a linear relationship with the ability of wavelet analysis to extract deterministic features. MSPCA applies PCA to the wavelet coefficients at each scale to filter out the unwanted components. The essence of MSPCA is illustrated in Fig. 5 and Fig. 6.
Let W denote the discrete wavelet transform (DWT) operator, so the wavelet-coefficient matrix of the data Y is WY; applying the inverse DWT (IDWT) to the thresholded coefficients, the filtered matrix Ŷ can be reconstructed, where the thresholds τ_j at each scale j are defined as in [10]. Traditional PCA is then applied to Ŷ, the reconstructed coefficient matrix, to acquire the final feature matrix, which is fed to the classifiers for recognition in the following subsection.
Due to its multi-scale nature, MSPCA is appropriate for modelling data containing contributions from events whose behaviour changes over time and frequency. Process monitoring by MSPCA involves combining only those scales where significant events are detected, which is equivalent to adaptively filtering the scores and residuals and adjusting the detection limits for the easiest detection of deterministic changes in the measurements.
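A minimal sketch of the per-scale PCA idea, using PyWavelets and scikit-learn, is shown below; the wavelet family, decomposition level and number of retained components are illustrative assumptions, and the input data are random stand-ins.

import numpy as np
import pywt
from sklearn.decomposition import PCA

def mspca_features(X, wavelet="db4", level=3, n_components=4):
    # X: (n_samples, signal_length) feature matrix of area signals.
    # Returns the concatenated per-scale principal-component scores.
    coeffs = [pywt.wavedec(row, wavelet, level=level) for row in X]
    reduced = []
    for j in range(level + 1):  # one approximation band + `level` detail bands
        band = np.vstack([c[j] for c in coeffs])        # (n_samples, band_len)
        pca = PCA(n_components=min(n_components, band.shape[0], band.shape[1]))
        reduced.append(pca.fit_transform(band))
    return np.hstack(reduced)

X = np.random.default_rng(0).normal(size=(20, 64))
print(mspca_features(X).shape)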
Recognition
After the extraction of gait features and dimensional reduction, classification is done using two different classifiers, namely K-NN and Neuro-fuzzy. First we evaluate the proposed method using the Neuro-fuzzy classifier, as it is the main classification method adopted; we then compare the achieved results with those of the K-NN classifier. The gait feature matrix extracted using the proposed method is used to train the classifiers.
Gait database
In our experiments, we used the CASIA Gait Database, which is currently one of the largest gait databases in the gait-research community; we tested the algorithm on it due to its completeness and wide availability. • CASIA Dataset-B: The database consists of 124 subjects (93 males and 31 females) captured from 11 view angles (ranging from 0 to 180 degrees, with a view-angle interval of 18 degrees). The frame size is 320 × 240 pixels, and the frame rate is 25 fps. There are 10 walking sequences for each subject per view. We use the gait sequences numbered from 001 to 124 (subject IDs, i.e., 124 subjects) of view angle 90 degrees in Dataset B to carry out our experiments.
Out of the 10 samples chosen for each subject, 2 samples have images with the subject carrying a bag, and 2 have the subject wearing a coat.
• CASIA Dataset-C: The infrared CASIA-C dataset was chosen to evaluate the performance of the proposed algorithm. It contains 153 subjects and takes into account four walking conditions, namely normal walking, slow walking, fast walking and normal walking with a bag. Each subject has 10 sequences: 4 normal walking (fn), 2 slow walking (fs), 2 fast walking (fq) and 2 normal walking carrying a bag (fb). The length of each sequence varies with the pace of walking.
Experimental Results on CASIA datasets A,B
The proposed feature matrix is transformed by applying the Discrete Cosine Transform (DCT) and then reduced in dimensionality using the proposed MSPCA method. The first implementation of this method is on CASIA dataset-A, considering 4 samples for each of the 20 subjects. Of the 4 samples chosen, 3 are fed to the classifier for training and 1 is set aside for testing. Cumulative match scores (CMS) are used to assess the performance quantitatively: a CMS value δ at rank r indicates that for a fraction 100·δ% of the probes, the true identity is among the top r matches.
Unlike the Neuro-fuzzy classifier, which uses membership functions extracted from the data set describing the system, K-NN uses Euclidean distances as the measurement parameter in classifying the data. The test results of the K-NN classifier are enumerated in Table 1.
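The following sketch shows how such a K-NN evaluation and the rank-r CMS can be computed with scikit-learn and NumPy; the ranking here is over gallery samples (a simplification — ranking over distinct identities is also common), and the function arguments are hypothetical.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cms_curve(train_X, train_y, test_X, test_y, max_rank=10):
    # Rank-r cumulative match scores from nearest-neighbour distances
    knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean")
    knn.fit(train_X, train_y)
    # rank all gallery samples by distance to each probe
    dists, idx = knn.kneighbors(test_X, n_neighbors=len(train_X))
    ranked_labels = np.asarray(train_y)[idx]           # (n_probes, n_gallery)
    cms = []
    for r in range(1, max_rank + 1):
        hit = [test_y[i] in ranked_labels[i, :r] for i in range(len(test_y))]
        cms.append(np.mean(hit))
    return np.array(cms)   # cms[r-1] = fraction of probes matched within rank r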
Four different methods of testing are adopted to compute the accuracies: directly using the Neuro-fuzzy classifier on the feature matrix and on the cosine-transform coefficient matrix; and using the Neuro-fuzzy classifier on the feature matrix and on the cosine-transform coefficient matrix after dimensional reduction with the proposed MSPCA method. The results are shown in Table 2.
To test the robustness of our proposed feature extraction method, we also test the algorithm's performance with the K-NN classifier, adopting testing strategies similar to those used for the Neuro-fuzzy classifier.
The above results demonstrate the robustness of our method to changes in the direction of motion of the subject in dataset-A. Our method of modelling the lower limbs using spline curves was found to be very effective, even though the direction of the subject's motion changed in two samples. The best accuracy of 95% is promising, and the method itself is quite feasible for recognition. Two different strategies are used to test the proposed algorithm on CASIA dataset-B: the Discrete Cosine Transform is applied to the feature matrix and the acquired coefficient matrix undergoes training and testing with the Neuro-fuzzy classifier; and the cosine-transform coefficient matrix is reduced in dimension by MSPCA and then fed to the Neuro-fuzzy classifier.
The results on dataset-B presented in Table 3 are convincing even when covariate features are considered, in which the subject is either carrying a bag or wearing a bulky coat.
This shows that our method is robust to these covariate features, and the best accuracy of 91.2% is in itself quite feasible for recognition, considering that covariate features are taken into account.
The consistent CMS for all 6 sets of a subject shows that our method is robust to the covariate features of CASIA dataset-B. The best accuracy of 97.1% CMS was obtained for set 7, in which the subject carries a bag, strengthening the claim of our algorithm's robustness to covariate features. The chosen feature, the area under the limbs, is clearly insensitive to the subject wearing a bulky coat or carrying a bag, which makes it much more effective and reliable for recognition.
From the CMS curves plotted, it has been observed that high accuracy is achieved for modest values of the rank of the K-NN classifier used. This ensures higher confidence in the classification of the subjects and moderates the error percentage.

Table 4: Comparison of recognition rates
Recognition method | Best CCR (%)
Su-li [11]         | 89.7
Chen [12]          | 95.2
Proposed method    | 97.1
Comparison
In this section we compare the performance of the proposed method with two recently proposed methods. Table 4 presents the gait-based recognition rates of various recently proposed algorithms. Su-li et al. [11] proposed a feature extraction method based on fuzzy principal component analysis; they used CASIA dataset-A with 20 subjects under consideration. Chen et al. [12] proposed a method based on the frame-difference energy image; they performed experiments on the CMU MoBo gait database and on CASIA dataset-B with 100 subjects under consideration. Note that the numerical accuracies for these two techniques are obtained from CMS curves.
It is evident from the table that the proposed algorithm outperforms the other algorithms in terms of cumulative match scores (CMS). The best CMS of 97.1% was obtained on CASIA dataset-B, with 124 subjects under consideration. Our experimental results show that the MSPCA method performs fairly well even with complications such as the carrying of covariate objects, which is our key interest.
Dynamic time Warping
Dynamic time warping is an algorithm for finding an optimal match between two sequences that vary in time or speed. Similarities in walking patterns are detected even if in one video the person is walking slowly and in another more quickly, which makes it ideal for gait recognition under varying walking speed. A detailed analysis of the dynamic time warping algorithm can be seen in [13].
Dynamic time warping remains very effective even when the sampling rates of two different video sequences differ. This classification method, coupled with the multi-scale principal component analysis method, was found to have a considerable impact in enhancing the accuracy of recognition.
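A minimal NumPy implementation of the classic DTW dynamic program is sketched below; the per-frame distance is taken to be the Euclidean distance between feature vectors, which is an assumption rather than a detail given in the paper.

import numpy as np

def dtw_distance(seq_a, seq_b):
    # Accumulated DTW distance between two sequences of frame feature vectors
    # seq_a: (n, d) array, seq_b: (m, d) array
    n, m = len(seq_a), len(seq_b)
    # pairwise Euclidean frame distances
    cost = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=-1)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # each cell accumulates its cost from the cheapest predecessor
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]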
Experimental Results on CASIA-C dataset
Another covariate feature that often leads to devious results in gait recognition is the walking speed of the subject. To counter this problem we have adopted DTW (Dynamic Time Warping), as mentioned earlier.
[Figure: DTW alignment of an input sequence against a template — each cell's DTW distance is calculated from the optimal predecessor, i.e. the predecessor with the smallest DTW distance, tracing out the optimal path.]
We perform the experiments with probe sequences from the CASIA-C dataset, where the subject's motion is parallel to the image plane. We use the dynamic time warping method to find the optimal-path distance between the probe sequence and the reference sequence.
Using DTW, the distances between each frame of the probe sequence and the reference sequence are computed, and the total distance is defined as the accumulated distance along the optimal distance path, also termed the optimal warping path. This distance is used as a metric to compare the similarity between the probe sequence and the gallery sequence.
Having computed the distances between the probe and the reference sequences and established the similarity measure, the best-match decision is taken on the basis of

$$j^{*} = \arg\min_{j} D_{ij},$$

where $D_{ij}$ is the accumulated distance for the $i$-th probe sequence and the $j$-th reference sequence. This means the best match to the test sequence is taken to be the reference sequence with the least distance (Fig. 9 and Fig. 10).
After the distance between the two sequences is calculated and the similarity measure is established, a threshold value has to be selected so that sequences with a distance lower than the threshold are ACCEPTED and those with a value higher than the threshold are REJECTED.
For training, 3 sequences from each of 50 subjects were considered to determine the rejection threshold: these sequences were used as the enrollment set, while 3 other sequences from those subjects and all the sequences of the remaining 103 subjects were taken as probes to establish the FAR (false acceptance rate) and FRR (false rejection rate). The mean FAR and FRR were determined, and the rejection threshold was selected at the EER (equal error rate). In the test phase, the previously defined threshold is used to decide whether the probe sequence matches the reference. In addition, cumulative match scores (CMS) are used to assess the performance quantitatively, as in [14]. The CMS value δ at rank r indicates that for a fraction 100·δ% of the probes, the true identity is among the top r matches. The performance percentages presented in Table 5 and Table 6 are rank-1 CMS values acquired in each test case, meaning that the sequence closest to the probe sequence is selected from the gallery. As seen in Fig. 9 and Fig. 10, the optimal path in the spectrogram for two similar sequences is almost linear, as opposed to the irregular path for two different sequences.
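The threshold selection described above can be sketched as follows: sweep candidate thresholds over genuine (same-subject) and impostor (different-subject) DTW distances and pick the point where FAR and FRR are closest; the two input arrays of scores are hypothetical.

import numpy as np

def eer_threshold(genuine, impostor):
    # genuine: DTW distances for same-subject probe/reference pairs
    # impostor: DTW distances for different-subject pairs
    candidates = np.sort(np.concatenate([genuine, impostor]))
    best = None
    for t in candidates:
        far = np.mean(impostor < t)   # impostors wrongly accepted
        frr = np.mean(genuine >= t)   # genuine pairs wrongly rejected
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), t, far, frr)
    _, threshold, far, frr = best
    return threshold, far, frr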
Experiments were performed with different pairs of sequences under consideration, one of each type from fs, fq and fn as gallery and probe sequences, to evaluate the cross-speed gait recognition performance of the proposed algorithm; the results are shown in Table 5. The proposed gait recognition algorithm achieves high accuracy in the within-walking-condition tests. For cross-speed walking conditions, only tests D, F and G achieve good accuracies, because the normal walking sequences are still a close match to the fast and slow walking sequences. Moderate accuracies are achieved in test cases E, H and I, where the important factor, walking speed, comes into the picture and can significantly vary the walking patterns. The proposed algorithm could still achieve fairly good results in the varying-walking-speed condition tests. The CMS curves for the above scenarios are shown in Fig. 11. The proposed gait recognition algorithm was also tested on the remaining two sequence types, with the subject carrying a bag. The results of the evaluation involving the fb sequences are shown in Table 6. High accuracies are achieved in all four tests J, K, L and M, in which one of the sequences has the subject carrying a bag. The CMS curves for the above scenarios are shown in Fig. 12.
The fairly good results show that the proposed method of spline curve modelling is insensitive to the carrying of covariate objects, as it involves modelling only the lower limbs. The promising results of the cross-speed comparison between fq and fb confirm that the method is invariant to speed variations as well. Table 7 shows the comparison of the proposed method with the Tan [14] and WBP [15] approaches on the CASIA-C database. Note that the numerical accuracies for these two techniques are obtained from CMS curves. For completeness, the FAR and FRR values were evaluated as 2.37% and 3.07%, respectively. Although the proposed algorithm is on par with the other two methods in the first two cases, it significantly outperforms them in the last two cases. The last case, with the subject carrying a bag, shows high accuracy compared with the other two methods, confirming that the proposed method is insensitive to the carrying of covariate objects.
Conclusion
In this paper, we propose a novel method for gait recognition based on modelling the limbs with spline curves. The area signals obtained after feature matrix construction are compared. With the help of MSPCA, the components of the feature matrix are projected into a lower-dimensional space; MSPCA retains the information of the original data better than traditional PCA, even when the data sequence changes over time or frequency. Neuro-fuzzy and K-NN classifiers are used for classification of the feature vectors of subjects from CASIA datasets A and B. DTW is adopted to classify the subjects of CASIA dataset C, so as to reduce the sensitivity of recognition to variations in walking speed. The experimental results demonstrate the insensitivity of our method to covariate features such as the subject's walking speed, or the subject carrying a bag or wearing a thick coat.
Reducing the sensitivity of gait recognition to the above-mentioned covariate features was the preeminent concern of the proposed method. The decent results obtained on a large database like CASIA, with covariate features included, confirm the feasibility of our method. | 2019-04-18T13:10:39.823Z | 2014-01-25T00:00:00.000 | {
"year": 2014,
"sha1": "ebc41f02f323e9fc84b480dac8932160b0879484",
"oa_license": "CCBY",
"oa_url": "https://www2.ia-engineers.org/Journal_E/index.php/jiiae/article/download/23/63",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "51a8eb4f0e21e1c3b9d9135a01031ad7ae170859",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
235966743 | pes2o/s2orc | v3-fos-license | A cross-sectional survey of stigma towards people with a mental illness in the general public. The role of employment, domestic noise disturbance and age
Introduction: Stigmatization impedes the social integration of persons recovering from mental illnesses. Little is known about characteristics of the stigmatized person that lessen or aggravate public stigma.
Purpose: This study investigates which characteristics of persons with mental illnesses (i.e. with a depression or a psychotic disorder) might increase or decrease the likelihood of public stigma.
Methods: Over 2,000 adults read one of sixteen vignettes describing a person with a depressive disorder or a psychotic disorder and answered a set of items measuring social distance.
Results: The person who was employed (vs. unemployed), or whose neighbors did not experience domestic noise disturbance (vs. disturbance), elicited significantly less social distance. Persons with a depressive disorder also elicited less social distance than persons with a psychotic disorder.
Conclusion: Employment and good housing circumstances may destigmatize persons coping with mental illnesses. Mental health and social services should encourage paid employment, quality housing and other paths to community integration.
Introduction
Stigma is a major concern for people living with a mental illness. The term stigma is applied when the following elements co-occur: (a) a distinction of labelling of human differences is made (such as skin colour, but also receiving mental health treatment), (b) dominant cultural beliefs link labelled people to undesirable characteristics, (c) labelled people are placed in categories ('them' and not 'us'), and (d) labelled people experience status loss and discrimination [1]. Furthermore, stigmatization is contingent on access to social, economic, and political power that allows the four above components to occur [1]. The general public's negative beliefs and behaviours are known as public stigma and can contain several beliefs and behaviours as seeing and treating people with mental illness as unintelligent, incapable, dangerous, and blaming or shaming them for their illness [2,3].
International research shows that public stigma has an adverse impact on life opportunities of people with a mental illness. It is associated with diminished quality of life, social isolation, self-stigma, symptom exacerbation and relapse [4][5][6][7]. Furthermore, (anticipated) negative beliefs, exclusion or discrimination may act as a barrier in treatment seeking and for optimal health care for people with mental illnesses [7,8]. Almost 80% of the people with depression report discrimination on one or more life domains [9], around two-thirds of the people with schizophrenia feel forced to (selectively) hide their diagnosis 1 3 [10], and a similar proportion anticipates negative discrimination in applying for work, training or education [10].
In order to target and design interventions aimed at the reduction of stigma, it is important to know which living conditions and which characteristics of people with mental illness play a role in stigmatisation [11,12]. The general picture is that public beliefs and opinions vary over different mental illnesses, with a gradient in rejection depending on the type of the mental illness, where the percentage of respondents endorsing stigmatizing responses generally increases from depression to schizophrenia to alcohol dependence, and finally, to drug dependence [13,14]. For instance, in a household survey in 2006 in the United States, 74% of the public expressed an unwillingness to work with the individual described in a vignette when it concerned someone with alcohol dependence, against 62% and 47% when the individual in the vignette described had schizophrenia or depression, respectively [15]. Similar patterns are found in more recent studies in different countries, where depicted individuals with schizophrenia elicited more stigmatizing attitudes than individuals with depression [14,16,17], and individuals with alcoholism more than individuals with schizophrenia [13,18]. In addition, familiarity or contact with someone with a mental illness is associated with more positive responses toward people with a mental illness [19][20][21]. The latter is known to mitigate stigmatizing attitudes due to more knowledge and experience [20,22]. Having more knowledge on mental health and mental illness is associated with less stigmatizing attitudes towards people with a mental illness [23][24][25].
Less is known about which characteristics (other than psychiatric diagnosis) or living conditions of people with mental illnesses might affect the likelihood of public stigma. Perkins et al. [26] showed that people with mental illnesses who are employed elicit less exclusive attitudes than unemployed people. This is both important and a paradox, since stigma is also a serious problem for obtaining and keeping a job in the case of a mental illness [27,28]. This suggests that effective interventions targeting employment of people with a mental illness, like Individual Placement and Support [29,30], can have a destigmatizing side-effect, thus further promoting recovery. It would therefore be useful to see whether there are other characteristics of people with a mental illness that might increase or decrease the likelihood of public stigma, and whether these characteristics interact with the diagnosis of the potentially stigmatized individual.
Next to employment, we focus in this study on the themes of youth and housing. Youth is of high importance since mental health problems and mental health stigma can emerge at a young age, and the consequences can therefore be drastic [31]. Whether young people face different or more stigma than adults is not known: research on differences in the nature or level of stigmatizing attitudes towards younger versus older people is scarce, and results are mixed. Speerforck [32] found that the reactions of pity and sympathy were endorsed by significantly more respondents after reading a vignette describing a child with Attention Deficit and Hyperactivity Disorder (ADHD) than after a vignette describing an adult with ADHD. However, for other emotional reactions, such as annoyance or anger, no differences between the vignettes were found. Another study, investigating public stigma towards people with a depression using two different vignettes, found more stigmatizing attitudes towards a depicted younger individual of 25 years old than towards an individual of 71 years old [33].
A focus on stigma regarding housing and communities is important, since inclusion of people with a mental illness in their communities contributes to social support, participation and recovery, and stigma and exclusion often emerge in the communities where people with a mental illness live [34][35][36]. People with mental illness often live in substandard accommodations that are crowded, noisy and located in undesirable neighborhoods [37,38]. On the one hand, appropriate housing facilities improve the sense of belonging to the neighborhood; on the other hand, in poor-quality neighborhoods more fear of and stigma towards people with mental illness is present [39]. Given that adequate housing, neighborhood order and social cohesion are positively associated with mental health, we are interested in the influence of neighborhood nuisance on stigma [40,41]. The priority of these themes is acknowledged by the national knowledge consortium on destigmatization in the Netherlands, established in the spring of 2018 [42].
For this study, we translated the themes of employment, youth and housing into characteristics of an individual with a mental illness: (a) being gainfully employed or not, (b) being younger or middle-aged, and (c) being the source of domestic noise disturbance or not. Knowing more about the social distance associated with such characteristics can serve as input for developing or stimulating programs aimed at employment, youth and good-quality housing, to further empower people with a mental illness. We chose to assess social distance as a measure of stigma because it is seen as one of the core components of stigma and is commonly used for the assessment of the concept [1,43].
Our hypotheses are that:
1. An individual with a mental illness who is actively engaged in gainful employment will elicit less social distance than an individual with a mental illness who is unemployed;
2. An individual with a mental illness whose neighbors experience no domestic noise disturbance will elicit less social distance than an individual with a mental illness whose neighbors do experience domestic noise disturbance;
3. A young individual with a mental illness will elicit less social distance than a middle-aged individual with a mental illness;
4. An individual with a depressive disorder will elicit less social distance than an individual with a psychotic disorder;
5. An interaction effect is expected between type of disorder on the one hand and unemployment, domestic noise disturbance and older age on the other, with the latter characteristics having a stronger negative effect on social distance for a psychotic disorder.
Design, participants and procedure
The study employed a cross-sectional, population-based design. Participants were Dutch citizens recruited from the CentERpanel, a panel set up in 1993 and maintained by CentERdata, a Dutch research institute specialized in data collection [44]. The panel is designed to offer an accurate reflection of the Dutch-speaking population. In general, the panel is representative along various dimensions, although small exceptions exist with respect to education (overrepresentation of the upper echelons and underrepresentation of the middle level), household composition (underrepresentation of single households), urbanization (underrepresentation of people living in highly urbanized settings) and non-western foreigners (strong underrepresentation, on account of language problems and of strong concentration in urban areas) [44]. In January 2018, the 3209 active panel members received a questionnaire. One week after the initial questionnaire invitation, a reminder to complete the questionnaire was sent; one week later, data collection was closed. Respondents completed the questionnaire online, via a secured internet connection on their home computers. Eventually, 2388 panel members started the questionnaire and 12 of them did not complete the procedure, leaving 2376 (74%) questionnaires eligible for analysis. The mean age of the respondents was 54.6 (SD = 16.6), ranging from 16 to 94 years; 52% of the respondents were male.
Measurements
The online questionnaire contained one of sixteen randomly assigned vignettes describing a fictional male (called Jeroen, a very common name in the Netherlands) diagnosed with a mental illness and living in the community of a small-sized city. As the city we chose Nieuwegein, a typical Dutch city in the center of the country (similar to Muncie, known from the Middletown Study [45]). To create the descriptions we adapted and extended the vignettes depicting a male with schizophrenia used by Perkins et al. [26]. The sixteen vignettes each contained one of two levels of each of four variables: (1) diagnosis (a depressive disorder or a psychotic disorder), (2) age (19 years old or 40 years old), (3) causing domestic noise disturbance (present or absent), and (4) employment (being employed or not). The vignettes were around 175 words in length (see "Box 1" for an example). As we were mainly focused on the effect of the three variables in the presence of a mental illness, and on the interactions between these, a control vignette for diagnosis (depicting an individual without a mental illness) was omitted.
After reading the vignette, respondents indicated on the Social Distance Scale (SDS) [21,46] how willing they would be for Jeroen to (1) move next door to them, (2) spend an evening socializing with them, (3) make friends with them, (4) start working closely with them as a colleague, and (5) marry into their family. Social distance was rated on a five-point scale, with 1 representing 'definitely not' and 5 representing 'definitely'; a middle category was offered as well, with a score of 3 representing 'maybe'. A total social distance score was calculated by adding the reverse-coded answers on the 5-point scale for the five distance levels, resulting in scores between 5 (no or very little social distance) and 25 (much social distance). From earlier research, the SDS is known to have good internal consistency [47].
To evaluate the effectiveness of the manipulation in the vignettes, respondents also evaluated Jeroen's propensity for violence and his contribution to his community on a five-point scale, with 1 representing 'very unlikely' and 5 representing 'very likely'. It was expected that Jeroen causing domestic noise disturbance would be evaluated as more prone to violence (compared to Jeroen not causing domestic noise disturbance), and that Jeroen being unemployed would be seen as less likely to contribute to his community (compared to Jeroen being employed).
In addition, respondents' gender, level of contact with people with a mental illness, and level of mental health literacy were assessed and included as covariates in the analyses. Gender was assessed as a standard variable in the CentERpanel. Level of contact was assessed with the Level of Contact Report (LCR), containing seven levels ranging from having 'no contact' with an individual with a mental illness to 'I do (or did) have a mental illness myself' [19,48]. Mental health knowledge was assessed with the MAKS (Mental Health Knowledge Schedule), a 12-item questionnaire containing six stigma-related mental health knowledge areas (help seeking, recognition, support, employment, treatment, and recovery) and six items that inquire about knowledge of mental illness conditions. Response categories vary between (1) 'strongly disagree' and (5) 'strongly agree'; total scores range between 12 and 60, with a higher score indicating more mental health knowledge. Although earlier research showed that the overall internal consistency of the MAKS was moderate [49], to our knowledge no other short instrument covering mental health knowledge was available.
Box 1
Example of a vignette of Jeroen (middle-aged, the source of domestic noise disturbance, depressive disorder, unemployed). Italics indicate text that differs across the levels of the four variables.
Jeroen is a 40 year old man and lives in an apartment in Nieuwegein. After finishing school he started working as a logistic employee. After a few years he started to feel down, often for longer times. He had no appetite and lost quite some weight. His ability to concentrate on daily activities disappeared, as well as the energy to undertake any outings with his girlfriend. When he lost his job, he felt even more useless and he started suffering from a feeling of guilt and insomnia. Jeroen is often awake at night. Sometimes the neighbors complain about noise disturbance. A year ago his brother convinced him to seek help at a local mental health organization, where he was diagnosed with a depressive disorder. He has been taking medication ever since, next to group therapy. At this moment, Jeroen is unemployed and often visits the library to read some magazines.
Analyses
Descriptive statistics were analyzed with frequencies and means. To evaluate the effectiveness of the manipulation in the vignettes, t tests were performed. Gender, level of contact (LCR) and mental health knowledge (MAKS) of the respondents were included as covariates, and their univariate associations with the outcome (the aggregated SDS score) were analyzed with a one-way ANOVA and bivariate Pearson correlations. To test the five hypotheses, a four-way ANOVA was performed, with age, employment, domestic noise disturbance and diagnosis as main effects; gender and the LCR and MAKS scores were added as covariates in the four-way ANOVA. Bivariate correlations between, and means, percentages (for the different categories) and effect sizes of, the covariates and study variables are shown in Table 1. The distribution of the covariates over the vignettes showed no differences for the MAKS score and gender. For the LCR score, differences were found [F(15, 2360) = 2.50, p < 0.01]: as can be seen in Table 1, respondents who read the vignettes in which Jeroen has a psychosis, is associated with noise disturbance and is 40 years of age have lower mean LCR scores than respondents who read vignettes with the other level of these variables.
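As an illustration of this analysis, the following is a minimal sketch of the four-way ANOVA with covariates using statsmodels; the data frame and its column names (sds, diagnosis, employed, noise, age_v, gender, lcr, maks) are hypothetical stand-ins for the study variables, and the data are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data with the study's design factors
rng = np.random.default_rng(1)
n = 2376
df = pd.DataFrame({
    "diagnosis": rng.choice(["depression", "psychosis"], n),
    "employed": rng.choice([0, 1], n),
    "noise": rng.choice([0, 1], n),
    "age_v": rng.choice([19, 40], n),
    "gender": rng.choice(["m", "f"], n),
    "lcr": rng.integers(1, 8, n),
    "maks": rng.integers(12, 61, n),
})
df["sds"] = rng.normal(15.3, 3.9, n)  # aggregated SDS score (range 5-25 in the study)

model = smf.ols(
    "sds ~ C(diagnosis) * C(employed) * C(noise) * C(age_v)"
    " + C(gender) + lcr + maks",
    data=df,
).fit()
print(anova_lm(model, typ=2))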
Results
The mean SDS scores for all vignettes are shown in Fig. 1. The aggregated mean SDS score, covering all levels of the four variables, was 15.3 (SD = 3.89). Regarding covariates, mean SDS scores were lower for females, for respondents with a closer level of contact according to the LCR, and for respondents with a higher MAKS score (see Table 1).
The four-way ANOVA without covariates yielded main effects for employment (F(1, 2360) = 47.65, p < 0.01). Table 1 shows results for the four-way ANOVA with covariates, with main effects remaining similar and effect sizes to be classified as small [50]. Explained variance of this model was 12% (R^2 = 0.121). Jeroen being employed elicited less social distance, as did Jeroen not being associated with domestic noise disturbance and having a depressive disorder. The vignette in which Jeroen has a depressive disorder, is employed, is not associated with domestic noise disturbance and is 19 years old elicited the lowest SDS-score (M = 13.8, SD = 4.02), whereas the vignette in which Jeroen has a psychotic disorder, is unemployed, is associated with domestic noise disturbance and is 40 years of age elicited the highest SDS-score (M = 16.8, SD = 3.66). Translated to actual answers of respondents, this means that 33.3% of the respondents answered 'no' to the question 'Would you like Jeroen to spend an evening socializing with you?' for Jeroen, 40 years of age, with a psychotic disorder, who is unemployed and associated with domestic noise disturbance, whereas 17.5% of the respondents answered similarly for Jeroen with a depressive disorder, who is employed, not associated with domestic noise disturbance and 19 years old. For the willingness to have Jeroen marry into the family, the differences were sharper: 68.0% 'no' vs. 32.4% 'no' for the two vignettes described above, respectively.
Discussion
Unemployment and an association with domestic noise disturbance of a fictional individual with a mental illness were independently associated with increased stigma, as measured with the SDS. Also, in line with other research, stigma was stronger for a psychotic disorder than for a depressive disorder [13]. Age of the fictional individual was not associated with the level of stigma. Contrary to our hypothesis, unemployment and domestic noise disturbance did not interact with the type of mental illness, indicating that the stigmatizing effects of these characteristics are of similar strength for people with a depressive disorder and people with a psychotic disorder. Furthermore, level of contact and mental health knowledge were negatively associated with stigma.
Social distance was about the same for Jeroen with a psychotic disorder, with domestic noise disturbance and employment, as for Jeroen with a depressive disorder (and domestic noise disturbance) without employment. Similar patterns were found in the study of Perkins [26], suggesting that one single characteristic can mitigate a stereotype that people may hold of people with a mental illness.
A strength of the study is the high response rate on the questionnaire, resulting in a relatively accurate reflection of the Dutch-speaking population in the Netherlands and sufficient power to study subgroups or correct for other associations (as we did in adding gender, level of contact and mental health knowledge as covariates). At the same time, this large sample size has the disadvantage that even very small differences or effects reach significance very easily, as is the case in this study. An additional limitation of the study is the absence of a 'control' vignette, i.e., a situation in which the fictional persona has no mental illness. This prevents drawing conclusions about an absolute effect of characteristics on stigma. Furthermore, we used the MAKS as a unidimensional scale for mental health knowledge. As indicated, the reliability of this scale was modest, and therefore results based on this use of the MAKS should be interpreted with caution. More research and attention to the use of the MAKS to assess mental health knowledge is warranted. Lastly, the reader should be aware that the variance explained by the model was a modest 12%, and effect sizes were (very) small. This means that offering additional, potentially destigmatizing information about people with mental illnesses only slightly alters someone's perception. For the effect of noise disturbance, the practical relevance is questionable, since its effect size neared 0%.
Our results suggest that supporting clients in getting and keeping gainful employment can have a positive effect on the process of destigmatisation and social inclusion of people with a mental illness, by directly reducing the negative perceptions held by the general public. We also showed that this is independent of the diagnosis of the potentially stigmatized individual: it can be equally effective for people with diagnoses that differ in the strength of stigmatization. Finally, we showed that this effect is independent of the general public's gender, level of contact and mental health knowledge.
These findings underline the importance, added value and paradox-solving potential of methods like Individual Placement and Support (IPS), which is effective in helping people with severe mental illnesses find competitive employment and is implemented in many countries [30,51,52].
As mentioned, the practical relevance of intervening in noise disturbance seems questionable given the small effect size in this study. However, the SDS contains just one question touching the topic of 'living next door'. The other questions in the scale imply closer contact, where the element of 'noise disturbance' might be less relevant. This might have given the effect of noise disturbance relatively little opportunity to be expressed in a total score on the SDS. Further research should reveal whether this remains the case. Additional implications for further research involve investigating the generalizability of these findings toward other diagnoses, like drug or alcohol dependence. This is important since, in general and in this study as well, the gradient of desired social distance follows 'the more intimate the setting, the more likely the desired social distance', but the gradient is not neat: for drug and alcohol dependence, 'living next door' produced a greater stigmatizing response than 'friendship' in social distance scale terms [13]. Also, investigating the generalizability of the current findings toward other samples in other countries would be of interest, although the concordance with the results of Perkins' study [26] suggests that the current results are applicable to more settings, as does the finding that, regardless of the presence of a mental illness, unemployment elicits more negative attitudes [53,54]. Next, although the MAKS was included in this study only as a covariate, its design, scoring and scaling as a measure of mental health knowledge should be the subject of further research, especially because improving knowledge can be a relatively feasible and successful method to lessen public stigma, as the effect size in this study indicates as well. This calls for more research on the level of mental health knowledge and on the correlates of mental health knowledge and stigma, which appear to be negatively associated (the more knowledge, the less stigma) [24]. Lastly, the absence of a 'control' vignette (the situation in which the fictional individual has no mental illness) in this study calls for a study that includes one, allowing conclusions about an absolute effect of characteristics on social distance.
Author contributions SCCO designed the study, compiled the survey, coordinated data collection, performed analyses and wrote the paper. MES designed the study, compiled the survey, coordinated data collection, performed preliminary analyses and revised the paper. JvW designed the study, provided critical revision of the compiled survey and co-wrote the paper.
Funding The study was funded by the Phrenos center of expertise, Utrecht, the Netherlands.
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "71ab9ec55dfd30e0ac71188825d9e72235e85545",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00127-021-02111-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "71ab9ec55dfd30e0ac71188825d9e72235e85545",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Salpeter plasma correction for solar fusion reactions
We review five different derivations that demonstrate that the Salpeter formula for the plasma corrections to fusion rates is valid at the center of the sun with insignificant errors (~1 percent). We point out errors in several recent papers that have obtained a variety of answers, some even with the wrong sign or the wrong functional dependence.
Introduction
The plasma in the core of the sun is sufficiently dense that non-ideal gas corrections to nuclear reaction rates are significant. The plasma coupling parameter is g = e^2/(DkT), where D is the Debye length and T the temperature of the plasma. This parameter is the ratio of the Coulomb potential energy for two particles a Debye length apart to the kinetic energy in the plasma. Near the center of the sun, g ≃ 0.04.
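As a numerical check of this value, a short sketch in Gaussian-cgs units (the solar-core temperature and Debye length below are round illustrative values assumed here, not taken from a specific solar model):

```python
# Plasma coupling parameter g = e^2 / (D * k * T), Gaussian-cgs units.
e = 4.803e-10   # electron charge, statcoulomb
k = 1.381e-16   # Boltzmann constant, erg/K
T = 1.55e7      # K, approximate solar-core temperature (assumed)
D = 2.6e-9      # cm, approximate solar-core Debye length (assumed)

g = e**2 / (D * k * T)
print(f"g = {g:.3f}")  # ~0.04, consistent with the value quoted in the text
```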
Recently, there have been a number of papers (Carraro, Schäfer, & Koonin 1988; Shaviv & Shaviv 1996, 2000; Savchenko 1999; Tsytovich 2000; Opher & Opher 2000a, 2000b; Lavagno & Quarati 2000) suggesting that the standard screening corrections, originally derived by Salpeter (1954), need to be replaced by some other plasma physics correction, and that moreover the changes could lead to substantial improvements in the standard solar model and the predicted solar neutrino fluxes.
The motivation for many of these papers is to 'solve the solar neutrino problem' without invoking new weak interaction physics, such as neutrino oscillations. However, the results of solar neutrino experiments cannot be accounted for in this manner even if one goes to the extreme limit of treating the nuclear reaction rates as free parameters (see, e.g., Bahcall, Krastev & Smirnov 1998; Hata, Bludman, & Langacker 1994; and Heeger & Robertson 1996). Some distortion of the energy spectrum of electron type neutrinos is required.
The purpose of this paper is to highlight the compelling evidence for the Salpeter screening formula under the conditions that are relevant at the center of the sun, i.e., in the limit of weak screening. Our goal is to show that a necessary (but not sufficient) condition for the validity of a screening calculation is that the calculation must yield the Salpeter result in the limit when g is very small. We also point out errors in some of the recent treatments of screening. The raison d'être for our paper is the requests that we have had from colleagues for a written response to the numerous papers claiming large new effects (all different) in the calculation of solar fusion rates.
We summarize in § 2 the results of five different derivations that all yield the Salpeter formula for screening. In § 3, we describe briefly the flaws that lead to five different, non-Salpeter screening formulae. We summarize our principal conclusions in § 4.
Salpeter electrostatic derivation
As shown by Salpeter (1954), fusion rates are enhanced by electrostatic screening. Here is the physical plausibility argument used by Salpeter.
If one of the fusing ions has charge Z_1 e, it creates an electrostatic potential φ = (Z_1 e/r) exp(−r/D), where r is the distance from the ion, and D is the Debye radius.
For r ≪ D, φ = Z_1 e/r − Z_1 e/D is the Coulomb potential minus a constant potential drop.
This potential drop increases the concentration of ions Z_2 in the neighborhood of Z_1 by the Boltzmann factor exp(Z_1 Z_2 e^2/(DkT)), which enhances the reaction rate by

f_0 = exp(Z_1 Z_2 e^2/(DkT)).  (1)

Equation (1) is the Salpeter formula. According to Salpeter, the quantity f_0 is the ratio of the true reaction rate to the reaction rate calculated using the ideal gas formula.
Salpeter's derivation makes physically clear that electrostatic screening causes an enhancement in the density of fusing partners by lowering the potential in the vicinity of a fusing ion. We shall come back to this physical argument in § 3.5.
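To gauge the size of this enhancement, a quick evaluation of Eq. (1) at the solar-core coupling g ≃ 0.04 quoted in the Introduction (the charge pairs are chosen for illustration):

```python
import math

def salpeter_f0(Z1, Z2, g=0.04):
    """Weak-screening enhancement f0 = exp(Z1*Z2*e^2/(D*k*T)) = exp(g*Z1*Z2)."""
    return math.exp(g * Z1 * Z2)

print(round(salpeter_f0(1, 1), 3))  # p + p:   ~1.041, a ~4% enhancement
print(round(salpeter_f0(1, 4), 3))  # p + 7Be: ~1.174, larger for higher charges
```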
WKB derivation
The correction due to screening can be derived by calculating the barrier penetration factor in the presence of a plasma. Bahcall, Chen, & Kamionkowski (1998) evaluated the barrier penetration for a Debye-Huckel plasma and showed that the usual Gamow penetration factor, e^(−2πη), is replaced by

Γ(E) = e^(−2πη) e^(πηx),  (2)

where x = x(E) = R_c/D. Here, D is again the Debye-Huckel radius and R_c is the classical turning-point radius defined by V_sc(R_c) = E. Averaging Γ(E) over a Maxwell-Boltzmann distribution, the effect of e^(πηx) is just to multiply the total reaction rate by the Salpeter factor f_0. This derivation is more rigorous than the Salpeter formulation, but is perhaps less transparent physically. The WKB derivation shows that the Salpeter formula is valid in the moderate density limit in which Debye-Huckel screening is a good approximation to the charge density distribution.

Density matrix derivation

Gruzinov and Bahcall (1998) calculated the electron density in the vicinity of fusing nuclei using the partial differential equation for the density matrix that is derived in quantum statistical mechanics. This is the first calculation to describe properly the electron density close to the fusing nuclei. Given the electron density, Gruzinov and Bahcall then evaluated screening corrections in a mean field approximation by solving numerically the Poisson-Boltzmann equation for a mixture of electrons and ions. The electron density distribution obtained from the density matrix calculation was included self-consistently and iteratively in the mean field equation.
The mean-field calculation yields exactly the Salpeter result, f_0, in the limit of low density. Higher order screening corrections were evaluated and found to be of order 1% for all of the important solar fusion reactions.
Free-energy calculation
Dewitt, Graboske, & Cooper (1973) gave a rigorous derivation of the fusion rate corrections in the week screening limit based on the free energy of fusing ions. Stimulated by one of the incorrect derivations of screening corrections, Bruggen & Gough (1997) explained why the free energy is useful in this context.
For a given relative position of the two ions, one considers the electrostatic contribution to free energy from the rest of the plasma. In fact, it is sufficient to calculate the free energy correction for a single charge Z, δF(Z). Then the rate enhancement factor is

f = exp{[δF(Z_1) + δF(Z_2) − δF(Z_1 + Z_2)]/kT}.  (3)

In the limit of small plasma density (the weak screening limit), this free energy correction can be calculated exactly. The result is the Salpeter formula.
Equation (3) has the physically obvious characteristics that the enhancement is symmetric in Z_1 and Z_2 and goes to zero in the limit of Z_1 or Z_2 going to zero. We shall come back to these physically obvious characteristics in § 3.5.
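One can verify symbolically that Eq. (3) reproduces the Salpeter exponent: in the Debye-Huckel weak-screening limit the free-energy correction for a charge Z is δF(Z) = −Z^2 e^2/(2D), and substituting this into Eq. (3) yields Eq. (1). A minimal sketch with sympy:

```python
import sympy as sp

Z1, Z2, e, D, kT = sp.symbols("Z1 Z2 e D kT", positive=True)

# Debye-Huckel free-energy correction for a single charge Z (weak screening)
dF = lambda Z: -Z**2 * e**2 / (2 * D)

exponent = (dF(Z1) + dF(Z2) - dF(Z1 + Z2)) / kT
print(sp.simplify(exponent))  # -> Z1*Z2*e**2/(D*kT), the Salpeter exponent

# The result is symmetric in Z1 <-> Z2 and vanishes if either charge is zero,
# matching the characteristics noted in the text.
```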
Quantum field theory derivation
Brown & Sawyer (1997) have developed a rigorous, general formulation for calculating the rate of fusion reactions in plasmas. The Brown-Sawyer formalism can be used to develop an unambiguous perturbation expansion in the plasma coupling parameter g = e 2 /DkT .
The general formula derived in Brown & Sawyer (1997) reduces to the Salpeter correction to first order in g; this correction should suffice for solar model calculations.
The Salpeter correction is formally of order e^3. Terms of order e^4 are also examined in Brown & Sawyer (1997). The leading correction comes from the fact that the electrons are slightly degenerate, so that the first-order effects of Fermi statistics must be retained. This small effect is explicitly computed in Brown & Sawyer (1997). The remaining terms of the formal e^4 order are of relative order ℓ/D to the basic Salpeter correction, where ℓ is either an ionic thermal wavelength λ or the Coulomb classical turning point r of the fusing ionic motion. An upper bound shows that these are negligible contributions. These higher-order calculations provide evidence, which goes beyond the qualitative statement that the plasma is "weakly coupled," that the Brown-Sawyer perturbation expansion is applicable in the solar domain.
3. Papers claiming that the Salpeter formula does not work

3.1. Dynamic screening

"Dynamic screening" (Carraro et al. 1988; see also Shaviv & Shaviv 1997, 2000) is a generic name for attempts to rectify the following perceived defect in the Salpeter argument: Approximately one-half of the squared Debye wave number D^(−2) comes from screening by electrons in the plasma and one-half from ions. As the two nuclei that are to fuse approach each other, the electron speeds are fast enough for the electronic cloud to adjust to the positions of the nuclei. But the by-standing ions in the plasma may be thought to have a problem making the adjustment, since their speeds are of the order of the speeds of the ions that are to fuse. Thus it could appear that the effects of the ionic screening could be less than those in the Salpeter result, with a consequent reduction in fusion rates. However, Gruzinov (1998, 1999) gave a general argument showing that in an equilibrium plasma there is no such reduction. For the Gibbs distribution [probability of a state being occupied proportional to exp(−energy/kT)], momentum and configuration probabilities are independent, and velocities of fusing particles have no effect on screening.
Moreover, in the most straightforward model for "dynamical" screening one can see how the "dynamical" part of the correction terms gets canceled. The paper (Carraro et al. 1988) calculates a modified potential function for the fusing ions by following the motion of test bodies with positive charge approaching one another in a plasma characterized by the standard dynamic plasma dielectric constant. This modified potential produces corrections in fusion rates that are perturbatively of order e^3 times the uncorrected rates, as are the Salpeter effects when expanded. However, in appendix D of Brown & Sawyer (1997) it is shown that the resulting modifications of the Salpeter result are exactly cancelled (to order e^3) when processes are included in which the plasma has been excited or de-excited in a Coulomb interaction with one of the incoming ions. This is the reason that a calculation based only on a modified potential for elastic scattering fails.
To summarize, "dynamical screening" results, both in their simple realization in Carraro et al. (1988), and in any calculation that implements the qualitative argument given above, are refuted both by the general argument regarding the factorability of the distribution function and by explicit calculation.
Unconventional interpretation of the Gibbs distribution
Opher & Opher (2000b) The idea that coordinates and their conjugate momenta are independent statistical variables is familiar from elementary quantum mechanics where one calculates the phase space for a free particle as proportional to d 3 xd 3 p, the product of the differential volume in space and the differential volume in momentum. Shaviv and Shaviv (1996) claimed that the screening energy, Z 1 Z 2 e 2 /D, that appears in the exponent of Sapeter's formula should be multiplied by a factor of 3/2. They argued that a proper inclusion of the electrostatic interaction between screening clouds surrounding the fusing ions should lead to a modification of the Salpter formula. As explained by Bruggen & Gough (1997), the Shaviv and Shaviv treatment amounts to evaluating the potential energy, V , for use in Schroodinger's equation by setting V = (∂U/∂r) T rather than using the correct expression V = (∂U/∂r) S , where the subscripts T and S indicate derivatives taken at constant temperature or constant entropy.
Unconventional statistics
There are claims in the literature (see, e.g., Savchenko 1999; Lavagno & Quarati 2000) that the usual Salpeter expression does not apply because standard statistical mechanics (the Gibbs distribution) is not valid; different statistical distributions are proposed. There are at least three reasons why these (and other) authors suppose that the Gibbs distribution is not valid in the sun: 1) perhaps there is not enough time for statistical equilibrium to be established; 2) perhaps there are interactions which distort the phase space distribution; and 3) perhaps the Gibbs distribution is not the correct equilibrium distribution. We discuss these three possibilities in the following subsections.
There is not enough time
Some of the suggested distributions seem to be based upon the assumption that the core of the Sun is not in thermodynamic equilibrium, and that there exist deviations from the Gibbs distribution. Both analytic calculations and Monte Carlo simulations show that the energy distribution of ions in a plasma rapidly approaches a Gibbs distribution on the time scale for the exchange of a major fraction of the typical particle energy among the interacting ions (see, e.g., MacDonald, Rosenbluth, & Chuck 1957).
There is a slight departure from statistical equilibrium in the energy distribution of ions in the solar core, but the magnitude of the effect is too small to be of significance for any measurable quantity. The burning of nuclei in the sun is a non-equilibrium process, which causes a departure from the ideal Gibbs distribution. The magnitude of the deviation, δ, is of order the Coulomb collision time, τ_Coulomb, over the nuclear burning time (Bahcall 1989).
For the solar core,

δ ∼ τ_Coulomb/τ_nuclear ≪ 1.  (4)

The characteristic times for the most important solar fusion rates range from 10^2 yr to 10^10 yr. For purposes of calculating solar fusion rates, the solar interior is in almost perfect thermodynamic equilibrium.
Phase space distortion
The rate, R, for a binary nuclear reaction can be written symbolically as

R ∝ ∫ d^3p_1 d^3p_2 exp[−(E_1 + E_2)/kT] |<f|H|i>|^2.  (5)

The term d^3p_1 d^3p_2 in Eq. (5) represents the free-particle density of states calculated when the particles are very far separated; the Gibbs distribution is represented by the exponential; and the interactions are described by the matrix element of the Hamiltonian between initial and final states, |<f|H|i>|^2.
The basic error made by some authors (see, e.g., Savchenko 1999) is to confuse the role of the density of states, which can be calculated when the particles are at very large separations (d^3p_1 d^3p_2), with the role of the interactions (|<f|H|i>|^2), which occur when the particles are very close together.
The Gibbs distribution is not the correct equilibrium distribution
Many areas of modern physics, including large branches of condensed matter physics, as well as many classical subjects, are successfully described by conventional statistical mechanics. There is no convincing evidence for any phenomena that lie outside the domain of standard statistical theory, which is described in the classical works of, e.g., Tolman (1938), Feynman (1972), and Landau & Lifschitz (1996).
Tsytovich Suppression
Tsytovich (2000) suggests that screening leads to suppression rather than enhancement of fusion rates. There are two ways to see that the result given by Tsytovich (2000) is wrong, namely, the calculated sign of the effect is incorrect and the functional dependence upon the charges of the ions is incorrect.
First consider the sign of the effect, suppression rather than enhancement of the reaction rate. The original Salpeter discussion, summarized in § 2.1, showed that screening enhances the reaction rate by lowering the potential in the vicinity of the fusing ions. This is the basic physical effect which must be described by any correct theory of screening and which the elaborate treatment of Tsytovich (2000) fails to recover.
Since the treatment of Tsytovich is apparently very general, one may also consider a limiting case in which very large impurities of charges ±Q are introduced into a plasma undergoing p-p fusion. The impurity charges are hypothesized to be so large that they dominate over electrons and protons in the electrostatic interactions. In these circumstances protons will preferentially clump around negative charges −Q. Locally, the proton density will increase and fusion will proceed faster. In this case, just as in the general case discussed by Salpeter, electrostatic screening enhances rather than suppresses fusion.
The result of Tsytovich (2000) does not pass an even more basic check. In the weak screening limit, the Salpeter formula can be written as

f_0 = 1 + g Z_1 Z_2,  (6)

where g does not depend on the charges of the reacting particles Z_1, Z_2. In the limit when one of the reacting particles has a vanishingly small charge, the Salpeter screening effect goes to zero, that is, the screening enhancement f_0 = 1. The Tsytovich formula has a different structure, f(Tsytovich) = 1 − g_1 Z_1^2 − g_2 Z_2^2, which is a compact and revealing way to write Eq. (11) of Tsytovich (2000). Thus, f(Tsytovich) ≠ 1 if one of the particles is neutral, which is obviously incorrect.
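The charge-dependence check described above can be stated compactly in a few lines (the constants g, g_1, g_2 do not depend on the charges; the numerical values are arbitrary, chosen only for illustration):

```python
def f_salpeter(Z1, Z2, g=0.04):
    """Weak-screening expansion of the Salpeter enhancement, Eq. (6)."""
    return 1 + g * Z1 * Z2

def f_tsytovich(Z1, Z2, g1=0.02, g2=0.02):
    """Structure of the Tsytovich result as summarized in the text."""
    return 1 - g1 * Z1**2 - g2 * Z2**2

# With one neutral particle (Z2 = 0) the screening effect must vanish (f -> 1):
print(f_salpeter(1, 0))   # 1.0  -- correct limit
print(f_tsytovich(1, 0))  # 0.98 -- still differs from 1, the inconsistency noted above
```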
Summary and discussion
There is only one right answer, but there are many wrong answers.
We have reviewed five different derivations that all yield the Salpeter screening formula: the original Salpeter electrostatic argument, the WKB barrier penetration calculation, the quantum statistical density matrix evaluation, the free-energy calculation, and the rigorous quantum field theory derivation.
In recent years, a number of authors have given alternative expressions, each different from all the others, for the weak screening limit. We have described briefly in § 3 the basic reason why each of these different non-Salpeter formulae is incorrect.
What can one say about some future claim to have discovered an error in the weak screening limit? Most readers, even those actively concerned with fusion reactions in stellar interiors, do not have the time to examine each of the claims for a new answer that are published.
We suggest instead that the burden of proof should be upon the authors claiming to have a result that differs from the Salpeter formula. Discriminating readers may require of authors that claim to have found a new answer that the authors first demonstrate fatal errors in each of the five different derivations of the Salpeter formula that are discussed in § 2.
"year": 2000,
"sha1": "95ce8899f8ecf2eb671692d473134f771025f674",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2002/07/aah3060.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "bdabed8affaa4aeb3ba2c0af42ddb56ce5ab1fc0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Inhibiting influenza's immunopathology
Although a host of companies are pursuing sphingosine 1-phosphate as a target in cancer and autoimmune diseases, researchers at Scripps may have found new therapeutic real estate for S1P's receptor: lowering cytokine-related pulmonary tissue damage in influenza.
By Brian Moy, Staff Writer
Although a host of companies are pursuing sphingosine 1-phosphate as a target in cancer and autoimmune diseases such as multiple sclerosis, a team at The Scripps Research Institute may have found new therapeutic real estate for S1P's receptor: lowering cytokine-related pulmonary tissue damage in influenza. The finding could lead to the development of combination therapies that treat the virus while managing the risk of infection-associated immunopathology.
Upon influenza infection, an immune response is triggered when T cells and dendritic cells attack virus-infected cells. This causes the release of cytokines and chemokines that attract leukocytes and macrophages to the site of infection. The reaction can lead to a cytokine storm and result in tissue damage and a potentially fatal immune reaction.
Although it is a rare occurrence in conventional influenza, the cytokine storm has been well documented in humans succumbing to H5N1 influenza virus infection. 1 Thus, the Scripps team wrote in the Proceedings of the National Academy of Sciences that "although antiviral drugs can be used to treat the virus, a strategy to balance the resultant cytokine release and lung injury while maintaining benefits of the antiviral protective immune response is needed." 2 In a mouse model of H1N1 influenza virus infection, a single dose of (R)-2-amino-4-(4-heptyloxyphenyl)-2-methylbutanol (AAL-R) into the lungs at the time of infection decreased the release of cytokines and chemokines from dying, virus-infected cells compared with what was seen in controls. Importantly, influenza-neutralizing antibody titers and the overall cytotoxic T cell response were maintained with use of the sphingosine 1-phosphate receptor (S1PR)-binding sphingosine analog. 3 AAL-R was also effective in controlling T cell accumulation in the lungs when given four days after initiation of influenza infection.
Michael Oldstone, an author on the paper, told SciBX that even though AAL-R is a proof-of-concept chemical tool, "the purpose of our study was to draw attention to an important biomedical role and an interesting mechanism of targeting the sphingosine 1-phosphate receptor. " By doing so, he said, "we were able to limit the cytokine storm without affecting the protective capacity of the virus-specific T cells." Oldstone is a professor in the Department of Immunology and Microbial Science and the Department of Infectology at Scripps.
Storm control
To develop a therapeutic strategy that simultaneously treats the virus and controls infection-associated immunopathology, additional research will be needed to develop improved sphingosine analogs that are suitable for clinical development.
According to Roger Sabbadini, VP and CSO of Lpath Inc., such studies need to determine the mechanism of action behind how AAL-R modulates S1PR and leads to the dampening of the cytokine response.
Lpath is developing Asonep, a mAb against S1P in Phase I testing to treat cancer. Merck Serono S.A., a division of Merck KGaA, has an exclusive worldwide license to Asonep. The mAb is a systemic version of sonepcizumab, which in turn is a humanized version of Lpath's Sphingomab, a murine antibody.
Lpath retains rights to iSonep, an ocular formulation of sonepcizumab that is in Phase I testing for wet age-related macular degeneration (AMD).
Indeed, ongoing preclinical research by Oldstone and colleagues already is evaluating combination therapies of sphingosine analogs with antivirals such as Tamiflu oseltamivir.
Tamiflu, a neuraminidase inhibitor from Gilead Sciences Inc. and Roche, is marketed to treat and prevent influenza.
Oldstone's group is also investigating the effects of modulating S1PRs in other pulmonary diseases in which a cytokine storm could affect disease outcome, such as severe acute respiratory syndrome (SARS) and hantavirus.
"The results of the paper are very intriguing from the perspective that the approach could lead to therapies that are aimed at treating the entire disease process associated with influenza infection," said George Kemble, VP of R&D and general manager of vaccines at the MedImmune Inc. subsidiary of AstraZeneca plc. "In many viral diseases, including influenza, not only does the virus itself do damage, but sometimes the body's immune response adds another level of damage that can't be fixed by just removing the virus. In these situations, the body has to repair its own reaction. A strategy to reduce the body's immune response and prevent a cytokine storm could prove to be very useful. " MedImmune markets FluMist, an intranasal live attenuated influenza vaccine, to prevent influenza.
In addition to applying sphingosine analogs to other animal models of influenza, Kemble said "it will be useful to look in other disease systems to investigate if the PNAS findings are restricted to the flu or if they can be applied to other indications." Sabbadini agreed. "A major importance of the paper is that it provides yet another example of how modulation of the sphingolipid signaling system can be an important therapeutic strategy."
"year": 2009,
"sha1": "dc8d6283235382538822f73fd3fa7db117257ae7",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1038/scibx.2009.169.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c07c611ec1e351c15918e8ff3ee7a893b509e22c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Added Value of Use of a Purified Protein Derivative-Based Enzyme-Linked Immunosorbent Spot Assay for Patients with Mycobacterium bovis BCG Infection after Intravesical BCG Instillations
ABSTRACT In this case series, we describe four cases in which the use of gamma interferon release assays with purified protein derivative (PPD) as a stimulating antigen was able to demonstrate PPD-specific immune activation. This may help to improve the adequate diagnosis of (systemic) Mycobacterium bovis BCG infections after intravesical BCG instillations for bladder carcinoma.
CASE REPORT
Case 1. In May 2010, a 60-year-old man was diagnosed with pTaG3 urothelial cell carcinoma of the bladder. He was treated with a transurethral resection of the bladder tumor. Afterward, he was started on intravesical Mycobacterium bovis bacillus Calmette-Guérin (BCG) instillation maintenance therapy. After 10 BCG instillations, he presented at the emergency department with chills, fever of 40°C, and vomiting. The patient had no other localizing symptoms. He had a C-reactive protein (CRP) level of 104 mg/liter and a leukocyte level of 8.6 × 10^9/liter. Blood cultures and urine were sampled. Afterward, the patient was started on intravenous ciprofloxacin therapy. Blood and urine cultures stayed negative. After a switch to oral ciprofloxacin therapy, the patient developed fever again. Computed tomography (CT) of the thorax showed fine nodular lesions with an infiltrate in the lower left lobe, which could fit miliary tuberculosis (TB), sarcoidosis, or diffuse metastases. There were no enlarged lymph nodes visible. A bronchoalveolar lavage (BAL) was performed. BAL fluid showed negative auramine staining, no pathogenic microbes (including mycobacterial cultures), and a negative PCR result for Mycobacterium tuberculosis complex. An enzyme-linked immunosorbent spot (ELISPOT) TB assay of peripheral blood mononuclear cells (PBMCs) was negative, while an ELISPOT assay of purified protein derivative (PPD) was marginally reactive with 8 spots. An ELISPOT PPD assay on the BAL fluid was strongly reactive (>100 spots), while an ELISPOT TB assay was negative (see Fig. 1). The patient was suspected of systemic BCG infection and treated with ethambutol, isoniazid, and rifampin for 6 months, after which he recovered. A chest CT 6 months after cessation of the medication showed a decreased number of nodules and disappearance of the infiltrate in the lower left lobe compared with the first chest CT.
Case 2. An 82-year-old male was treated for high-grade non-muscle-invasive bladder carcinoma with intravesical BCG instillations. The patient had a transurethral resection of the bladder tumor in March 2008. In April 2008, he started intravesical BCG instillations. After 4 intravesical BCG instillations, he presented with deterioration, fever of 40°C, chills, fatigue, and diminished appetite. The patient had a CRP level of 173 mg/liter, a leukocyte level of 8.9 × 10^9/liter, and an erythrocyte sedimentation rate of 56 mm/h. CT of the thorax showed some aspecific nodular lesions that were centrilobular and others in the upper left lobe. Urine culture showed an Escherichia coli infection, for which the patient was treated with ceftriaxone. Auramine staining was negative, and mycobacterial culture of the urine remained negative. Blood cultures stayed negative as well. ELISPOT TB and ELISPOT PPD assays of PBMCs were also negative. On 19 May 2008, CT of the abdomen showed an abscess ventrally of the bladder and prostate, which was drained. Mycobacterium tuberculosis complex PCR of the abscess was positive, and Mycobacterium bovis BCG type Pasteur was cultured after 12 days. The antibiotic susceptibility pattern showed sensitivity to ethambutol, streptomycin, isoniazid, and rifampin and resistance to pyrazinamide. On 3 June 2008, an ELISPOT TB assay of PBMCs remained negative, while the ELISPOT PPD assay was reactive with 7 spots. In July 2008, an ELISPOT TB assay on blood still remained negative, while the ELISPOT PPD assay rose further to 80 spots. This finding, in combination with the systemic symptoms, made us conclude that the patient suffered from a systemic BCG infection; he was treated with ethambutol, isoniazid, and rifampin for 9 months and recovered.
Case 3. A 74-year-old male was treated for intermediate-grade non-muscle-invasive bladder carcinoma with intravesical BCG instillations. In November 2005, the patient was treated with a transurethral resection of the bladder tumor. In December 2006, he was treated with a second transurethral resection of the bladder tumor and intravesical mitomycin instillations for recurrence of disease. In September 2007, he was treated with a transurethral resection of the bladder tumor once more for recurrence, after which he started intravesical BCG instillations. In November 2007, 1 day after the second BCG instillation, the patient developed fever, nausea, vomiting, and erythrocyturia. The general practitioner, suspecting cystitis, treated him with trimethoprim-sulfamethoxazole. Because the patient did not recover, antibiotics were switched to ciprofloxacin. Twelve days after onset of the symptoms, the patient started coughing and became dyspneic. At the same time, he developed night sweats. The patient was referred to the pulmonary outpatient clinic on suspicion of a systemic BCG infection. On physical examination, normal breath sounds with diffuse crackles were heard. The patient had a CRP level of 84 mg/liter, a leukocyte level of 4.6 × 10^9/liter, and an erythrocyte sedimentation rate of 40 mm/h. Chest X-ray showed fine disseminated nodules in all lung fields with hilar and mediastinal lymphadenopathy. Urine, BAL fluid, bone marrow aspirate, a lung biopsy specimen, and blood cultures were sampled. Auramine staining results of urine, BAL fluid, and a lung biopsy specimen were negative. Mycobacterium tuberculosis complex PCR and culture of the urine, BAL fluid, lung biopsy specimen, and bone marrow were also negative. No bacteria were cultured from the lung biopsy specimen or urine. Cultivation of the BAL fluid showed hemolytic streptococcus group G and Candida albicans. Blood cultures remained negative. In blood, an ELISPOT TB assay was negative and an ELISPOT PPD assay was marginally reactive with 4 spots. In this patient, no ELISPOT analysis of BAL fluid was performed. Because the patient was suspected of disseminated BCG sepsis, he was treated with ethambutol, isoniazid, and rifampin, and he responded well to this treatment. In March 2008, an ELISPOT TB assay of PBMCs was still negative, while an ELISPOT PPD assay of PBMCs rose to more than 100 spots. The patient had no clinical signs at that moment, and medication was stopped. In December 2008, the ELISPOT PPD assay of PBMCs decreased to 25 spots, while the ELISPOT TB assay was still negative. A CT of the chest showed a decreased number and size of the nodules with no pathological lymphadenopathy.
Case 4. A 74-year-old male was given adjuvant therapy with intravesical BCG instillations after transurethral resection of a high-grade non-muscle-invasive bladder cancer. In September 2007, the patient had a transurethral resection of the bladder tumor, after which he started intravesical BCG instillations. After the 10th BCG instillation, the patient developed dyspnea, night sweats, chills, and fatigue. Physical examination was unremarkable. A chest CT showed small nodules in all lung fields compatible with miliary tuberculosis. The patient had a CRP level of 11 mg/liter and a leukocyte level of 5.8 × 10^9/liter. Auramine staining results of urine, BAL fluid, and bone marrow were negative. Mycobacterium tuberculosis complex PCR results for the bone marrow and BAL fluid were negative. Cultures of urine, bone marrow, and BAL fluid were negative. An ELISPOT TB assay of BAL fluid was negative, while an ELISPOT PPD assay on the same BAL fluid showed more than 100 spots. An ELISPOT TB assay on blood was negative, while an ELISPOT PPD assay was marginally reactive with 6 spots. The patient was treated for a suspected BCG infection with isoniazid, rifampin, and ethambutol. After 3 months of treatment, the symptoms were resolved and the chest CT was almost normalized. An ELISPOT PPD assay of PBMCs showed 1 spot, whereas an ELISPOT TB assay was negative. After 6 months, the medication was stopped and the ELISPOT PPD assay of PBMCs was now showing 12 spots, while the ELISPOT TB assay remained negative. A chest CT 3 months after cessation of the medication showed a decreased number of nodules compared with the first chest CT. An ELISPOT PPD assay on PBMCs revealed 10 spots, whereas an ELISPOT TB assay remained negative.
To date, all four cases do not have any signs of recurrence of infection.
Worldwide, hundreds of thousands of people are treated with intravesical bacillus Calmette-Guérin (BCG) instillations for bladder carcinoma each year (1). Fewer than 5% of these patients develop severe complications of this therapy, including BCG pneumonitis and systemic infection (16). One in 15,000 patients develops a septic reaction and needs to be treated with antimycobacterial antibiotics (15). Unfortunately, it is cumbersome to confirm the diagnosis of BCG pneumonitis or sepsis because cultures often remain negative and, if they are positive, it may take up to 12 weeks until mycobacteria are grown.
This hampers distinguishing between (systemic) BCG infections and other causes of infection or other complications of BCG instillations. Hence, in the case of a systemic BCG infection rapid and accurate diagnosis is important because BCG instillations should be stopped and treatment with antimycobacterial medication should be initiated, generally for a long period (3 to 6 months) (14).
We hypothesized that in the case of a (systemic) BCG infection as a complication after BCG instillations, a specific immune response will be initiated, which could be detected using purified protein derivative (PPD) as an antigen. The tuberculin skin test (TST) detects a cell-mediated immune response in the form of a delayed-hypersensitivity reaction to PPD (18,22). This test can be used for diagnosing Mycobacterium tuberculosis infections and has cross-reactivity with other mycobacterial strains, including the BCG strain and nontuberculous strains. However, TST has some limitations, such as interobserver variations, the need to recall people for test reading, low specificity in people who are vaccinated with BCG, and false-negative results in patients with immunosuppression (5,9,13,20).
A newer immunological method, the gamma interferon release assay (IGRA), overcomes these limitations. Using this method, T cells of patients sensitized to mycobacterial antigens will produce gamma interferon when stimulated by mycobacterial antigens (18). Two commercial variants of this test exist: QuantiFERON is an enzyme-linked immunosorbent assay (ELISA) which measures gamma interferon production in patients' plasma, and T-SPOT.TB is an enzyme-linked immunospot (ELISPOT) assay which enumerates the gamma interferon-producing T cells. Initially, both methods used PPD as a stimulating antigen. To avoid the high degree of antigenic cross-reactivity of PPDs from different mycobacterial species, including the BCG strain and nontuberculous mycobacterial strains (12), IGRAs nowadays use Mycobacterium tuberculosis-specific antigens, such as recombinant early secretory antigenic target 6 (ESAT-6) and recombinant culture filtrate protein 10 (CFP-10). IGRAs for Mycobacterium tuberculosis have a good sensitivity in immunocompromised patients, such as HIV patients (6,17). Jafari et al. have shown that IGRAs could also be used on specimens from the site of infection, such as cells obtained from bronchoalveolar lavage (BAL) fluid (10).
We developed an in-house IGRA using PPD as a stimulating antigen and hypothesized that this assay could be useful for rapid detection of BCG infections resulting from intravesical BCG therapy. In this case series, we describe 4 cases in which the use of an IGRA with PPD was able to detect PPD-specific immune activation, which could be of value for adequately detecting infections due to BCG.
In this report, we describe four cases with systemic BCG infections after intravesical BCG instillations for bladder carcinoma. All patients were treated for systemic BCG infections and survived; in only 1 case did the culture turn positive for BCG. This case series shows that IGRAs using PPD as the stimulating antigen are able to detect immune activation in both blood and BAL fluid, which could be valuable in diagnosing infections with BCG strains in patients treated with intravesical BCG instillations.
Yearly, 357,000 patients are diagnosed with bladder carcinomas worldwide (19), of which 70 to 75% are non-muscle invasive (25). Intravesical BCG instillations are recommended as adjuvant therapy for patients with intermediate-risk and high-risk non-muscle-invasive bladder carcinoma (1). After instillation, BCG induces a complex immune response in the bladder with an increase in cytokines and chemokines. Repeated instillations lead to a local type 1 cellular immune response, which lasts for 6 months. The exact antitumor mechanism is not known (25). The success of BCG immunotherapy relies on the intravesical administration of live BCG strains and the generation of a localized immune response in the bladder. Since BCG contains viable bacteria, it has the potential to produce (systemic) adverse events. Major adverse reactions, such as pneumonitis, prostatitis, or sepsis, occur in fewer than 5% of patients with intravesical BCG instillations (16). Therefore, all patients with fever after intravesical BCG instillations are suspected of BCG sepsis. BCG sepsis might be the result of systemic absorption (26), which may induce a hypersensitivity reaction in which elevated levels of cytokines are released into the bloodstream (16). Historically, poor technique in intravesical instillations and nonrecognition of BCG-related adverse events have led to serious morbidity and, in some cases, to mortality. Risk factors for BCG sepsis include traumatic catheterization and absorption through the inflamed bladder wall in patients to whom BCG was given soon after the transurethral resection of the bladder tumor (14,25). Cultures generally stay negative, and treatment is started based on clinical suspicion (14). Patients with a BCG infection are treated for 3 to 6 months with antimycobacterial medication, and intravesical BCG instillations are stopped until the patient has recovered. To distinguish between BCG infection and other causes of infection or other complications of intravesical BCG instillation, IGRAs using PPD as a stimulating antigen could be valuable in diagnosing disseminated BCG infection. Neither ESAT-6 nor CFP-10 is present in any commercially used BCG strain, and so IGRAs using ESAT-6 or CFP-10 are not able to detect BCG sepsis (7). Therefore, IGRAs using PPD should be used to diagnose disseminated BCG infection.
Silverman et al. (21) studied the value of TST and IGRAs using ESAT-6 and CFP-10 as antigens in patients receiving intravesical BCG instillations for bladder carcinoma who were exposed to tuberculosis. They concluded that these patients should be assessed for latent tuberculosis infection with IGRAs rather than with TST (21). To our knowledge, no other studies have been performed using IGRAs in patients treated with intravesical BCG instillations for bladder carcinoma.
In cases 1 and 4, the ELISPOT PPD assay in BAL fluid is strongly reactive while the ELISPOT PPD assay in blood is weakly reactive. During active infection, T cells are clonally increased and recruited to the site of infection (2,8). In patients with active Mycobacterium tuberculosis infection, some studies have been performed on this issue. Wilkinson et al. showed a much higher concentration of ESAT-6-specific T cells in pleural effusions than in peripheral blood in patients with pleural tuberculosis (24). Jafari et al. showed that, in sputum acid-fast bacillus smear-negative tuberculosis, IGRAs on BAL fluid mononuclear cells are superior to IGRAs on peripheral blood mononuclear cells in diagnosing pulmonary tuberculosis (11). Thus, performing IGRAs on cells obtained at the actual site of infection could be more sensitive than performing them on peripheral blood (10), which could explain the discrepancy in ELISPOT PPD assay results between the BAL fluid and the peripheral blood, where the reactive T cells have been recruited to the site of infection and are therefore not available in circulating blood. Figure 1 shows the results of the ELISPOT assay of case 1. The negative control of the BAL fluid shows more background signals, probably due to the impurity of the material and the presence of macrophages in the BAL fluid. These background signals are on the same level as that of the samples stimulated with the M. tuberculosis-specific antigens ESAT-6 and CFP-10. PPD-stimulated cells give rise to a signal comparable to that of the positive control, indicating the presence of large amounts of PPD-specific T cells. In the bottom row, it seems that the positive control of the blood is not strongly positive. However, the wells are measured by an automatic reader, which is a more objective way to enumerate the activated T cells and which gives a clear positive result. Cells stimulated with the M. tuberculosis-specific antigens ESAT-6 and CFP-10 give no signal, whereas stimulation with PPD antigen gives rise to clearly visible spots, indicating the presence of PPD-specific T cells in peripheral blood, although at a much lower level than that in the BAL fluid.
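The qualitative labels used in these case descriptions (marginally reactive, reactive, strongly reactive) can be expressed as a simple rule over background-corrected spot counts. The cutoffs in the sketch below are illustrative assumptions chosen to be roughly consistent with the counts reported in the cases, not thresholds from the original assay protocol:

```python
def classify_elispot(test_spots, negative_control_spots=0):
    """Classify an ELISPOT well from spot counts (illustrative cutoffs only)."""
    corrected = max(test_spots - negative_control_spots, 0)
    if corrected > 100:
        return "strongly reactive"
    if corrected >= 4:  # the cases above report 4-8 spots as (marginally) reactive
        return "reactive"
    return "negative"

print(classify_elispot(8))    # case 1, blood PPD: reactive
print(classify_elispot(150))  # case 1, BAL PPD:   strongly reactive
print(classify_elispot(1))    # case 4 after treatment: negative
```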
As direct specimens are sometimes difficult to obtain and therefore not always available, it is useful to repeat ELISPOT PPD assays on PBMCs, as they can become more reactive over time. In case 3, the initial ELISPOT PPD assay on PBMCs was negative, while the ELISPOT PPD assay on blood became reactive a few months after the start of the BCG sepsis. In case 2, the ELISPOT PPD assay was more reactive 1 month after the start of the infection. This could be explained by the fact that during active infection T cells are clonally increased and recruited to the site of infection. Consequently, the ELISPOT assay of a direct specimen is positive early in the infection, while the ELISPOT assay of PBMCs becomes positive later during the infection.
Patients who are BCG vaccinated or are exposed to nontuberculous mycobacteria (NTM) are at risk for false-positive results in the ELISPOT PPD assay (4,5,23). Brock et al. showed that 47% of patients who are BCG vaccinated respond to PPD, while 10% respond to the Mycobacterium tuberculosis-specific antigens ESAT-6 and CFP-10 (5). However, in The Netherlands BCG vaccination is rare and not commonly used. In addition, the prevalence of NTM in The Netherlands is low. Not much is known about the specificity of the ELISPOT PPD assay in this group of patients. In an unpublished study by our department, 70% and 86.4% of 103 patients with interstitial lung diseases had fewer than 4 spots or fewer than 14 spots on an ELISPOT PPD assay of PBMCs, respectively (S. F. T. Thijsen, M. van der Wel, J. J. M. Bouwman, and A. W. J. Bossink, unpublished data).
For intradermal PPD, reactivity is expected 12 weeks after exposure to BCG or mycobacteria. However, this time frame is arbitrary and conservative; in other words, after 12 weeks most exposed patients are responsive to intradermal PPD. Nevertheless, a local immune response to PPD can be expected earlier. Patients who are treated with intravesical BCG instillations receive much higher doses of antigen (i.e., BCG) than do those given intradermal BCG vaccination. In BCG vaccination, 0.8 × 10^6 to 1.2 × 10^6 CFU of Mycobacterium bovis is given percutaneously, while in intravesical BCG instillations 2 × 10^8 to 8 × 10^8 CFU of Mycobacterium bovis is given. Bilen et al. showed that 68% of patients became positive for intradermal PPD reactivity 1 week after they were treated for the first time with intravesical BCG instillations (3). Thus, intravesical BCG instillations may provoke a T-cell-mediated immune response to PPD which can be expected earlier than the 3-month period for TST which is used in TB contact tracing.
Our cases were treated for 4 to 9 months with antimycobacterial medication. Pyrazinamide was not given because Mycobacterium bovis is usually resistant to pyrazinamide. This is confirmed in case 2, in which Mycobacterium bovis was cultured. In our department, treatment for 9 months is given to patients with a positive culture for Mycobacterium bovis. The patients for whom there was no positive culture were treated with antimycobacterial medication until clinical recovery.
In conclusion, this case series shows that IGRAs using PPD as a stimulating antigen are potentially valuable in diagnosing infection with BCG, especially when fluids of affected organs such as BAL fluids can be tested. BCG instillations are widely used in the treatment of non-muscle-invasive bladder carcinoma. IGRAs using PPD as a stimulating antigen are a promising new tool in diagnosing and/or monitoring complications of this treatment and should be evaluated in greater series.
"year": 2012,
"sha1": "98c9e3485ddc126943ba6132e50db147824c2882",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/cvi.05597-11",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "fbfb9676473e61a07ee9d94b41aa30a8e8ab72e8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Landscape ecological concepts in planning: review of recent developments
Context: Landscape ecology as an interdisciplinary science has great potential to inform landscape planning, an integrated, collaborative practice on a regional scale. It is commonly assumed that landscape ecological concepts play a key role in this quest.

Objectives: The aim of the paper is to identify landscape ecological concepts that are currently receiving attention in the scientific literature, analyze the prevalence of these concepts and understand how these concepts can inform the steps of the planning processes, from goal establishment to monitoring.

Methods: We analyzed all empirical and overview papers that have been published in four key academic journals in the field of landscape ecology and landscape planning in the years 2015–2019 (n = 1918). Title, abstract and keywords of all papers were read in order to identify landscape ecological concepts. A keyword search was applied to identify the use of these and previously mentioned concepts in common steps of the planning cycle.

Results: The concepts Structure, Function, Change, Scale, Landscape as human experience, Land use, Landscape and ecosystem services, Green infrastructure, and Landscape resilience were prominently represented in the analyzed literature. Landscape ecological concepts were most often mentioned in context of the landscape analysis steps and least in context of goal establishment and monitoring.

Conclusions: The current literature spots landscape ecological concepts with great potential to support landscape planning. However, future studies need to address directly how these concepts can inform all steps in the planning process.

Supplementary Information: The online version of this article (10.1007/s10980-021-01193-y) contains supplementary material, which is available to authorized users.
Introduction
Prompted by fast and extensive landscape changes throughout the world, landscape ecology aims to provide policy relevant information about landscape change and form the base for landscape management, design and policy (Wu 2013;Mayer et al. 2016). The discipline has a long tradition in reaching out and building bridges to fields of action such as landscape sustainability (Wu 2010), landscape approach (Reed et al. 2016), landscape design (Nassauer and Opdam 2008) and regional and landscape planning (Forman 2008). The contribution of landscape ecology to inform planning and research management has been addressed in conceptual and empirical studies (see e.g., Ahern 1999;Pedroli et al. 2006;Opdam et al. 2013;Wu 2013;Milovanović et al. 2020). Few studies have also analyzed how landscape ecology has been used in landscape planning practices and plan making (e.g., Termorshuizen et al. 2007;Bjärstig et al. 2018;Trammell et al. 2018).
How landscape ecology has reached out to landscape planning, the focus of this research, is especially interesting. Landscape ecology is an interdisciplinary scientific discipline that focuses on spatial pattern and heterogeneity, and specifically on their characterization and description over time, their causes and consequences, and how humans manage them (Turner et al. 2001). The conceptual and theoretical core of landscape ecology links natural and social sciences to understand landscapes as arenas where structural features and social construction converge (Pinto-Correia and Kristensen 2013).
Landscape planning is prominent across the world as an integrated, collaborative practice on a regional scale (Steiner 2008; Selman 2012) and benefits from landscape ecology in manifold ways. It often focuses on rural areas or open landscapes, where conflicts between urban sprawl and recreational landscape values, agricultural production and nature conservation, and renewable energy production and aesthetics dominate (Mann et al. 2018). Landscape planning varies greatly from place to place and can be integrated into institutions (e.g., in Germany), provide an input into strategic spatial planning (e.g., in Switzerland), be conducted as an ad hoc initiative (e.g., in the USA) or be largely missing (e.g., in Romania).
Landscape planning as an academic field is undertheorized, as evidenced by the fact that very few scientific journals are devoted to landscape planning (with the notable exception of "Landscape and Urban Planning"). However, landscape planning has a strong tradition of addressing procedural aspects, which has led to established planning procedures. They operationalize the planning process through a sequence of steps and are well suited to investigate the link between landscape ecology and planning. Well-known examples are Steiner's Ecological Planning Model (Steiner 2008), Steinitz' Framework for Landscape Planning (Steinitz 2012), and Ahern's Framework Method for Sustainable Ecological Planning (Ahern 1999). In this line of work are also proposals that explicitly address landscape ecological planning (Wang et al. 2001; Hersperger 2006; Miklós and Špinerová 2019). The pragmatic conceptualization of the planning process into a sequence of steps should not obscure the fact that landscape planning, like any kind of spatial planning, must be accepted as an ongoing political activity that is geared towards negotiation and conflict resolution between different public and private actors, within an arena of dynamic multi-level power relations and funding regimes (Oliveira and Hersperger 2019).
Landscape ecological concepts hold great potential for integrating landscape ecological knowledge into landscape planning (Botequillha Leitao and Ahern 2002). We understand "concept", in line with Merriam-Webster's online dictionary, as representing an abstract or generic idea generalized from particular instances (Merriam-Webster 2020). In the case of landscape ecology, these ideas can refer to the representation and organization of landscape elements (e.g., in terms of connectivity), to landscape characteristics (e.g., patterns) or to frameworks for landscape analysis (e.g., landscape services). Most of these concepts have an intrinsic spatial nature. The goal of this paper is to review recent publications to assess the use of landscape ecological concepts in planning. Specifically, we address the following research questions: (1) Landscape ecological concepts: what are they, and how frequently are they mentioned in current research? (2) How have landscape ecological concepts been integrated into landscape planning?
We present results on the identified landscape ecological concepts, their prevalence and integration into planning. The discussion centers on the use of landscape ecological concepts and on promising opportunities for landscape ecological concepts in planning.
Data collection
To collect our data, we adopted the PRISMA approach for systematic review (Moher et al. 2009). Four key journals in the field of landscape ecology were selected for the analysis, namely Landscape Ecology (LE), Landscape Online (LO), Current Landscape Ecology Reports (CLER), and Landscape and Urban Planning (LUP). The choice was based on (1) the relevance for landscape ecology science and (2) the clear linkages from landscape science to planning, based on aim and scope descriptions (for details see Supplementary material 1). All articles published in the four journals in the period 2015-2019 were downloaded and served as the basis for the analysis (n = 1918). The five-year period was considered long enough to prevent distortions caused by special issues and short enough to keep the workload manageable.
Identification and prevalence of landscape ecological concepts
Since we are not aware of a list of well-accepted landscape ecological concepts that would be suitable for our analysis, we resorted to an early publication that identified landscape ecological concepts when discussing landscape ecology and its potential application to planning (Hersperger 1994). To account for recent developments, we analyzed the sample of publications described above. Based on reading the title, abstract and keywords of all papers, an extensive list of concepts, topics and types of landscapes was extracted (n = 39). The high number can be explained by the fact that these concepts are often rather specific, because their names were taken directly from the papers. Each concept was assigned to a type (landscape ecology sensu stricto, ecology, land change science, planning/management, landscape perception). These types were used for a first grouping. We distinguished concepts from (1) topics, in the sense that the latter are considered a theme addressed within the broader scientific discourse rather than an abstract or generic idea in landscape ecology (e.g., climate change, sustainability), and (2) types of landscapes (e.g., agricultural landscapes, historic landscapes). The extensive list of concepts extracted from the first screening went through subsequent regrouping. Synthesizing led to the definition of seven additional concepts, where the detailed entries in the original list are often used to describe the concepts.
Then, all 1918 papers went through a keyword search to identify the use of early and additional concepts. We used the "pdfsearch" package in the R programming language, version 3.6 (R Core Team 2020; LeBeau 2018), and searched for singular and plural forms and different variations of the concepts, e.g., for "holism", we also searched for "holistic"; and for "classification of landscape types", we searched for "classification of landscape", "landscape classification", and "landscape classes" (see Supplementary material 1, Table A). Results are reported as frequency of use per journal and/or period and can be interpreted as an indicator of how prevalent these concepts are.
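In essence, the search reduces to counting regular-expression variants per concept across the full text of each paper. The sketch below is a minimal Python analogue of this step (the study itself used the pdfsearch package in R); it assumes the PDFs have already been extracted to plain text, and the variant lists shown are illustrative stand-ins for the full lists in Supplementary material 1, Table A.

```python
import re
from pathlib import Path

# Illustrative variant lists; the study's full lists are given in its
# Supplementary material 1, Table A.
CONCEPT_VARIANTS = {
    "holism": [r"\bholis(?:m|tic)\b"],
    "classification of landscape types": [
        r"classification of landscapes?",
        r"landscape classifications?",
        r"landscape class(?:es)?\b",
    ],
    "green infrastructure": [r"green infrastructures?"],
}

def count_concepts(text: str) -> dict:
    """Count occurrences of each concept (any variant, case-insensitive) in one paper."""
    return {
        concept: sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in patterns)
        for concept, patterns in CONCEPT_VARIANTS.items()
    }

def concept_frequencies(txt_dir: str) -> dict:
    """Sum concept mentions over a directory of papers extracted to plain text."""
    totals = {c: 0 for c in CONCEPT_VARIANTS}
    for path in Path(txt_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for concept, n in count_concepts(text).items():
            totals[concept] += n
    return totals
```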
Integration of landscape ecological concepts into planning
The title, abstract and keywords of the papers (n = 1918 articles) were screened to identify papers that might show how landscape ecological concepts are integrated into planning. A subsample of n = 131 papers was identified, which was further assessed for eligibility by full reading. We retained 84 papers: 52 empirical papers and 32 overview papers for further analysis (see Supplementary material 4). The overview papers were further differentiated into reviews of scientific papers, evaluations of plans and projects, and frameworks and essays.
Full reading of the empirical papers allowed us to evaluate how landscape ecological concepts have been integrated into each step of the planning cycle. The planning steps were derived from works by Steiner (2008), Steinitz (2012), and Botequillha Leitao and Ahern (2002) (see Table 1). To systematically collect the data, we used a protocol which addressed the following questions: (a) Which type of planning is addressed by the paper? (b) To which planning level does the paper refer? (c) Which concepts are integrated in any of the planning steps described above? The insights from the overview papers on the integration of landscape ecological concepts into planning were synthesized after careful reading. To ensure systematic interpretation, all readers applied the protocol to two articles, and we calibrated the assessments and interpretation through detailed discussions (for more detail see Supplementary material 2).
Results
Landscape ecological concepts in current research
Table 2a lists the eight concepts discussed by Hersperger (1994). GIS was also mentioned as a concept but was omitted from our analysis since it has developed into a widely used tool. Over time, many differentiations within the composite concept of Structure, function, change have been developed. The three components of the concept now form the basis of many quantitative landscape assessments, e.g., with landscape metrics (Costanza and Terando 2019), and change (Land change) became a science of its own. Thus, Structure, Function and Change will be treated as separate concepts in the quantitative analysis.
Our analysis of the papers published in the past 5 years identified seven additional concepts (Table 2b). In the following paragraphs, the concepts are described, while the potential of the concepts for linking landscape ecology and planning will be explored in the discussion section.

Table 1 Steps of the planning process for the analysis, derived from Steiner (2008) (e.g., goal establishment; inventory and analysis of biophysical and socioeconomic processes at different scales, regional to local), Steinitz (2012) (e.g., "Is the current study area working well?", the evaluation model; "How should the study area be described?") and Botequillha Leitao and Ahern (2002). The steps of the planning process used in this study are Goal establishment ("What are the problems? What should be achieved?"), Analysis, Alternative options, Preferred plan, Participation and communication, and Monitoring.

Table 2a Early landscape ecological concepts (Hersperger 1994):
Structure, function, change: Scientific framework of landscape ecology based on three characteristics of the landscape system. Structure is the spatial relationship between patches, corridors and the matrix; function is determined by the ecological processes, such as the flow of energy, material, animals and plants across the landscape; change is the product of the interaction of structure and function over time (Forman and Godron 1986).
Stability: (a) Landscapes are considered metastable, a state of being in equilibrium but susceptible to being diverted to another equilibrium; (b) the stochastic view (Forman and Godron 1986; Botkin 1990).
Chaos theory: A way to explain system behavior where, despite rules, systems can be fundamentally unpredictable and behavior is sensitive to initial conditions; it expands the traditional understanding of changes in physical and social systems (Cartwright 1991).
Scale: The concept of scale allows analyses at different levels of a hierarchical system, whereby a landscape might appear heterogeneous at one scale but quite homogeneous at another (Forman 1987; Meetenmeyer and Box 1987).
Hierarchy theory: A framework to analyze systems of a certain type of complexity. A hierarchy-theory approach towards landscape ecology recognizes that landscape ecology extends over many spatial and temporal scales (Allan and Starr 1982; Urban et al. 1987).
General systems theory: Formalizes the way a system, such as a landscape, is perceived. It stresses the hierarchical order of nature as an open system and cross-linkages between various components (Naveh and Lieberman 1984).
Holism: Holistic entities have an existence other than the mere sum of their parts, and reality consists of wholes in a hierarchical structure (Smuts 1926; Zonneveld 1990).
Classification of landscape types: The classification of landscapes is based on a description of landscape attributes, such as structural characteristics or land-use units (Zonneveld 1990).
Landscapes as socio-ecological systems
Socio-ecological systems, also called coupled human-environment (H-E) systems, provide a useful integrated analytical framework to understand the relationships between humans and the environment (Holling 2001; Miyasaka et al. 2017). While heterogeneity, hierarchy, and feedback mechanisms are essential characteristics of socio-ecological systems, different integrated approaches have been developed to understand them, including system dynamic models, spatial optimization models, spatial Bayesian network models, and agent-based models (Liu et al. 2007; Le et al. 2012; Miyasaka et al. 2017).
Landscape resilience
Holling introduced the concept of resilience in ecological systems in 1973, as the persistence of relationships within a system, measuring the ability of these systems to absorb changes (Holling 1973). Specifically, Landscape resilience is the capacity of a landscape/system to maintain landscape processes and ecological, economic, and social functions under changing conditions and under diverse physical and socioeconomic challenges (Beller et al. 2018; Mock and Salvemini 2018). Schippers et al. (2015) suggest that resilient landscapes are determined by landscape diversity and spatial organization, and that greater variation in ecosystem elements provides more ecosystem services and enhances the resilience of the landscape.
Landscape and ecosystem services
The Millennium Ecosystem Assessment (MEA) (2005) popularized the ecosystem services concept in the early 2000s. The mapping and assessment of ecosystem services have since been high on the agenda of many administrations. Like ecosystems, landscapes provide vital services to people (Keller and Backhaus 2020), i.e., the many and varied benefits to humans provided by the natural environment. The ecosystem services concept is far more prevalent in the scientific discourse than the landscape services concept. Some of the ideas that have inspired the development of the landscape services concept have been taken up by the broadening ecosystem services concept, as witnessed by the formulation "ecosystem services in the landscape context" and by the landscape approach. Termorshuizen and Opdam (2009) point out that in the context of landscape and ecosystem service discussions, "landscape" is used for all kinds of areas, whereas "ecosystem" is often associated with protected areas and biodiversity.
Green infrastructure
The concept of Green infrastructure refers to the network of green and blue elements such as remnant native vegetation, parks, private gardens, golf courses, street trees, and engineered options such as green roofs, green walls, biofilters, and rain gardens (Norton et al. 2015). Green infrastructure can promote ecosystem and human health in urban areas (Tzoulas et al. 2007). Unlike other types of public infrastructure such as roads, stormwater systems, and schools, green infrastructure is often considered an amenity rather than a necessity (Benedict et al. 2006). Furthermore, the contribution of green infrastructure to mitigating high temperatures in urban landscapes, and to adapting to climate change more generally, has been widely recognized (Norton et al. 2015).
Multifunctionality
The concept of Multifunctionality highlights that landscapes tend to have multiple outputs and provides perspectives for "delivering joined-up policy where its core property of interactivity can be harnessed in ways that produce qualities valued by people" (Selman 2009). The concept developed from a feature of European agricultural landscapes (Otte et al. 2007) into an interdisciplinary concept which allows for understanding and analyzing landscapes from various perspectives, e.g., social, cultural, ecological, aesthetic (Bolliger et al. 2011). Landscapes serve multiple functions at the same time through (1) the same piece of land serving several uses, (2) an area being made up of many small areas dedicated to specific uses, and (3) interactions of uses (Otte et al. 2007). The concept is in line with the current shift from taming nature to reconnecting with nature, reflected by research directions on human-nature interactions, such as socio-ecological systems and human-wildlife coexistence (König et al. 2020).
Fig. 1 The concepts Change, Scale, Structure, Function, Landscape as human experience, Land use, Landscape and ecosystem services, Green infrastructure and Resilience were mentioned more than 500 times. Left (Table 2a) refers to early concepts; right (Table 2b) to additional concepts. For the full name of concepts, see Table 2.
Land use
Land use can be defined as ''the total of arrangements, activities and inputs undertaken in a certain land cover type to produce, change or maintain it'' (FAO 1997;Verburg et al. 2015). In other words, land use indicates the way geographic space is occupied by society and its activities. Typical land use categories include agriculture, grazing, forestry, transportation, residential, commercial, and recreation. The type of management and the intensity of land use affect stress and potential environmental degradation. The concept allows an integrated focus on structural and functional landscape aspects while addressing human agency.
Landscape as human experience
The concept of Landscape as human experience evolved from early conceptual research on perceptual and psychological processes related to nature, such as the framework by Kaplan (1995) on human-nature relationships and the conceptual model by Gobster et al. (2007) on the relationship between aesthetics and ecology. The concept flourished with the application of new technologies that allowed for quantitative measurements of human experience, such as stress measurement based on salivary cortisol (Ward Thompson et al. 2012). The concept integrates social and cultural processes affecting landscape valuation and includes, among others, aspects of sense of place and soundscapes. Sense of place is particularly used to reflect the way people or communities attribute meaning, value, and significance to landscapes (Soini et al. 2012). The term soundscape is most often used to refer to the acoustic environment as perceived, experienced and/or understood by individuals and communities (Alleta et al. 2016).
Prevalence of landscape ecological concepts
Findings of the keyword search show that four of the early concepts in Table 2a are frequently used in today's publications, namely Structure, Function, Change and Scale (Fig. 1). Concepts that refer to theories are rarely mentioned in our sample, i.e., Hierarchy theory (12 mentions), General system theory (two mentions), and Chaos theory (no mentions). Findings further show that three of the additional concepts in Table 2b are widely used in today's publications: Landscape as human experience, Land use and Landscape and ecosystem services (Fig. 1). They are followed by Green infrastructure and Resilience. Socio-ecological systems and Multifunctionality are rarely mentioned. The numbers per year remained rather stable (Fig. 1). Journals clearly differ in terms of the prevalence of landscape ecological concepts. Regarding early concepts, Change has been the most prominent concept in all four journals, followed by Scale and Structure (Fig. 2a). In Landscape and Urban Planning (LUP) Change is relatively prominent, in Landscape Online (LO) Structure, and in Landscape Ecology (LE) and Current Landscape Ecology Reports (CLER) Scale (Fig. 2a, Table B in Supplementary material 3). The analysis of the additional concepts shows that certain concepts are more prominent in certain journals. For example, papers referring to Landscape resilience are predominantly published in Landscape Ecology (LE), while articles addressing Landscape and ecosystem services are most prominent in the journal Landscape Online (LO) (Fig. 2b, Table C in Supplementary material 3).
The journals Landscape and Urban Planning (LUP) and Landscape Ecology (LE) regularly publish articles that clearly focus on certain concepts, i.e., a concept is used more than 100 times per article (Fig. 3a and b). Articles published in Current Landscape Ecology Reports (CLER) use early concepts more frequently than articles published in any of the other three journals (Fig. 3a). Furthermore, the concept Holism is most often present in papers published by Landscape Online (LO). Interestingly, we found that in the journals Landscape Online (LO) and Landscape and Urban Planning (LUP) the additional concepts are more prevalent than the early concepts, whereas in Current Landscape Ecology Reports (CLER) and Landscape Ecology (LE) we see the inverse pattern (Fig. 3a, b).
Integration of landscape ecological concepts into planning in current research
Empirical papers
Most of the 52 empirical papers in this cohort address urban planning (20 papers) and conservation planning (15), followed by land use planning and landscape planning (both with 8 papers), and landscape restoration (3). Eight papers refer to other types of planning, including strategic environmental assessment and community-based landscape management. Most papers refer to planning at the landscape (28), local (15) and regional level (11).
Out of all concepts, only Structure is prominent throughout the planning process (Fig. 4, Table D in Supplementary material 3). Also present in all steps are Land use and Landscape as human experience. The other concepts were only occasionally present, and Holism and Stability were mentioned only once in connection with a planning step (i.e., grouped in the category Other in Fig. 4). Most of the 52 papers address landscape ecological concepts in the Analysis step, followed by Preferred plan, Participation and communication, Alternative options, and Goal establishment. Very few papers address landscape ecological concepts in Monitoring. Thus, the concepts are often used for the analysis of the study area, with no deep integration into the entire planning process.
Overview papers
In this cohort of 32 papers, eight literature reviews address the integration of landscape ecology into planning. New planning approaches are addressed in reviews on novel ecosystems and socio-ecological resilience by Collier (2015) and on sustainable landscape/landscape sustainability by Zhou et al. (2019). Most reviews focus on the integration of specific aspects into planning, i.e., connectivity (Godfree et al. 2017; Costanza and Terando 2019), human perception (Dorning et al. 2017), and urban biodiversity (Norton et al. 2016).
Several papers evaluate plans or projects that have been based on landscape ecological approaches. The focus is on landscape patterns (e.g., Meyer et al. 2015), landscape and ecosystem services (Spyra et al. 2019;van der Sluis et al. 2019), integrated landscape initiatives (Zanzanaini et al. 2017) and urban tree initiatives (Foo and Bebbington 2018). One paper directly addresses the evidence and opportunity for integrating landscape ecology into natural resource planning in public lands of the USA by evaluating the implementation of two plans (Trammell et al. 2018).
Most prominent among the overview contributions are essays and conceptual frameworks. They focus on the potential of planning and management, and the role of planners, for addressing a range of issues. They relate to landscape and ecosystem services (Musacchio 2018), socio-ecological systems (Fischer 2018), conservation (Gagne et al. 2015), integrated landscape management (Mann et al. 2018), and nature-based solutions (Albert et al. 2019). Two papers of a special issue addressed ecological wisdom (Young 2016; Wang et al. 2016). Most papers, however, provide frameworks and discussions for improving certain aspects of landscape planning and governance: they provide, for example, frameworks for prioritizing green infrastructure (Norton et al. 2015), restoration strategies (Hessburg 2015), and approaches to small-scale heterogeneity in urban environments (Zhou et al. 2017). Several contributions focus on the planning process for landscape and ecosystem services (e.g., Babí Almenar et al. 2018; Vialatte et al. 2019).
Discussion
We first reflect on the findings regarding landscape ecological concepts and the frequency with which they are mentioned (research question 1) and then turn to how landscape ecological concepts have been integrated into the six main steps of the planning process (research question 2). We then explore how the additional concepts can support the link between landscape ecology and planning. We also point out limitations of our study and outline potential further research.
Landscape ecological concepts and their frequency
The most often mentioned concepts include early concepts such as Change, Scale, Structure and Function, as well as newer concepts such as Landscape as human experience, Land use and Landscape and ecosystem services. This implies that while the science of landscape ecology is evolving, it is not leaving its roots. Indeed, the distinction between early and additional concepts allows an interpretation of developments over time. Early concepts, particularly Structure, Function, Change and Scale, are useful for examining and evaluating landscape patterns and processes and have been used heavily in recent years. Newer concepts emphasize more strongly the use of landscapes for human benefits. This is especially true for concepts such as Landscape as human experience, Land use, and Landscape and ecosystem services. The early concepts focusing on specific systems behavior, i.e., Chaos theory, Hierarchy theory and General system theory, have lost importance and are likely integrated into the new concept Landscapes as socio-ecological systems. This change could be interpreted as a transition towards a more applied discipline.
We found additional concepts to be more prevalent than the early concepts in the journals LUP and LO, while the opposite patterns were found in journals CLER and LE. While the differences are rather small, they are in line with the differences in the aims and scopes of the respective journals (see Supplementary material 1). Most importantly, LE and CLER explicitly focus on landscape structure and function or change, while LO and LUP focus on landscapes as human experience.
Landscape ecological concepts in the steps of the planning process
Surprisingly, out of almost two thousand publications in the four key journals in landscape ecology and landscape planning, only a small number was found promising for analyzing the integration of landscape ecological concepts into landscape planning (52 empirical and 32 overview papers). Many more publications, of course, stated in general terms that their findings may improve planning. These papers provide, for example, novel insights into human-environment interactions and propose new methods to describe and assess landscapes. Many also address landscape ecological concepts. However, a clear link from the concepts to planning, let alone to specific planning steps, remains the exception.
The inventory and analysis of the biophysical and socioeconomic landscape patterns and processes provide an understanding of how the landscape works (Steiner 2008; Steinitz 2012). This research lends itself to scientific approaches. It is therefore not surprising that we found that most papers addressed landscape ecological concepts in the Analysis step. In contrast, few papers clearly addressed the Preferred plan step, and even when they did, they recommended very generic actions. Notable exceptions include, for example, the design of greenbelts (Siedentop et al. 2016) and the proposal of patches for restoration and protection along preferred routes of movement to build ecological corridors (Babí Almenar et al. 2019). The limited number of papers contributing to the Monitoring step may be because the field of planning evaluation is still evolving.
In our sample, only a few papers connect landscape ecological concepts with all steps of the planning process. We interpret this finding in two ways. First, this might be a consequence of the publication tradition: word limits for journal articles make it difficult to address all steps in sufficient detail. Secondly, and perhaps more importantly, the focus on only one or a few planning steps probably reflects a disciplinary division. Landscape ecology scientists might have a limited understanding of the planning process. As the Analysis step fits their experience best, the link to other steps is made at a more general level.
To overcome the limited integration of landscape ecology concepts in all steps of the planning process, more dialogues between the disciplines are needed. For example, dialogue could be established through conference co-production with landscape ecologists and planners. For the research community, making use of all the publication options (e.g., supplementary material, data in brief, interactive data visualizations) could be a way of describing research on all steps of the planning process in a rigorous manner.
How landscape ecological concepts can provide a link to planning
Due to its characteristics, each landscape ecological concept offers unique opportunities to link landscape ecological knowledge with planning. The potential use of the early concepts in planning was already explored by Hersperger (1994). Since then, Structure, Function and Change have become key concepts in landscape ecology, and systematic landscape analysis guided by these concepts supports the planning and design of patterns, processes and human-environment interactions. Landscape Classification often forms the basis for landscape analysis of this kind. The concept of Scale supports analysis in hierarchical systems and is therefore ideally suited to support planning at multiple administrative scales, from neighborhoods to nations. The public often perceives landscapes as holistic entities, and therefore Holism can be an important aspect in participatory landscape processes. Early theoretical concepts such as Systems theory, Hierarchy and Stability seem to offer less direct links to today's landscape planning. Below, the possible links of the additional concepts to planning are explained in more detail.
Landscapes as social ecological systems
An understanding of landscapes as social ecological systems can facilitate the development of integrated models that conceptualize landscapes as nested sets of co-evolving social and natural subsystems connected through feedbacks, time lags, and cross-scale interactions. These models can be used to assess the effects of policies on dynamically linked social and ecological components of the landscape system (Miyasaka et al. 2017). Such models may lead to holistic approaches to manage forest landscapes (Fischer 2018) or to resolve land use conflicts (Karimi and Hockings 2018).
Landscape resilience
To efficiently plan intact natural systems as well as heavily modified landscapes, it is essential to understand how landscapes might react to impacts and challenges. Planning activities based on the Landscape resilience concept can help to improve the chances of rapid and effective response to a range of impacts, including extreme events and catastrophes (Ahern 2013;Beller et al. 2018). The Landscape resilience concept, as well as the Green infrastructure concept, are thus suited to support planning for climate change mitigation and adaptation.
Landscape and ecosystem services
A structured assessment of Landscape and ecosystem services supports the design of broadly accepted plans that ensure the optimal provision of multiple services to humans. Furthermore, landscape and ecosystem services have been proposed as a unifying common ground where scientists from various disciplines can cooperate in producing a common knowledge base that can be integrated into multifunctional, actor-led landscape development (Termorshuizen and Opdam 2009).
Green infrastructure
The concept of Green infrastructure supports the integration of multifunctionality and connectivity into planning. Conceived as a network with patches and corridors, this landscape ecological concept is easily integrated into landscape and spatial planning. Recent research on how users perceive green spaces and which green spaces users prefer has the potential to improve planning for quality of life and health, especially for urban residents. The concept of Green infrastructure is well suited to guide the development of planning options and, specifically, to support planning for climate change mitigation and adaptation.
Multifunctionality
For planning and policy, multifunctionality paves the way for the integration of ecological concerns into multiple policy domains, such as climate change through green infrastructure or agricultural policy, illustrated by the Common Agricultural Policy in Europe and the Land Stewardship project in Australia (Cocklin et al. 2006). In urban settings, Multifunctionality can be used to plan the urban fringe or shift away from mono-functional uses. Its delivery entails integrated planning approaches such as participatory planning (Selman 2009).
Land use
The concept is at the heart of land-use and landscape planning. A landscape ecological perspective on land use is expected to provide detailed knowledge on land-use systems and land-use intensity as well as on the management options for sustainable land use. Furthermore, a focus on land use stresses how global environmental change results in severe impacts on biodiversity, ecosystem integrity, and landscape and ecosystem services (Verburg et al. 2015).
Landscape as human experience
Participatory landscape planning is closely linked with participants' landscape experience. Thus, assessments of human landscape experience and landscape perception greatly support landscape planning and design (Downes et al. 2015). The concept Landscape as human experience is well suited to represent the heterogeneous expectations towards landscape planning. Hersperger (1994) suggested that there were only a few applications of landscape ecological concepts in the planning of urbanized areas. However, in our sample of recently published research, we found many papers that integrate landscape ecological concepts into urban planning, showing that the number of applications has increased and diversified over time. These studies particularly rely on concepts such as Landscape and ecosystem services, Green infrastructure, and Landscape as human experience, and address planning steps such as analysis, participation and communication. The same publication furthermore suggested that landscape ecological planning in rural and natural areas mainly focused on conservation planning. We observe that conservation planning continues to be a frequent topic, and we came across many papers that address landscape structure as an important concept for conservation planning, specifically focusing on enhancing landscape connectivity in protected areas.
Limitations of the analysis
Our findings show that there is limited integration of landscape ecology and planning. A certain bias in the findings could be due to the data in our sample. We focused on the period 2015-2019 in four key journals in the field of landscape ecology and landscape planning to conduct our analysis. While these four journals provide insights into state-of-the-art research in the field, with a broad range of cultural and language regions and easy accessibility, applied research might be underrepresented in our sample. Further research may consider including other journals (e.g., on landscape architecture or planning practice) or conducting an analysis of landscape projects. Furthermore, the assessment of the integration of the concepts into planning showed that articles often address this aspect in a general manner. As we collected information on explicit integration into the planning steps, a less conservative approach than ours could lead to different results. Regular planning and project evaluation studies could be useful to observe how effectively landscape ecological concepts have been integrated into planning (see e.g., Hersperger et al. 2020).
Future research
To overcome the weak integration of landscape ecological concepts into the planning process shown in this research, we propose the following measures. More funding could be provided for research on translating disciplinary landscape ecological research into concepts that can be used in planning. Setting up landscape monitoring systems could encourage both planners and researchers to develop the theoretical aspects related to the Monitoring step. Case studies of landscape ecological planning and the development of tools to evaluate and monitor planning activities would be a good starting point for promoting this dialogue between theory and practice. Journals could open up to publishing more articles on science-practice interactions. For example, formats such as notes or policy briefs could be a way to encourage the involvement of landscape ecology scientists in landscape planning. Furthermore, journals could be more rigorous with respect to the application of research in planning. Sentences such as "findings can be useful for practice", which we often encountered in our review, are too general to provide a thorough background for planning practice.
Conclusions
As an interdisciplinary scientific field, landscape ecology has great potential to inform planning through the key concepts that have been used in the development of the field. Hersperger's article in 1994 expressed the hope of using the then-developing theories and concepts of landscape ecology to change the traditional human-centered environmental planning approach towards a true synthesis of people and nature. Twenty-six years later, responding to the call of that early article (Hersperger 1994), this paper conducted a critical review of the recent development of landscape ecological concepts in planning. It set out to identify the major landscape ecological concepts that have been used frequently by the scientific community in recent years, to explore the causes of their wide usage, and to understand how they may be integrated into different steps of the planning process. To identify the key concepts, we analyzed a total of 1918 empirical and overview papers that were published in four key academic journals in the field of landscape ecology and landscape planning from 2015 to 2019. To examine the integration of key concepts into planning, we further identified 84 papers from our 1918-paper sample and used them to evaluate how each concept has been integrated into each planning step. Our main findings are the following.
First, while some of the concepts that emerged in the early 1990s have remained popular, additional concepts have come into frequent use in recent years. Out of the eight promising concepts at the beginning of the 1990s, four have remained pervasive in recent publications, namely Structure, Function, Change and Scale. Meanwhile, three additional concepts, i.e., Landscape as human experience, Land use and Landscape and ecosystem services, are widely used in today's publications, followed by Green infrastructure and Landscape resilience. While the leading early concepts have been used to examine and evaluate landscape patterns and processes, newer concepts place more emphasis on the use of landscapes for human benefits.
Second, our analysis shows that landscape ecological concepts have not achieved deep integration into the planning process. Out of six planning steps, landscape ecological concepts have most often been used in the Analysis step and rarely in Goal establishment and Monitoring. Out of all 13 major concepts, Structure is mentioned the most as part of the planning process, followed by Land use and Landscape as human experience.
The limited number of publications connecting landscape ecological concepts with all steps of the planning process reflects not only a disciplinary division between the fields of landscape ecology and planning but also current limitations of the publication traditions of academic journals. More dialogue between the disciplines should be encouraged and more publication options explored. We emphasized that landscape ecological concepts have great potential to support the planning process, as illustrated by a variety of examples found in the literature. Future studies may include planning-practice-oriented journals and landscape projects to more broadly assess the integration of concepts into all key steps of the planning process.
Acknowledgement We thank Simona Bacău for her support with the data analysis and two anonymous reviewers for their insightful comments and suggestions. BPD acknowledges the support of her Swiss Government Excellence Scholarship, and AMH acknowledges the support of part of this research through the Swiss National Science Foundation Consolidator Grant BSCGIO 157789.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Zonneveld IS (1990) Scope and concepts of landscape ecology as an emerging science. In: Zonneveld IS, Forman RTT (eds) Changing landscapes: an ecological perspective. Springer, New York, pp 3-20 | 2021-05-10T00:04:25.724Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "65c4760d536965f958e85d7379db36d82b4f6e85",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10980-021-01193-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3ee1a1edb06e907f2c3ba6668a44bc410014230",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
251487810 | pes2o/s2orc | v3-fos-license | Metal(loid)s in Common Medicinal Plants in a Uranium Mining-Impacted Area in Northwestern New Mexico, USA
The objective of this study was to determine uranium (U) and other metal(loid) concentrations (As, Cd, Cs, Pb, Mo, Se, Th, and V) in eight species of plants that are commonly used for medicinal purposes on Diné (Navajo) lands in northwestern New Mexico. The study setting was a prime target for U mining, where more than 500 unreclaimed abandoned U mines and structures remain. The plants were located within 3.2 km of abandoned U mines and structures. Plant biota samples (N = 32) and corresponding soil sources were collected. The samples were analyzed using Inductively Coupled Plasma–Mass Spectrometry. In general, the study findings showed that metal(loid)s were concentrated greatest in soil > root > aboveground plant parts. Several medicinal plant samples were found to exceed the World Health Organization Raw Medicinal Plant Permissible Level for As and Cd; however, using the calculated human intake data, Reference Dietary Intakes, Recommended Dietary Allowances, and tolerable Upper Limits, the levels were not exceeded for those with established food intake or ingestion guidelines. There does not appear to be a dietary risk of metal(loid) ingestion based solely on the eight medicinal plants examined. Food intake recommendations informed by research are needed for those who may be more sensitive to metal(loid) exposure. Further research is needed to identify research gaps, and continued surveillance and monitoring are recommended for mining-impacted communities.
Introduction
Diverse populations are disproportionately exposed to toxic materials by virtue of proximity [1]. American Indian (AI) communities are at risk for worsened health burdens, which may be compounded by environmental exposures [2]. One-half of the uranium (U) in the United States (US) is found on AI lands, where mining, milling, processing, and the storage of remaining waste commonly occur [3]. In the Western US, more than 160,000 abandoned mines exist on or adjacent to AI homelands [4]. The study setting was a prime target for U mining for military purposes; northwestern New Mexico (NM) alone contributed 40% of US U production [5].
Diné (Navajo) lands were one of the prime targets for mining, contributing thirteen million tons of U ore for military use from 1945 to 1988 [6] and leaving more than 550 abandoned and partially unreclaimed U mines, mills, and waste piles [7].
The extent of the health impacts on the Diné community exposed to these sites is a public health concern. Uranium enters the body primarily by inhalation or ingestion (via contaminated water or food) and is deposited in tissues, primarily the kidneys and bones [8]. Studies of high U exposure in mammals have shown chemical toxicity to the kidneys [9]. Uranium and metal(loid)s were examined in this study because U may co-occur environmentally with other metal(loid)s and/or may be associated with them by way of its decay series. Addressing U and its associated co-contaminant exposures is a challenge for rural communities experiencing a myriad of socioeconomic barriers [10]. Arsenic (As) is a teratogen [11]. Cadmium (Cd) can accumulate in organs and is associated with liver and renal problems [12,13]. Long-term neurodevelopmental, renal, and reproductive problems are associated with lead (Pb) [14]. Toxicosis can occur with high doses of selenium (Se) [15]; semen quality and testosterone have been shown to have an inverse association with molybdenum (Mo) [16,17], which also has a negative effect on renal function [18]. Carcinogenicity has been reported for several metal(loid)s, including As [19-21], Cd [19], and Pb [22], whereas others have permanent and/or long-term health sequelae (Mo, thorium (Th), vanadium (V), and cesium (Cs)) [23].
The study of interactions between humans and plants is known as ethnobotany [24]; that is, how people of a particular region or culture utilize local (indigenous) plants. Many modern medicines are a direct result of traditional ethnobotanical knowledge. Globally, up to 80% of the population relies on traditional medicine for their healthcare needs [14]. Yet, traditional medicines are poorly regulated and monitored in many countries, including the US. Medicinal plants are assumed to be safe because of their ease of accessibility and availability; they are commonly self-administered or self-prescribed without medical consultation.
In the US Southwest, there are more than 3000 known plant species, of which the Diné were said to utilize about 450 species for medicinal purposes [25]. In AI communities, local plants are relied upon for their medicinal or healing properties, are consumed as foods/additives, or are relied upon for innumerable cultural purposes. In this community, the primary categories of plant use are medicine, food/beverage, dye and paint making, ceremonial objects (baskets, paints), and other uses (such as construction, fuel, and implements such as textiles) [25]. Cultural protocol passed down through generations dictates that all parts of the plant should be used without waste, that the plant be selected using strict environmentally sustainable practices (i.e., the strongest and most robust plants are left unharvested to perpetuate the species), and that the harvester has requested permission to use the plant and has practiced thankfulness and respectfulness for its use [26,27].
Medicinal plant pharmacological indications, routes, and dosages vary. Administration may include drinking the plant as a tea or as a concentrated decoction or in combination with other ingredients (a concoction); it may be applied directly to the skin (as a poultice or salve) and may be inhaled via incense or steam (sweat bath). Also, plant roots may be directly chewed and ingested (e.g., Bouteloua gracilis (Willdenow ex Kunth) Lagasca ex Griffiths). Human exposure can occur through contact with local plant branches, stems, and roots that can be used for cooking and heating (e.g., Juniperus monosperma Engelmann), serve as construction materials (e.g., J. monosperma), and be used in the creation of numerous cultural implements (baskets, cordage). This could include items integral to traditional healing ceremonies. Plants and their contaminants may be ingested indirectly by humans when locally raised meat (via forage, water, and soil ingestion) is consumed. Phytotherapy self-administration/prescription is commonplace. However, laypersons, herbalists, or traditional practitioners or specialists may provide directions or prescriptions for various indications. A summary of the information on the eight study plant species names/taxonomies, descriptions, and ethnobotanical indications can be found in Table 1.
Table 1. Plant names (scientific, common, and Diné names), biota description/distribution, and ethnobotanical indications.
This plant was a food source for the early Diné and was cooked as mush, bread dumplings, and cakes [30]. It also served as a material for clothing and bedding [28].
In the present day, it is an important feed source for animals and livestock [29].
This plant is a ceremonial medicine a and may often be combined with other mixtures of plants for therapy [28].
Artemisia tridentate Nuttall. Big sagebrush. Diné name: "Ts'ah" ("the sagebrush") [28] (p. 106). A gray-green-foliaged aromatic shrub that grows to heights of about 1.8 m. It has a woody stalk and flowers in late August through early October. It grows at elevations of between 1500 and 2000 m [28]. This shrub has a vast distribution across British Columbia, Baja California, and the eastern Dakotas [32].
When combined with other sagebrush species, it is used to treat headaches. As a tea, it is prescribed for postpartum hemorrhage/pain [33], indigestion, and constipation. The stems and leaves are boiled to treat fever, colds, and tuberculosis. It is used for fasting and as a poultice for swelling and foot corns. It is a vital component of several Diné ceremonies a [28,33]. It serves as a fire-starting implement (a ceremonial fire drill) and is a common sweat bath medicine, food, and beverage [25]. The sagebrush leaves are used to purify (smudge) [25]. It is a universal tonic, is used for swelling and snakebite, and serves as a dye for implements [33]. It is considered a "life medicine" and has special healing powers [25] (p. 42). A poultice can be applied to animal wounds [28].
Bouteloua gracilis. Blue grama. Diné name: "Tł'oh nástasí" ("bent grass") [28] (p. 45). This is a perennial grass that rarely exceeds 61 cm in height and grows in areas up to 2500 m [28]. It has a comb-like spike, grows from June to November, and flowers from July to October. It is the most prolific grass on Diné lands. It is found in the Great Plains and the southwestern US, Mexico, and the Canadian Provinces [34].
This plant can be applied to heal cuts on humans and animals, for example by placing a chewed root directly on the wound. As a tea concoction, it is used for postpartum pain [28].
The plant is used in several cultural ceremonies a [28]. It is an important forage for local animals and livestock [34].
Juniperus monosperma Engelmann. One-seed juniper. This medicinal plant is an emetic, used to treat headaches, influenza, abdominal pain, and nausea, and serves as an anthelmintic [25]. It is also used to treat acne, arachnid bites, and postpartum pain [28]. For ceremonies a, it is used as an emetic and as a healing implement [28]. Juniper berries can be eaten in the fall and serve as culinary ash (providing sources of iron, zinc, calcium, and potassium) in blue corn dishes. It is a valuable fuel source for heating the home and cooking, and its branches and twigs serve as construction materials. Juniper berry tea and twig tea are medicinal. It is used to dye wool and other cultural implements [25].
Pascopyrum smithii (Rydberg) Löve. Western wheatgrass. Diné name: "Tl'oh nitl'izí" ("brittle grass") [28] (p. 132). This is a blue-green or pale gray bunch grass that has underground stems and long-living, extensive, strong root systems. It can grow up to 61 cm and grows in patches at elevations of 1200-2500 m [28]. It is found in the soils of the US Southwest, the intermountain areas of the western US, and the Great Plains [36].
It is used as incense for various ceremonies a [28].
Pleuraphis jamesii Torrey. Galleta. Diné name: "Tł'oh łíchí'í" ("red grass") [28] (p. 39). This is a perennial grass with rhizomes that grows in patches to a height of less than 61 cm. It is the second most abundant grass on Diné lands and grows in areas greater than 2000 m [28]. This plant is widely distributed in southern California, Colorado, the desert mountains of Arizona, Nevada, New Mexico, Utah, west Texas, and southern Wyoming [37].
Tea is boiled and given to infants so they "will be strong adults" [28] (p. 39). It is a dietary supplement for children [37]. It is an important source of feed for local animals/livestock [37].
Sporobolus cryptandrus (Torrey) A. Gray. Sand dropseed. Diné name: "Tl'oh-stoz-ee" ("slender grass") [38] (p. 777). This grass matures by late May or June [25]. The plant has narrow, tightly rolled leaves with a lacy appearance [25]. It is a native plant found throughout North America in the rangelands of the US Southwest and parts of Idaho and Oregon [39].
A food source for local Native peoples in the Four Corners region, eaten as a "hot grain cereal" or bread [25] (p. 195), and a medicine [30,40]. It is used for ceremonies a [41] (p. 17).
Note: a Refer to the listed citation source(s) for specific ceremony names.
The purpose of this study was to determine whether eight abundant and readily accessible plant species, a locally harvested resource on Diné lands in northwestern NM, were contaminated with U and other associated metal(loid)s. Food-chain contamination in locally harvested food in the Diné community in NM was reported as a plausible exposure pathway [42]; harvesting and gathering were found to be common practices [43]. The current study was undertaken to characterize the use of eight common local medicinal plants and to contribute novel metal(loid) uptake data. The objective was to compare plant-part concentrations with the World Health Organization (WHO) Raw Medicinal Plant Permissible Level (RMPPL) guidelines, to calculate an estimated ingestion exposure, and to compare it with established food intake guidelines, namely the Provisional Tolerable Weekly Intake (PTWI), Reference Dietary Intake (RDI), Recommended Dietary Allowance (RDA), and tolerable Upper Limit (UL), for eight commonly used medicinal plants in a community impacted by the U mining legacy.
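To make the intake comparison concrete, the sketch below applies the standard estimated weekly intake formula (EWI = plant concentration × mass of plant material ingested per week / body weight) and checks the result against a guideline value. All numeric values are hypothetical placeholders, not the study's measurements or the official PTWI/RDA figures.

```python
def estimated_weekly_intake(conc_mg_per_kg: float,
                            weekly_intake_kg: float,
                            body_weight_kg: float) -> float:
    """EWI in mg per kg body weight per week: C_plant * weekly mass ingested / body weight."""
    return conc_mg_per_kg * weekly_intake_kg / body_weight_kg

# Hypothetical example values -- not the study's measurements or official limits.
ewi = estimated_weekly_intake(conc_mg_per_kg=0.30,     # metal(loid) level in dried plant material
                              weekly_intake_kg=0.010,  # ~10 g of plant material per week
                              body_weight_kg=70.0)
guideline = 0.007  # placeholder PTWI-style value, mg/kg bw/week
print(f"EWI = {ewi:.6f} mg/kg bw/week; exceeds guideline: {ewi > guideline}")
```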
Data from the Human Harvester Questionnaire
The medicinal plant harvesters (n = 6) were evenly divided between genders, and the mean (M ± Standard Deviation (SD)) age was 57.25 ± 1.84 years (range 53-62). On average, participants ingested at least one herbal medicine 1.17 times per week, with a mean duration of consumption of 56.5 ± 3.32 years. Per participant reports, plants were gathered in the wild and did not benefit from artificial watering, soil amendments, or pesticide application. All study participants reported sharing the herbs free of charge with community members on and off Diné tribal lands; none reported selling them. The majority of participants self-prescribed and self-administered the medicinal plants and reported not having consulted a traditional practitioner about their use. Knowledge of plant harvesting, preparation, storage, and consumption was passed down from previous generations via elders, who were often laypersons, herbalists, or other traditional practitioners or specialists.
A study by Tsuji et al. [27] found food-sharing behavior to be common in a North American Indigenous community impacted by mining in a traditional-use territory in Canada. Such food-sharing behaviors were related to the harvesting of subsistence-type foods, had a direct exposure impact beyond the mining communities, and were considered important for assessing and monitoring impacted communities [44], with special interest in vulnerable groups (children and older tribal members) [45]. Using Geographic Information System (GIS) mapping, that study [27] demonstrated that longstanding harvesting areas overlapped significantly with contaminated areas, and several important potential routes of exposure were identified and characterized (e.g., ingestion of contaminated foods and drinking water). Using GIS, the current study demonstrated an overlap of medicinal plant gathering and harvesting areas with mining sites and features; overlap was commonplace, and samples fell within a 3.2 km buffer zone of high-risk areas. Proximity to U mine and milling sites (median 3.54 km, IQR 1.81-8.0) was found to be a potential contributor to cardiovascular disease [46] in a local GIS study.
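To make the buffer criterion concrete, the following minimal sketch flags whether a harvesting site falls within the 3.2 km buffer of a mine feature using great-circle distances; the coordinates and function names are hypothetical, and the study itself used Trimble GPS data with GIS software rather than this calculation.

```python
# Hypothetical sketch: test whether a harvest site lies within a 3.2 km
# buffer of any mine feature using the haversine (great-circle) distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two WGS84 points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def within_buffer(site, mine_features, radius_km=3.2):
    # True if the site is within radius_km of at least one mine feature.
    return any(haversine_km(*site, *m) <= radius_km for m in mine_features)

mines = [(35.53, -108.35), (35.58, -108.41)]   # hypothetical mine portals
print(within_buffer((35.55, -108.37), mines))  # -> True
```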
Medicinal Plant Parts
Twenty-seven percent of the medicinal plant samples in the study areas were B. gracilis, with 12% each of A. hymenoides, A. purpurea, P. smithii, P. jamesii, and S. cryptandrus, and 6% each of J. monosperma and A. tridentata. The availability and distribution of the sampled plants were representative of the local flora reported in the literature [29,31,32,34-37,39]. The majority of the plants had greater concentrations in their aboveground parts than in their roots. The metal(loid)s that reached statistical significance (p < 0.05) were Cd, Se, Th, and U (Table 2). The largest concentration differences between aboveground parts and roots were found for Se in A. purpurea (3.50 mg/kg vs. 2.31 mg/kg) and P. jamesii (3.69 mg/kg vs. 2.41 mg/kg). Others that differed by more than 1 mg/kg were A. tridentata (2.67 mg/kg vs. 1.55 mg/kg) and S. cryptandrus (3.36 mg/kg vs. 2.28 mg/kg). In general, comparable (or lower) plant metal(loid) concentrations were found relative to herbal plant levels reported for the study area [42,48] and in international studies [49-52], except that higher concentrations were found in one Th plant study [52] (Table 3). Forage grass U concentrations have been reported to range from 0.5 to 7.7 mg/kg (root M = 5.0 mg/kg and grass blades 2.4 mg/kg) [42] (Table 3). The plant species reported by local and international studies differed from the plants reported in this study. Shi et al. [53] reported that various plants are prone to concentrating contaminants in their main roots, which seem to function as a buffer for the aboveground parts of the plant. Similarly, Anke et al. and Soudek et al. found greater metal(loid) concentrations in plant roots than in aboveground portions [54,55]; this was particularly the case for U [55]. The uptake of metal(loid)s appeared to differ between plant species [49-52]. Table 3. Similar plant and soil studies examining metal(loid) concentrations. Metal(loid) concentrations are reported as mg/kg from high-impact areas unless otherwise specified.
Soil
In most instances, the study findings showed that metal(loid)s were most concentrated in soil > root > aboveground plant parts (Table 2). Vanadium was the only metal that exceeded 15 mg/kg. Pb, Th, and As fell between 10 and 15 mg/kg, while Se, Cs, Mo, U, and Cd were below 5 mg/kg. The mean soil pH was weakly acidic to neutral in reaction (6.91 ± 0.97). Statistically significant differences were found when comparing soil to aboveground plant parts (soil > plants): V (p < 0.001), As (p < 0.05), Cs (p < 0.05), Pb (p < 0.05), Mo (p < 0.05), Th (p < 0.05), and U (p < 0.05). Soil concentrations were also greater than plant-root concentrations for all sampled plants: As, Cs, Pb, Mo, and Th (p < 0.001), V (p < 0.01), and U, Se, and Cd (p < 0.05). These findings were similar to local and international studies examining different species of medicinal plants for metal(loid) content (As, Cd, Cs, Pb, Mo, Se, Th, U, and V [43,48]; Cd and Pb [50]) (Table 3). In a regional tea soil study [43,48], there were comparable results for As, Cd, Cs, and V but greater concentrations of Pb, Se, and U, and smaller concentrations of Mo and Th (Table 3). Regional plant and soil studies were also conducted for a different species of herbal plant (T. megapotamicum): a local study found comparable Se concentrations in high-impact soil areas [56], but greater U soil concentrations were found in non-control areas [42,57].
The soil pH was comparable to that in other locally harvested plant and soil studies, which ranged from 6.3 to 6.5 (herbal tea and squash studies) [43,48]. More acidic soils have been demonstrated to increase the transfer and uptake of various metals such as Cd [58] and were thought to increase the likelihood of co-occurrence with other metal(loid)s; this also appears to depend on the physicochemical make-up of the soil and the plant-species-specific uptake of metals [55,59,60]. It was beyond the scope of this study to describe all variables associated with the uptake of metal(loid)s in herbs from the soil.
For the herb-harvesting activities and consumption reported in this study, the main source of metal(loid) exposure appears to be soil. In general, the current study demonstrated that soil contained the greatest amounts of metal(loid)s compared to the plant-part samples (Table 2), which is comparable to local tea [43], vegetation [42], and crop studies [48].
WHO RMPPL
The aboveground plant parts of P. jamesii (M = 0.87 ± 1.42 mg/kg, Table 2) exceeded the WHO RMPPL for Cd of 0.3 mg/kg by 2.9 times. Five aboveground plant parts (A. hymenoides: M = 1.31 ± 0.19 mg/kg; A. purpurea: M = 1.22 ± 0.32 mg/kg; B. gracilis: M = 1.08 ± 0.47 mg/kg; P. smithii: M = 1.19 ± 0.41 mg/kg; and P. jamesii: M = 1.40 ± 0.28 mg/kg) exceeded the WHO RMPPL for As of 1 mg/kg [47]. Of all the plant species sampled, study participants reported consuming B. gracilis root (M = 1.16 ± 0.41 mg/kg) for medicinal purposes; this was found to exceed the As WHO RMPPL by more than 3.5 times. There were no exceedances of the WHO Pb level (10 mg/kg) for any of the eight plant species [47].
The WHO RMPPLs were put in place to evaluate the presence of metals in herbal tea formulations and tinctures [47]. There are no permissible levels for Cs, Mo, Se, Th, U, or V.
A local herbal plant study found that the WHO RMPPL for Cd was exceeded in a popular species of tea, T. megapotamicum (M = 0.35 ± 0.31 mg/kg), with higher concentrations in high-vehicular-traffic areas (M = 0.68 ± 0.11 mg/kg) than in low-traffic areas (M = 0.10 ± 0.06 mg/kg; p < 0.001) [43]. International medicinal plant studies did not find PTWI exceedances in other species of plants [61,62].
Human Intake Calculations for As, Cd, and Pb
The weekly intake calculations across the plants ranged from 0.29 to 0.82 µg/kg for As, 0.02 to 0.51 µg/kg for Cd, and 0.29 to 1.55 µg/kg for Pb (Table 4). Collectively, the PTWI percentages were low, at or below 7.3% (range 0.29-7.3%) of the weekly intake limit for all plants examined.
The PTWI limits are 15 µg/kg body weight (BW) for As, 7 µg/kg BW for Cd, and 25 µg/kg BW for Pb [63,64]. There are currently no PTWI guidelines for Cs, Mo, Se, Th, U, or V. All metal(loid) PTWI levels were below the level of concern for all plants examined. The PTWIs reported here were generally lower than those reported for squash and herbal tea plants [43,48] in a comparable regional study.
Human Intake Calculations for Mo, Se, and V
The daily intake calculations ranged from 0.28 to 0.91 µg for Mo, 0.92 to 2.01 µg for Se, and 0.22 to 4.86 µg for V (Table 5). The RDA and RDI percentages all fell below 3.7% for each plant studied. The UL percentages were considerably lower and did not exceed 0.5% for any of the medicinal plants sampled.
For Mo, the RDA is 45 µg/day with a tolerable Upper Limit (UL) of 2000 µg/day [65]. The RDI for Se for adults is 55 µg/day with a tolerable UL of 400 µg/day [66]. The UL for V is 1800 µg/day, but there are no RDA or RDI guidelines for it [65]. There are no set RDIs/RDAs for As, Cs, Pb, Th, or U, and no UL guidelines for As, Cd, Pb, Th, or U. In a local study area report, the RDA, RDI, and UL percentages were lower than those reported for squash and herbal tea biota [43,48]. The calculated RDI/RDA percentages for Mo, Se, and V were small; however, they may not fully reflect the overall diet, as this study only examined a small portion of total food intake. It is likely that the Mo, Se, and V RDIs/RDAs were met through the consumption of other foods in the regular diet. For this cohort, supplemental Se and Mo may be needed (if not met by the regular diet) and are available in foods such as meat, legumes, and grains (Se) and nuts (Mo) [65]. The advice of a dietitian and healthcare provider is recommended for any dietary changes in similar settings.
Human Implications for Intake Calculations
The intake estimates demonstrate that the consumption of each herbal medicine individually may not be of concern in the current cohort at an intake of 1.17 times per week. In direct comparison to the WHO RMPPL, several plant species' concentrations exceeded the permissible levels for As and Cd. However, when the calculation incorporated a reference body weight (60 kg) and the cohort's weekly intake (1.17 times a week), the PTWIs (As, Cd, and Pb), RDAs/RDIs (Mo and Se), and ULs (Mo, Se, and V) were not exceeded for any of the eight species of medicinal plants. The former guidelines are based on the Acceptable Daily Intake, whereas the latter are more appropriate for long-term or chronic exposure to metal(loid)s [63]. More recent WHO recommendations [67] support the use of the PTWI for accurately measuring metal(loid) intake exposure from medicinal plant material. In this study cohort, participants reported extensive exposure through medicinal plant harvesting and consumption (56.5 ± 3.32 years) as well as participation in other related outdoor harvesting activities, which supports the use of the PTWI guidelines as a measure of chronic or long-term exposure. For this study, we only reported individual plant intake estimates for weekly consumption and examined only a portion of the overall dietary intake. In some instances, guideline exceedances were possible if several medicinal plants were used in mixtures or consumed more frequently. Further, if study participants also consumed other locally raised and harvested foods (including local water), the combination might exceed the estimates reported here. For a more accurate intake estimate, collective food intake assessments that examine all aspects of one's dietary intake are recommended. It was beyond the scope of this study to report estimates for all conceivable mixtures of phytotherapies or to consider every route of administration. In most study scenarios, medicinal herbs were typically consumed for short periods or were reserved for special, albeit infrequent, curative ceremonies, and their overconsumption was uncommon. Lastly, examining metal(loid) uptake and calculated intake from other medicinal plants not examined in this study is warranted.
This population group has disproportionately high rates of hypertension, diabetes, cancer, cardiovascular disease [2,46], renal disease, and other comorbidities [68]. Metal(loid) exposures are known to worsen these comorbidities. Further, there is little research on the collective bioeffects of co-occurring contaminants. More research is needed on high-risk groups, as they may be more susceptible to the effects of metal(loid)s; these include the very young, lactating or pregnant women, older adults, and those with cardiac, renal, and immune-function problems. The extent to which exposure endangers high-risk persons, and other interrelated factors, are unknown and need further investigation. Individuals who consume traditional medicinal plants are advised to consult their healthcare provider when using alternative therapies to avoid untoward medication interactions.
Limitations
There were several study limitations. There was ample literature documenting the indications for various medicinal plant remedies in this community; however, there was significantly less documentation of dosage information. Several sources documented the use of various medicinal plants during pregnancy and the postpartum period [28,33] and for the treatment of infants and children [28,37], but no dosage information was available for any age group. As there was scant dosage information to draw on for the study calculations, we relied upon comparable studies; for instance, we estimated oral intake using the equivalency of one cup of tea containing one g of plant material [61,69]. Also, exposure by different routes of administration was not examined in this study, due to the lack of detailed pharmacological information. For example, inhalation (via incense (e.g., P. smithii, A. tridentata), sweat bath steam, or other exposures by smoke, mist, or aerosolization) and dermal exposure (e.g., B. gracilis, A. tridentata, J. monosperma) were not calculated for this report. Future work is needed to establish dosages and to include various routes of administration such as inhalation and dermal exposure.
Other locally derived environmental sources of exposure may add to or compound the risks. For instance, it is common practice for people to use local water (regulated and unregulated) to steep the teas or medicinal concoctions, possibly using a suite of plant mixtures. Further, such plant mixtures introduce several complexities; without detailed pharmacological information, synergistic, additive, and antagonistic effects are difficult to determine. In fact, some studies have found that metal(loid)s may dissociate in water at certain temperatures [70,71] and that the pH of the infusion water may be a factor in uptake [61,72]. These factors warrant further investigation.
A plausible reason for the lack of dosage-specific or other detailed phytotherapy information may be that some tribal communities are protective of this information. Researchers and other experts have reported a general reluctance by tribal members or informants to report on healing ceremonies and medicinal plants, as this knowledge is seen as sacred, and such esoteric medicinal and ceremonial knowledge is reserved exclusively for dispensation, treatment, and handling by Diné medicine-people with extensive training or apprenticeships [28]. In this paper, the researchers have not reported any new information on medicinal plant indications (including references to specific ceremony names) that has not been published elsewhere [28,29,31-41]. It is rather an organized compilation of existing medicinal plant information published by researchers in direct consultation with expert Diné informants [28,29,31-41], who are vetted specialists such as herbalists, medicine-people, or healers. The compilation is meant to be a reference source for researchers, healthcare providers, traditional medicine healers, and tribal community members and leaders. Further, it is not the purpose of this paper to report on pharmacokinetics, but rather to inform and identify knowledge gaps in this area of medicinal plant research. Pharmacological intake, absorption, bioavailability, distribution, metabolism, and excretion are very complex processes and require extensive research. Moreover, there are many complicated individual factors (e.g., age, chronic and acute health problems, metabolism, genetics, diet/nutritional status) and interrelated environmental variables to be considered.
Materials and Methods
This was a descriptive, comparative study examining contamination levels in locally harvested medicinal plants and soils from reservation areas within a 3.2 km radius of previously U-mined and disrupted areas. Data obtained from the Diné Network for Environmental Health (DiNEH) study cohort [42] served as one of the sources for identifying the subjects and the samples of food, herbs, water, and soil. Additional participants were recruited using snowball methods (word of mouth), home visits, and advertising at public tribal community events. Of the DiNEH cohort respondents [42], those individuals who reported harvesting plant foods were recruited for participation in the present study. Plant biota were selected based on active use by study participants and their proximity to mining structures. The medicinal plant data were compared and reported to reflect an accurate estimated measure of metal(loid) intake in humans via medicinal plant ingestion.
Study Setting
This study was reviewed and approved by dual Institutional Review Boards (IRBs): the Navajo Nation Human Research Review Board and the University of California, Los Angeles (UCLA) IRB. The eight plant species identified in this study are not listed as endangered or threatened by the Navajo Nation Department of Natural Resources [73] or the NM Energy, Minerals, and Natural Resources Department [74].
The field research area is a semi-arid to arid region of the US Southwest in northwestern NM on Diné reservation lands (Figure 1). The average precipitation was <25 cm per year according to meteorological data for NM (Western Regional Climate Center, Western US Climatic Historic Summaries) during the study period. The Mariano Lake Chapter comprises 272 km² of land and the Churchrock Chapter 233 km² (total land mass of 505 km²). Recruitment was initiated in May 2012 and enrollment began in July 2012. All samples were collected from 10 November to 13 December 2012. This study focused on locally harvested plant biota and was part of a larger research project that examined subsistence farming on the reservation, including the metal(loid) contamination of herbs, sheep, crops, and associated data [43,48].
Human Harvester Questionnaire Data
The Diné Plant-Animal-Human Questionnaire was administered to collect demographic information and overall local food-harvesting data, including information on specific harvesting exposure activities. The Diné Wild Plant/Herb Intake Questionnaire was used to collect information on herbal plant harvesting and consumption. Data collected included plant use; indications; the amount, frequency, and duration of consumption; the incidence and extent of herb sharing and sales; relevant cultural uses of the medicinal plants; and traditional practitioner information.
Plant Identification and Nomenclature
Live parallel plants were collected, dried, and pressed for identification and archiving. A plant collection description log was kept, and color photographs were taken of each plant. The University of New Mexico (UNM) Herbarium identified and archived the plant samples. Global Positioning System (GPS) instrumentation (Trimble Navigation Limited, Westminster, CO, USA) was used to collect location data and conduct spatial proximity analysis. Differential correction of the data was completed within 72 h of data capture using Pathfinder Office version 5.30 (Trimble Navigation Limited, Westminster, CO, USA).
Medicinal Plant Samples
Eight species of medicinal plants were collected and identified. Live medicinal plant samples were collected from wild, non-cultivated sources within a 3.2 km radius of the central part of abandoned U mines and features (mine portals, pits, rim strips, vertical mine shafts, and prospect areas). The above-ground portions and roots of live plants were stored in polyethylene (PE) plastic Ziplock® bags. The plant samples were photographed, weighed, bagged, and placed on dry ice for shipment for analysis by the UNM Analytical Chemistry Laboratory, Earth and Planetary Sciences Department. The medicinal plant flowers, leaves, stems, and roots were analyzed for metal(loid)s (As, Cd, Cs, Pb, Mo, Se, Th, U, and V) using inductively coupled plasma-mass spectrometry (ICP-MS).
Soil Samples
For each medicinal plant sample, parallel soil samples were collected. To avoid cross-contamination, a silicon-coated core sampler (Art's Manufacturing and Supply Inc. (AMS), American Falls, ID, USA) was used. A slide hammer with a stainless-steel hand auger was employed to collect soil samples using a PE liner (AMS Core Sampling Mini-kit, American Falls, ID, USA). One hundred grams (g) of soil were collected for each plant from a depth of 0-25 cm. The soil samples were analyzed for metal(loid)s (As, Cd, Cs, Pb, Mo, Se, Th, U, and V) using ICP-MS.
Sample Analysis
Medicinal plant and environmental sample preparation and analysis are reported in detail in previous publications [43,48]. The biota and soil samples were stored in a −20 °C freezer before preparation and analysis. The organic plant samples were first washed thoroughly with 18 MΩ (megaohm) water to remove any suspended materials on the plants' surfaces. The samples were then soaked in a very dilute solution (0.001 M HCl) to ensure the removal of clay particles and any pollutants on the plants' surfaces, and oven dried at 65 °C until their weight stabilized. The samples were prepared by weighing 2 g of dry mass into a digestion tube. Two mL of hydrogen peroxide (H2O2) and 5 mL of ultra-high-purity nitric acid (HNO3) were added, and the solid plant and soil samples were gradually heated to 95 °C and digested for two hours. The digested samples were transferred into 50 mL volumetric flasks and brought to volume using 18 MΩ water. Three mL of HNO3 (reagent blank) was run with each batch of samples.
A PerkinElmer NexION 300D ICP-MS (Waltham, MA, USA) coupled with an ESI SeaFast SP3 auto-sampler was used to analyze the digested samples in both direct (anhydrous ammonia for trace metals) and hydride (oxygen for arsenic) modes to significantly minimize mass interferences. The instrument detection limits are as follows: As 0.3 µg/L, Cd 0.1 µg/L, Mo 0.02 µg/L, Pb 0.008 µg/L, Se 1.3 µg/L, and U 0.008 µg/L.
Provisional Tolerable Weekly Intake (PTWI) Calculation Equation
The metal(loid) PTWI calculations were derived using the following equation [61,68]:

daily intake of metal(loid) = Σ [concentration of metal(loid) in herb × mean herbal intake (grams per person per day)];
weekly intake of metal(loid) = daily intake × 7 days per week;
PTWI = weekly intake / reference body weight (60 kg). (1)

Consumption was based on the number of grams of herbal medicine consumed per day (5 g), based on comparable data.
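As a minimal illustration of Equation (1), the sketch below computes the weekly intake per kg body weight for a single metal(loid), using the stated 5 g/day consumption and 60 kg reference body weight; the concentration value is the B. gracilis As example from above, and the function name is ours.

```python
# Sketch of Equation (1); illustrative values only, not study results.
def ptwi(conc_mg_per_kg, grams_per_day=5.0, body_weight_kg=60.0):
    # mg/kg is equivalent to µg/g, so concentration x g/day gives µg/day.
    daily_ug = conc_mg_per_kg * grams_per_day
    weekly_ug = daily_ug * 7.0
    return weekly_ug / body_weight_kg          # µg/kg BW/week

as_ptwi = ptwi(1.16)                           # As in B. gracilis root
print(f"{as_ptwi:.2f} µg/kg BW/week = "
      f"{100 * as_ptwi / 15:.1f}% of the As PTWI limit (15 µg/kg BW)")
```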
Statistical Analysis
Statistical analysis was undertaken using the Statistical Package for the Social Sciences (SPSS) for Windows (version 28, IBM, Armonk, NY, USA). Metal(loid) concentrations in the medicinal plants and corresponding soil samples are reported as milligrams per kilogram (mg/kg). The summary data included means, standard deviations, medians, ranges, and percentages. Differences between the metal(loid) levels in the medicinal plant parts and the soil were compared, with significance determined by Student's t-tests. A p-value of <0.05 was considered significant. The absolute value of the t-statistic was reported along with the relevant means and the interpretation of the direction of differences.
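For reproducibility, a minimal sketch of the soil-versus-plant comparison is given below with synthetic concentration vectors; whether the original analysis used paired or independent-samples t-tests is not stated, so an independent-samples test is shown as one plausible choice.

```python
# Sketch of the soil-vs-plant t-test with synthetic (placeholder) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
soil_u = rng.lognormal(mean=0.5, sigma=0.4, size=17)    # U in soil, mg/kg
plant_u = rng.lognormal(mean=-0.7, sigma=0.4, size=17)  # U in plant parts

t_stat, p_val = stats.ttest_ind(soil_u, plant_u)
print(f"|t| = {abs(t_stat):.2f}, p = {p_val:.4g}; "
      f"soil M = {soil_u.mean():.2f} > plant M = {plant_u.mean():.2f}")
```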
Conclusions
The WHO RMPPLs were exceeded for As in five aboveground plant parts (A. hymenoides, A. purpurea, B. gracilis (including the plant root), P. smithii, and P. jamesii) and for Cd in two plant roots (P. jamesii and A. tridentata); however, when the PTWIs were calculated using the study participants' intake data, all plant concentrations fell below the level of concern for the metal(loid)s that have established food intake guidelines. There are no established intake guidelines for Cs, U, and Th. The current data do not appear to demonstrate a risk of metal(loid) ingestion above the average intake in this study cohort for the eight species of medicinal plants examined. Further study is needed to address the study limitations and the identified research gaps, including further characterization of medicinal plant dosages, indications, and administration routes, and of the health effects on high-risk groups. Continued research, surveillance, and monitoring are needed in uranium-mining-impacted communities.

Informed Consent Statement: This study was reviewed and approved by the Navajo Nation Human Research Review Board and the UCLA IRB. Community and individual informed consent were obtained.
Data Availability Statement:
Restrictions apply to the availability of these data. Data were obtained from The Navajo Nation and are available with permission from The Navajo Nation Human Research Review Board.
"year": 2022,
"sha1": "0cc7c8c7bf952cd9f053f018abae9aa81b692a13",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/11/15/2069/pdf?version=1659957263",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36b5ab988e8616dbe08c3b84cc2985eb862676ba",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
New Bias Calibration for Robust Estimation in Small Areas
Using sample surveys as a cost-effective tool to provide estimates for characteristics of interest at the population and sub-population (area/domain) level has a long tradition in "small area estimation". However, the existence of outliers in the sample data can significantly affect the estimation for the areas in which they occur, especially where the domain sample size is small. Building on existing robust estimators for small area estimation, we propose two novel approaches for bias calibration. A series of simulations shows that our methods lead to more efficient estimators in comparison with other existing bias-calibration methods. As a real data example, we apply our estimators to obtain Gini coefficients in labour market areas of the Tuscany region of Italy, where our sources of information are the EU-SILC survey and the Italian census. This analysis shows that the new methods reveal a different picture than existing methods. We extend our ideas to predictions for non-sampled areas.
Introduction
Small Area Estimation (SAE) has emerged and developed rapidly in recent years, in theory and in practice. Nowadays, SAE techniques are used in all kinds of official statistics, ranging from business decisions to the attribution of health services or the allocation of government funds. This is partly due to the high demand for statistics by policy makers on the one side, and to the increasing data availability together with recent computational advances on the other. Using sample surveys is a cost-effective way to provide estimates for characteristics of interest at the population and sub-population (area/domain) level. This information, coming along with auxiliary data through administrative channels, is used for a better estimation of domain-level parameters. Consequently, when sample sizes in the individual domains are too small (hence "small areas") to obtain reasonable mean squared errors by means of direct estimates, SAE "borrows strength" from other existing sources of information. For a comprehensive review of this subject we refer to Rao and Molina (2015), Chambers and Clark (2012) and Longford (2005).
Indirect estimators that are based on an explicit linking model are referred to as model-based estimators. Among these we concentrate on mixed effects models (MEM) with area-specific random effects that try to capture the between-area variation beyond what is accounted for by the auxiliary covariates; see Rao (2008), Datta (2009), Pratesi (2016), Jiang and Lahiri (2006) and Pfeffermann (2013). SAE techniques are intrinsically sensitive to outliers due to the small samples considered. Therefore, robust estimators have been proposed and developed in this field. The two main streams of research on this topic are the robust version of the EBLUP (REBLUP) proposed by Sinha and Rao (2009), based on bounded estimating equations for MEM, and the M-quantile (MQ) approach proposed by Chambers and Tzavidis (2006). The latter captures the between-area variation through the estimation of area-specific quantiles as coefficients; besides being robust against outliers, it avoids problems associated with random effect prediction. For more recent developments of the M-quantile methods see Salvati et al. (2012), Pratesi et al. (2009), or Marchetti et al. (2017). In robust estimation of finite population parameters, Chambers, in his seminal 1986 paper, distinguished between projective and predictive estimation. The former refers to classical robust estimation, where outliers are down-weighted or discarded in the estimation procedure. In contrast, the latter accounts for so-called "representative outliers", i.e. extreme observations in the sample which are likely to occur also among the non-sampled units. Therefore, a calibration is necessary for the bias that is caused by down-weighting or disregarding these observations in the estimation process. A general bias calibration approach for the estimation of the finite population Cumulative Distribution Function (CDF) was proposed by Chambers and Dunstan (1986), and its robust version is presented in Welsh and Ronchetti (1998). In SAE, Tzavidis et al. (2010) introduced a general approach for the bias correction of existing robust estimators, and Chambers et al. (2014) discussed different methods to estimate the mean squared error (MSE) of these bias-calibrated estimators.
In this context, we propose two novel approaches to calibrate the bias of robust estimators of non-linear parameters. In the first, we derive the non-linear statistical functional (e.g. the Gini) from an estimate $\hat F$ of the CDF, on which the calibration is performed. In the second approach, we first linearize the functional by means of an appropriate approximation, namely the von Mises linear approximation via the Influence Function (IF) of the statistic, and afterwards apply a conventional calibration for linear parameters. While a series of simulations shows that the former achieves the lowest MSE, the latter offers the smallest absolute bias in comparison with the other bias-calibration techniques presented. We show later that, mainly in situations where the errors come from a highly skewed, heavy-tailed distribution, one should use an asymmetric calibration to reflect the data-generating process. We observe that in such situations the bias calibration leads to a more efficient estimator if the correction is done using the asymmetric Huber function with a data-driven tuning parameter.
In SAE, in the absence of a closed-form formula or a good approximation, the MSE is often estimated by bootstrap techniques; see, among others, Hall and Maiti (2006b), Hall and Maiti (2006a), and Pfeffermann and Correa (2012). In our case as well, any appropriate bootstrap method introduced in the SAE literature can be used to estimate the MSE of our robust, bias-calibrated estimators.
We emphasize that our proposed methods can be used not only for the Gini but for any other linear or non-linear parameter in small areas. Their use is strongly recommended where the distribution is skewed and/or heavy-tailed. Applications can be found in many fields, from poverty and inequality to environmental or health data. The methods can also be used to correct for bias after imputing missing data.
Section 2 introduces the general framework and the notation. In Section 3 we propose two approaches to deal with non-linear population parameters, together with an asymmetric calibration of the estimates. Section 4 compares the performance of these approaches with existing methods in a series of simulations, and Section 5 discusses practical issues like the optimal tuning for the asymmetric calibration. In Section 6 we use the EU-SILC survey and the Italian census to estimate the Gini coefficients at the LMA level in the Tuscany region. Section 7 concludes.
General framework and notations
Consider the entire population (the so-called super-population) $U$ of size $N$, partitioned into $d$ mutually disjoint sub-populations $U_j$ of size $N_j$, corresponding to our small areas $j = 1, \ldots, d$. For each area $j$ we observe the outcome of interest $Y_{ij}$ for a sub-sample $s_j$ of individuals $i = 1, \ldots, n_j$, but not for the so-called unsampled subset $r_j$ of size $N_j - n_j$. However, we assume that the auxiliary information $X$ is available for all units, providing predictive power for the unobserved part of the population. This assumption could be avoided if we focused only on linear parameters such as the mean or the total of the areas, but then the methods would be less general. The $x_{ij}$ for individual $i$ in area $j$ is a column vector of dimension $p$ that has 1 as its first component. We are interested in doing inference on $Y_{ij}$ at the area level. When the $n_j$ are too small for direct estimation, i.e. giving large variances, or are not appropriate for other reasons, model-based small area estimators are used. They apply a model to the super-population, typically to predict the unobserved $Y_{ij}$ for the subsets $r_j$. Being interested in the distribution and the corresponding non-linear parameters (see Tzavidis et al. (2010)), we do not consider area-level models like those in Fay III and Herriot (1979), Dick (1995) or Pratesi and Salvati (2008). Instead, we consider unit-level models that link the unit outcomes $y_{ij}$ to the unit-specific covariates $x_{ij}$; see e.g. Battese et al. (1988).
As a basic setting, assume that the following mixed effects model (MEM) is in place for the sampled as well as for the unsampled units (i.e. without sampling selection bias):
$$y_{ij} = x_{ij}^\top \beta + z_{ij}^\top u_j + \epsilon_{ij}, \qquad (1)$$
where $\beta$ is the $p$-dimensional vector of fixed effects, and the $u_j$ are the random effects of the same dimension as $z_{ij} \subset x_{ij}$. In our application we concentrate on the commonly used nested error models, where $z_{ij}$ contains only the 1. Standard assumptions are $u_j \stackrel{i.i.d.}{\sim} (0, \sigma_u)$ and $\epsilon_{ij} \stackrel{i.i.d.}{\sim} (0, \sigma_e)$, the individual error terms being independent of the random effects. In our setting, however, to be more realistic we deviate from this assumption and allow for error terms that may come from a heavily skewed distribution with potentially heavy tails, for which the mean is not necessarily equal to zero. In addition, heteroscedasticity might be present.
Fitting the model to the sample at hand, one obtains estimates of the model parameters which are used to predict the unobserved $Y_{ij}$. By the substitution principle, once the Cumulative Distribution Function (CDF) for each area is estimated, further distribution-related quantities (statistical functionals) can be derived. Tzavidis et al. (2010) pointed out that the CDF estimate is particularly useful in cases where there are extreme values in the small-area sample data, or if the small-area distribution is highly skewed. The area-specific true CDF for a finite population in area $j$ can be expressed as
$$F_j(t) = \frac{1}{N_j} \sum_{i \in U_j} I(y_{ij} \le t). \qquad (2)$$
A population parameter that can be expressed as a functional of $F_j(t)$ can consequently be estimated as the same functional of an estimate $\hat F_j(t)$. In a naive setting we may use a plug-in estimator to obtain
$$\hat F_j(t) = \frac{1}{N_j} \Big[ \sum_{i \in s_j} I(y_{ij} \le t) + \sum_{k \in r_j} I(\hat y_{kj} \le t) \Big]. \qquad (3)$$
In this case, the estimation of the distribution is achieved by predicting the unobserved units as $\hat y_{kj}$. This may be done by using different prediction methods suggested in the literature, such as EBLUP, EB, HB, etc. In the presence of outliers or heavy-tailed distributions, one would rather replace the unobserved $Y_{ij}$ by robust predictors. For instance, we could use robust mixed linear models to get an estimate of the model parameters and predict via the robust version of the EBLUP (REBLUP) introduced by Sinha and Rao (2009). Alternatively, one may use the M-quantile approach of Chambers and Tzavidis (2006) for estimation, and proceed accordingly.
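For concreteness, here is a minimal sketch of the plug-in estimator (3) for a single area, with toy values standing in for the sampled outcomes and the (robust or non-robust) predictions; all names are illustrative.

```python
# Minimal sketch of the plug-in CDF estimator (3) for one area j.
import numpy as np

def cdf_plugin(t, y_obs, y_pred):
    # F_hat_j(t) = N_j^{-1} [ sum_{s_j} I(y <= t) + sum_{r_j} I(y_hat <= t) ]
    n_total = len(y_obs) + len(y_pred)
    return (np.sum(y_obs <= t) + np.sum(y_pred <= t)) / n_total

y_obs = np.array([10.2, 11.5, 9.8, 14.1])   # sampled units in s_j
y_pred = np.array([10.9, 12.3, 11.1])       # predictions for r_j (toy)
print(cdf_plugin(11.0, y_obs, y_pred))      # -> 0.428...
```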
However, using the expected value of any of these estimators to predict the outcome for non-sampled units in area $j$ results in a cumulative bias in the estimation of $F_j$. Specifically, in the presence of heteroskedastic and/or asymmetric error terms, the bias will not cancel in the sum; see Tzavidis et al. (2010). The problem is even more prominent when there exist representative outliers (Chambers (1986)), because these are extreme observations in the sample which are likely to occur also among the non-sampled units. To account for such bias, a calibration step is needed, which also has the side effect of yielding some efficiency gain; see Chambers and Dunstan (1986); Welsh and Ronchetti (1998); Rao et al. (1990); Jiongo et al. (2013) for the SAE context.
A basic bias calibration for the CDF was proposed by Chambers and Dunstan (1986):
$$\hat F_j^{CD}(t) = \frac{1}{N_j} \Big[ \sum_{i \in s_j} I(y_{ij} \le t) + \sum_{k \in r_j} \frac{1}{n_j} \sum_{i \in s_j} I\big(\hat y_{kj} + (y_{ij} - \hat y_{ij}) \le t\big) \Big]. \qquad (4)$$
In this case the effect of the residuals, $y_{ij} - \hat y_{ij}$, is not bounded. Welsh and Ronchetti (1998) extended this idea to obtain a bounded version of the prediction of a finite-population CDF,
$$\hat F_j^{WR}(t) = \frac{1}{N_j} \Big[ \sum_{i \in s_j} I(y_{ij} \le t) + \sum_{k \in r_j} \frac{1}{n_j} \sum_{i \in s_j} I\Big(\hat y_{kj}^{Rob} + w_j\, \phi_j\Big(\frac{y_{ij} - \hat y_{ij}^{Rob}}{w_j}\Big) \le t\Big) \Big], \qquad (5)$$
where $\hat y_{ij}^{Rob}$ and $\hat y_{kj}^{Rob}$ are the robust predictions of the observed and unobserved outcomes, respectively, in area $j$, and $w_j$ is a robust estimate of the scale of the residuals in that area. Here, $\phi_j$ is a bounded influence function that can change over different areas. Welsh and Ronchetti (1998) focused on one finite population; we extend this to several areas. They illustrate that, in order to get a more efficient estimate of a finite-population CDF, the truncation constant must change at different quantiles of the CDF, with larger constants for more extreme quantiles. Other calibration approaches in the literature are, for instance, Rao et al. (1990) and Jiongo et al. (2013). We first build upon estimator (5), propose a skewed calibration that accounts for the asymmetry of the error terms, and extend this idea to correct for the bias in the linearized version of non-linear parameter estimates, namely the Gini index in our application.
Estimation of non-linear parameters for small domains
Estimators for linear population parameters such as the mean or the total are well studied in the SAE literature. For estimating non-linear statistical functionals, we introduce two approaches, of which the first is based on estimating the area-specific CDF, and the second on a linear approximation of the parameter. For the latter we use a von Mises approximation with the Influence Function (IF) of the statistical functional. As we are especially concerned about cases where the distribution is highly skewed with a heavy tail and outliers, we start with a robust estimate of the model parameters and introduce an asymmetric bias calibration method afterwards.
Conditional CDF
Given an estimate of the small areas' CDFs, the calculation of linear or non-linear statistical functionals is straightforward. Other advantages of using the CDF estimate are discussed in Tzavidis et al. (2010). This approach is extremely beneficial when the small-area outcome is highly skewed and/or contains extreme values. In this situation we propose a slightly different area-specific CDF estimator than (3). In the classical setting, a point prediction is used for the outcome of each unsampled, or say unobserved, unit, and the calibration is done using the average effect of the residuals on that point prediction, see (4) and (5). We instead use the empirical predictive distribution of the unobserved outcomes in the estimation of (3), as follows. For each of the $N_j - n_j$ unobserved units in area $j$ we create a vector of length $n_j$ whose elements are the robust predictor $\hat y_{kj}$ plus the vector of residuals obtained from the observed units, cf.
Step 3 and Step 4 of the procedure below. We do so to preserve the unexplained variation in the sample. This formulation of the CDF estimator incorporates the notion of bias calibration proposed by Chambers and Dunstan (1986) and Welsh and Ronchetti (1998), but using different weights for the observed and predicted units. The procedure can be summarized by the following steps:

Step 1. Given a linear MEM, get robust estimates of the model parameters, i.e. fixed effects and variance components, as well as robust predictions of the (random) area effects.
Step 2. For all observed units $i \in s_j$, calculate the residuals as $\hat\epsilon_{ij} = y_{ij} - \hat y_{ij}^{Rob}$.

Step 3. Compute the point predictions for the unobserved units $k \in r_j$ as $\hat y_{kj}^{Rob} = x_{kj}^\top \hat\beta^{Rob} + z_{kj}^\top \hat u_j^{Rob}$.

Step 4. For each unobserved unit, instead of considering only one point estimate, we use its entire predictive distribution. To do so, the predictive distribution (a vector of predictions) is simulated by adding the vector of residuals to the predicted value:
$$y_j^p = \big(I_{N_j - n_j} \otimes 1_{n_j}\big)\, \hat y_{r_j}^{Rob} + 1_{N_j - n_j} \otimes \hat\epsilon_j,$$
where $I_\kappa$ is the identity matrix of size $\kappa \times \kappa$, $1_\kappa$ is a vector of ones of length $\kappa$, and $\otimes$ is the Kronecker product. Further, $\hat y_{r_j}^{Rob}$ and $\hat\epsilon_j$ are the vector of point predictions and the vector of residuals in area $j$, respectively. Thus, for the $N_j - n_j$ point predictions, the $n_j \times (N_j - n_j)$-dimensional vector $y_j^p$ of simulated outcomes for $r_j$ is created.
Pooling this vector with the observed outcomes, our estimator of the conditional CDF is
$$\hat F_{j|\hat u_j}(t) = \frac{1}{n_j(N_j - n_j + 1)} \Big[ \sum_{i \in s_j} I(y_{ij} \le t) + \sum_{k \in r_j} \sum_{i \in s_j} I\big(\hat y_{kj}^{Rob} + \hat\epsilon_{ij} \le t\big) \Big].$$
The notation $j \,|\, \hat u_j$ indicates that these estimators are conditioned on the predicted values of the area effects $\hat u_j$. The denominator in this formula is the total number of indicator functions, $n_j + n_j(N_j - n_j) = n_j(N_j - n_j + 1)$. The intuition for this calibration follows Chambers and Dunstan (1986), who used the average effect of the residuals for calibration, cf. (4). In our method, the whole variation of the residuals is applied to each unobserved subject.
Since we are looking for a robust estimator, we impose a bounded truncation function on the residual effect, as proposed by Welsh and Ronchetti (1998), though adapted to our problem:
$$\hat F_{j|\hat u_j}^{SBC}(t) = \frac{1}{n_j(N_j - n_j + 1)} \Big[ \sum_{i \in s_j} I(y_{ij} \le t) + \sum_{k \in r_j} \sum_{i \in s_j} I\Big(\hat y_{kj}^{Rob} + w_j\, \phi_j\Big(\frac{\hat\epsilon_{ij}}{w_j}\Big) \le t\Big) \Big], \qquad (7)$$
where $\phi_j$ is a symmetric Huber-type influence function with weight $w_j$, typically a robust estimate of the scale of the residuals in area $j$, like the median absolute deviation (MAD). Later on we refer to this estimator as REBLUP-SBC, a robust EBLUP with Symmetric Bias Calibration. Next, we introduce the use of a skewed Huber function.
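A minimal sketch of (7) for one area follows; the Huber ψ, the MAD-based scale and the toy inputs are illustrative choices, not the paper's exact implementation.

```python
# Sketch of the REBLUP-SBC conditional CDF (7): each point prediction is
# expanded by the Huber-bounded vector of area residuals.
import numpy as np

def huber_psi(r, c=1.345):
    return np.clip(r, -c, c)

def cdf_sbc(t, y_obs, resid, y_pred, c=1.345):
    w = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD scale
    # n_j x (N_j - n_j) grid: prediction k plus bounded residual i
    grid = y_pred[None, :] + w * huber_psi(resid[:, None] / w, c)
    num = np.sum(y_obs <= t) + np.sum(grid <= t)
    return num / (len(y_obs) + grid.size)   # n_j (N_j - n_j + 1) terms

y_obs = np.array([9.1, 10.4, 11.8, 15.0, 10.9])
resid = y_obs - np.array([9.8, 10.6, 11.2, 12.0, 10.7])  # toy robust fits
y_pred = np.array([10.2, 11.0, 12.5])                    # predictions r_j
print(cdf_sbc(11.0, y_obs, resid, y_pred))
```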
Taking into account the asymmetry of the outcome distribution
This proposal can be interpreted as an extension of the calibration method of Welsh and Ronchetti (1998). We do not, however, calibrate the estimation of the model parameters (because we are thinking of representative outliers). Furthermore, we argue that in cases where extra knowledge is available to the researcher, (s)he should exploit this information to better calibrate the estimated CDF, and thereby its statistical functionals, say $T_j$ for area $j$. For instance, when analysing income, wealth or expenditure distributions, it is common knowledge that these are strongly skewed with a heavy tail to the right. One can use this information when predicting the distribution of each domain by applying an asymmetric calibration procedure. This requires two truncation constants for the skewed version of the Huber function:
$$\psi_{c,\gamma}(r) = \max\big\{-c\,\gamma,\ \min(r,\ c/\gamma)\big\}. \qquad (6)$$
Here, $c$ defines the width of the truncation window and $\gamma$ the degree of skewness. As in the classical case of symmetric calibration, one chooses the optimal $c$ and $\gamma$ by minimizing $MSE(\hat T_j)$.
In the presence of heteroskedasticity, we recommend considering area-specific sets $(c_j, \gamma_j)$.
The idea behind $\psi_{c,\gamma}(\cdot)$ is the general representation of skewed distributions along the lines of Fernandez and Steel (1998). The tuning parameter $\gamma$ is always positive; while $\gamma = 1$ represents the original Huber function, values greater and smaller than 1 provide left- and right-skewed windows, respectively. In the definition of $\hat F^{SBC}_{j|\hat u_j}$, $r$ is a standardized residual, i.e. divided by a robust estimate of its scale. Several choices for the latter are available. We use the one of Rousseeuw and Croux (1993), which is based on the absolute pairwise differences of the residuals. It is an alternative to more traditional robust scale estimates, but performs better for skewed distributions. Looking closer at $\psi_{c,\gamma}(\cdot)$, one can see that it is very similar to the skewed Huber function of Chambers and Tzavidis (2006) defining the M-quantile method, namely
$$\psi_q(r) = 2\,\big\{q\, I(r > 0) + (1 - q)\, I(r \le 0)\big\}\, \phi_c(r),$$
where $\phi_c(\cdot)$ is the classical Huber influence function, and $q$ is the $q$th quantile of the conditional outcome distribution, with $q = \gamma^2/(\gamma^2 + 1)$. Notice, however, that here the skewed Huber function is used for calibration, not for estimation. We keep the residual effect bounded when searching for the shape of the true distribution. In practice, the optimal tuning constants are chosen by considering a mesh over the $(c, \gamma)$ plane and estimating the MSE (via bootstrapping) for each combination. As long as one allows for $\gamma = 1$, our method nests the symmetric calibration and will therefore outperform it in terms of MSE. Provided with the tuning parameters, the area-specific CDF estimates are (7) with $\psi_{c_j,\gamma_j}$ as in (6), but area-specific, in place of $\phi_j$. Functionals like the Gini index can subsequently be calculated for each area. When the REBLUP is used, we refer to this method as REBLUP-ABC.
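A one-line sketch of the asymmetric truncation is given below, assuming the cut-off form with lower bound $-c\gamma$ and upper bound $c/\gamma$ stated in (6); under this convention $\gamma = 1$ recovers the symmetric Huber function and $\gamma < 1$ widens the window on the positive side.

```python
# Sketch of the asymmetric Huber-type truncation psi_{c,gamma}:
# clip at -c*gamma below and c/gamma above (gamma = 1 is symmetric).
import numpy as np

def psi_asym(r, c=1.345, gamma=1.0):
    return np.clip(r, -c * gamma, c / gamma)

r = np.array([-4.0, -0.5, 0.5, 4.0])
print(psi_asym(r, c=1.345, gamma=0.75))  # wider upper cut for right skew
```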
Linearization by the Influence Function
In this section we propose a new, alternative way of calibrating. It first linearizes the parameter of interest by means of the influence function and then applies the calibration. The idea of using linear approximations for non-linear parameters has some tradition in SAE, though mainly for providing an estimator of the variance rather than for correcting the bias; see Graf and Tillé (2014); Demnati and Rao (2004). For the sake of presentation, and because it is the aim of our application, we later provide the explicit example of the Gini coefficient. Consider the first-order expansion of a statistical functional introduced by von Mises (1947),
$$T(G) \approx T(F) + \int IF(y;\, T, F)\, dG(y), \qquad (8)$$
where $F$ is the (model) distribution, $G$ a distribution in its neighbourhood, and $IF(\cdot\,; T, F)$ the influence function as defined by Hampel (1974). For $G := F_j$, we get
$$T(F_j) \approx T(F) + \frac{1}{N_j} \sum_{i \in U_j} z_{ij}, \qquad (9)$$
where we set $z_{ij} := IF(y_{ij};\, T, F_j)$. Now, an alternative estimator is obtained by replacing the unknown population quantities in (9) with their robust versions,
$$\hat T_j = \tilde T_j + \frac{1}{N_j} \Big[ \sum_{i \in s_j} z_{ij} + \sum_{k \in r_j} \hat z_{kj} \Big], \qquad (10)$$
where $\tilde T_j$ is the original robust estimate of the statistical functional and $\hat z_{kj} := IF(\hat y_{kj};\, \tilde T_j, \hat F_j)$. Substituting robust predictors for all unobserved units, the calibration is done by
$$\hat T_j^{SBC} = \hat T_j + \frac{N_j - n_j}{N_j\, n_j} \sum_{i \in s_j} w_j\, \phi\Big(\frac{\zeta_{ij}}{w_j}\Big), \qquad (11)$$
where $w_j$ is a robust estimate of the scale of the pseudo-residuals $\zeta_{ij} = z_{ij} - \hat z_{ij}$ in area $j$, and $\phi(\cdot)$ is the Huber function. Hereafter, the result of this calibration approach is referred to as IF-SBC.
Substituting robust predictors for all unobserved units, the calibration is done by where w j is a robust estimate of the scale of the pseudo-residuals ζ ij = z ij − z ij in area j, and φ(.) the Huber function. Hereafter, the result of this calibration approach is referred to as IF-SBC. Noticing that the symmetric calibration is a special case of the asymmetric calibrations we proposed in Section 3.2, we continue with the former for our linearized estimator (11) by proposing with ψ c j ,γ j as in (6) with area-specific c j , γ j . This bias calibration is referred to as IF-ABC.
Calibration of the Gini coefficient

As used in our application, we compute explicitly the case of the Gini index. Among the various definitions of this index in the literature, we choose the following, which results directly from the classical definition of the index as twice the area between the 45-degree line and the Lorenz curve:
$$T(F) = \frac{2}{\mu(F)} \int y\, F(y)\, dF(y) - 1 = \frac{2 I}{\mu} - 1, \quad \text{with } I := \int y\, F(y)\, dF(y).$$
Suppressing the area sub-index $j$, the influence function of this functional is
$$IF(y;\, T, F) = \frac{2}{\mu} \Big[ \int_y^{+\infty} t\, dF(t) + y\, F(y) \Big] - \frac{2 I}{\mu^2}\, y - \frac{2 I}{\mu}; \qquad (13)$$
see Appendix A for the derivation of (13). Now, using (9) in this case, approximating $\frac{1}{N} \sum_{i=1}^N y_i$ by $\mu(F)$ via the law of large numbers, and replacing $T(F) = \frac{2 I}{\mu} - 1$, we obtain, letting $z_i = \int_{y_i}^{+\infty} t\, dF(t) + y_i F(y_i)$,
$$T(F) \approx \frac{1}{N\, \mu(F)} \sum_{i=1}^N z_i - 1. \qquad (14)$$
The non-linear parameter (the Gini coefficient) in (14) is now approximated by a linear function of the $z_i$, which suggests an alternative estimator for the Gini coefficient: replace the unknown population quantities in equation (14) with their robust estimates,
$$\tilde T = \frac{1}{N\, \tilde\mu} \sum_{i=1}^N \tilde z_i - 1. \qquad (15)$$
Here $\tilde T$ and $\tilde\mu$ are the estimates of the Gini coefficient and the population mean in which the unobserved units are replaced by their robust predictions.
Using from now on the area-specific notation, the calibration based on (12) is applied to $\tilde T_j$ with the pseudo-residuals $\zeta_{ij} = z_{ij} - \hat z_{ij}$. Summarizing, the implementation steps for this Gini estimate are:

Step 1. Use a robust estimator of the MEM to get robust predictions for the unobserved outcomes.
Step 2. Use the vector of observed and robustly predicted outcome values in area $j$, $\tilde Y_j$, to obtain $\tilde T_j$ and $\tilde\mu_j$.
Step 3. Sort $\tilde Y_j$ in ascending order to obtain the order statistics $\tilde Y_{(i)j}$. Then compute
$$\tilde z_{(i)j} = \frac{1}{N_j} \sum_{k=i}^{N_j} \tilde Y_{(k)j} + \frac{i}{N_j}\, \tilde Y_{(i)j}.$$

Step 4. Use (12) to get the bias-calibrated estimate of the Gini coefficient for each area.
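The pseudo-values of Step 3 and the linearized Gini (14) can be sketched as follows; as a sanity check, the linearized value agrees with the classical sample Gini up to O(1/N). All function names are ours.

```python
# Sketch of Steps 2-3: pseudo-values z and the linearized Gini (14).
import numpy as np

def gini_linearized(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    tail = np.cumsum(y[::-1])[::-1] / n      # (1/N) sum_{k>=i} y_(k)
    ranks = np.arange(1, n + 1) / n          # F(y_(i)) = i/N
    z = tail + y * ranks
    return z.mean() / y.mean() - 1.0         # (N mu)^{-1} sum z_i - 1

def gini_classic(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * y) / (n * np.sum(y)) - (n + 1) / n

y = np.random.default_rng(0).lognormal(1.0, 0.6, size=300)
print(gini_linearized(y), gini_classic(y))   # differ only by O(1/N)
```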
Study of Finite Sample Performance
Before applying our new methods REBLUP-ABC and IF-ABC, we validate them and compare them with existing bias calibration methods in a small simulation study. We aim to estimate the Gini coefficients for small areas under different designs for model (1). Specifically, we generate a population for $d = 40$ small areas of equal size $N_j = 300$, $j = 1, \ldots, 40$, and take a sample of size $n_j = 15$ from each area using SRSWOR (simple random sampling without replacement). The auxiliary variables $X_{ij}$ are i.i.d. with $X_{ij} \sim \text{LogNormal}(\text{mean} = 1, \text{sd} = 0.5)$, and the outcome $Y$ is generated as $y_{ij} = 100 + 5 x_{ij} + u_j + \epsilon_{ij}$ for individual $i$ in area $j$. As we want to study the effect of heavy-tailed and right-skewed distributions, the error terms are generated from skewed $t_3$ distributions (St-3) with different measures of skewness, to represent the (expected) inequality in developing and developed countries, respectively. The scenarios also distinguish between situations where the mean of the heavily skewed error terms equals zero, referred to as centred error terms, and those where the mean differs from zero, referred to as non-centred error terms. Notice that, compared to other measures of inequality, the Gini coefficient is especially sensitive to location changes of the distribution. In all scenarios we keep the distribution of the random area effects as $u \sim N(0, 1)$. This is because McCulloch and Neuhaus (2011) argue that in a linear MEM that contains only a random intercept, with no association between random effects, error terms and covariates, and with uninformative cluster sizes, misspecification of the shape of the random effects distribution introduces no or only ignorable bias in the estimation of the model parameters and random components. This statement is in accordance with our preliminary simulations (not shown here). The following scenarios are considered, with $d$, $N_j$, $n_j$ as described above and $u \sim N(0, 1)$: (1) centred errors, with $\epsilon_{ij}$ drawn from a skewed $t$ distribution re-centred to mean zero, and (2) non-centred errors, where $St(\cdot, \cdot)$ denotes a right-skewed $t$ distribution and $\lambda > 0$ the measure of skewness as introduced in Fernandez and Steel (1998). The symmetric $t$-distribution is a special case with $\lambda = 1$. In these scenarios we choose $\lambda$ such that the right-skewed distributions best resemble the distribution of the outcome in different countries, representing the income inequality usually observed in developed, developing and underdeveloped regions. The focus here is mainly on predicting area-specific parameters rather than on consistently estimating model parameters. We will see that, nevertheless, our methods serve as bias calibration techniques that can be applied to any appropriate robust estimation technique.
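A sketch of this data-generating process is given below; the skewed-t draws use the Fernandez-Steel construction, and the skewness value lam = 2.0 is purely illustrative, since the paper's exact λ values are not reproduced here.

```python
# Sketch of the simulation DGP: model (1) with Fernandez-Steel skewed-t
# errors; lam = 2.0 is an illustrative skewness value.
import numpy as np

rng = np.random.default_rng(42)

def skew_t(size, df=3, lam=2.0):
    # Right-skewed t: draw |T| and set X = lam*|T| with probability
    # lam^2/(1+lam^2), else X = -|T|/lam (Fernandez and Steel, 1998).
    t = np.abs(rng.standard_t(df, size=size))
    pos = rng.random(size) < lam**2 / (1 + lam**2)
    return np.where(pos, t * lam, -t / lam)

d, N_j = 40, 300
x = rng.lognormal(mean=1.0, sigma=0.5, size=(d, N_j))
u = rng.normal(0.0, 1.0, size=(d, 1))     # area effects, N(0, 1)
eps = skew_t((d, N_j))                    # heavy-tailed, right-skewed
y = 100 + 5 * x + u + eps                 # outcomes under model (1)
```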
The data for the population are generated under each scenario, and then one hundred samples are drawn at random, $h = 1, \ldots, 100$. We consider the standard EBLUP, the robust REBLUP, the REBLUP with symmetric (REBLUP-SBC) and asymmetric (REBLUP-ABC) bias calibration, and two robust bias-calibrated methods based on the linearized index, IF-SBC (symmetric calibration) and IF-ABC (asymmetric calibration). The Gini is predicted and calibrated for each area using each of these techniques. The predicted values are then compared with their true counterparts in relative terms to calculate the relative prediction error,
$$\text{relative prediction error}(\widehat{\text{Gini}}_{jh}) = \frac{\widehat{\text{Gini}}_{jh} - \text{Gini}_j}{\text{Gini}_j}.$$
The expected value of these relative errors over repeated sampling provides an estimate of the relative bias in each area.
$$\text{Relative Bias}_j = \frac{1}{100} \sum_{h=1}^{100} \text{relative prediction error}(\widehat{\text{Gini}}_{jh}).$$
The prediction of the Gini coefficient is generally downward biased in small samples. An indirect estimate that replaces the unobserved outcomes with their predictions suffers from the same problem, see Deltas (2003). This is expected, as the variation in the predicted outcomes (for the unobserved part of the population) is smaller than the variation in the true outcomes. Considering the predictive distribution instead of the point prediction, as well as the skewed truncation, corrects this bias to some extent.
To illustrate the efficiency gain due to our proposed methods, we compare
$$\text{RRMSE}_j = \sqrt{\frac{1}{100} \sum_{h=1}^{100} \big[\text{relative prediction error}(\widehat{\text{Gini}}_{jh})\big]^2},$$
with Table 1 giving the median of the Relative Bias and the RRMSE over the 40 areas under each scenario and calibration technique. Table 1 shows that the asymmetric calibration methods clearly outperform their symmetric counterparts. IF-ABC provides the best results in terms of Relative Bias, whereas REBLUP-ABC achieves the minimum RRMSE in all scenarios. The former finding is not surprising, as the linearization with the IF leads to an implicit bias correction.
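The two Monte Carlo summaries can be computed as in the sketch below from an H × d matrix of estimates; the inputs are toy values.

```python
# Sketch: Relative Bias and RRMSE per area from Monte Carlo estimates.
import numpy as np

def rel_bias_rrmse(est, true):
    # est: (H, d) array of estimates; true: (d,) true area Ginis.
    rel_err = (est - true[None, :]) / true[None, :]
    return rel_err.mean(axis=0), np.sqrt((rel_err ** 2).mean(axis=0))

est = np.random.default_rng(3).normal(0.30, 0.02, size=(100, 40))  # toy
bias, rrmse = rel_bias_rrmse(est, np.full(40, 0.32))
print(np.median(bias), np.median(rrmse))   # medians over the 40 areas
```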
Further Practical Issues
Before we apply these methods to our data for estimating inequality in the different LMAs of Tuscany, we briefly address two practical issues. The first arises when we need to provide robust predictions for out-of-sample areas: in our data set (EU-SILC 2008), only 29 of the 57 LMAs in Tuscany are sampled, while 28 are not. The second issue is of a technical nature, as the proposed methods require two tuning parameters for calibration. In what follows we provide practical solutions for both problems.
Full calibration vs. partial calibration
Calibration in our application is done by means of the fitted model residuals. Once a model is assumed to be the data-generating process of the super-population, in the absence of sampling selection problems one can fit it to the sample at hand and predict the unobserved outcomes using the available auxiliary information. The difference between the observed outcomes and the predictions for these observed units is used to calibrate the bias. There are two ways to incorporate these residuals to account for (representative) outliers. One is to use the residuals in each area to correct for the bias in that specific area. This is the framework we considered when introducing our methods; let us call it "partial calibration", because we only use part of the residual vector obtained from the fitted model, namely the area-specific residuals. An alternative is to use the entire sample of residuals each time when calibrating the estimator for an area, say "full calibration". The latter was also introduced by Jiongo et al. (2013). However, there are some differences between their calibration and the way we use this concept here. In their (full) version, they also try to correct for the bias in the prediction of the random effects, whereas in our case the predicted area effects are considered as fixed, because we focus on the conditional CDF of each area. This is related to, but still different from, what they call "conditional calibration". Combining the full calibration idea with our proposed methods, which account for the area-specific DGP (by choosing area-specific tuning constants for calibration), leads to a compromise that seems to work well. We implemented both partial and full calibration and compared their results in our application. An advantage of the full calibration technique is that it allows for bias calibration in the non-sampled areas, for which we have no area-specific residuals to calibrate with in the sense of partial calibration. Now, for (7) and (15), the full calibration analogues are obtained by replacing the area-specific residual vector $\hat\epsilon_j$ and scale $w_j$ by the pooled residual vector of the entire sample and its scale $w$, where $z_{ij} = \int_{y_{ij}}^{+\infty} t\, d\hat F_j(t) + y_{ij}\, \hat F_j(y_{ij})$, and $w$ is a robust estimate of the scale of the entire vector of pseudo-residuals merged together.
Choice of the tuning parameters
The choice of tuning parameters can play a crucial role in bias calibration as well as in robust estimation. Therefore we propose an automatic, data-driven way to find optimal values for our tuning constants, namely c and γ, in formula (6). In the case of symmetric calibration the convention is to use a rule of thumb for the width of the truncation window. But there also exist some guidelines for the best choice of tuning constants when calibrating certain population parameters, see e.g. Welsh and Ronchetti (1998).
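Although formula (6) is not repeated here, a purely illustrative parameterization of an asymmetric Huber function, in which γ rescales one truncation point and γ = 1 recovers the symmetric special case, could look as follows; this is our assumption of a plausible form, not the paper's definition.

import numpy as np

def asym_huber_psi(u, c, gamma):
    # Illustrative asymmetric Huber psi-function: standardized residuals are
    # truncated at -c on the negative side and at gamma * c on the positive
    # side, so the symmetric Huber function is recovered for gamma = 1.
    return np.clip(u, -c, gamma * c)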
The objective is to minimise the MSE. For estimating the MSE of linear population parameters some analytic approximations have been proposed, either by a first-order Taylor expansion (Prasad and Rao, 1990), by defining the estimator as a pseudo-linear parameter (Chandra et al., 2011), or by other approximations (Chambers et al., 2014). However, there are mainly two obstacles in our case. First, these approximations do not take the calibration into account, and secondly, there exists no general closed form for non-linear parameters. It is then very common to use resampling methods to approximate the MSE of small area parameter estimates; see e.g. Hall and Maiti (2006b), Hall and Maiti (2006a), Pfeffermann and Correa (2012). We propose to use a non-parametric bootstrap, explained in detail in Appendix B. This can be used to obtain both c and γ. The main drawback of this technique is that it can be computationally expensive. When the computational burden becomes too heavy, we suggest fixing c, as in the case of symmetric calibration, along existing rules of thumb, see Chambers et al. (2014). For the parameter γ, needed for the asymmetric calibration, we propose as an alternative to the bootstrap an estimator based on ideas of Fernandez and Steel (1998), where a method for transforming any member of the exponential family into a skewed counterpart is developed. This transformation depends essentially on one parameter, which actually corresponds to our γ in (6), respectively γ_j when choosing area-specific tuning constants. A simple but effective estimator is
$$\hat{\gamma}_j = \frac{n_j^-}{n_j^+}, \qquad (17)$$
where $n_j^-$ and $n_j^+$ are the numbers of negative and positive centred residuals in area j. When using IF-ABC from equation (15), these are the residuals of the $z_{ij}$s. Appendix C provides some details on the derivation of this formula.
Estimating the Gini for Labour Market Areas in Tuscany
In the following income study, our main interest focuses on the income inequality in the LMAs of Tuscany, Italy. We apply the newly developed robust estimation and bias calibration methods to estimate the Gini index in all LMAs. For this we are provided with the EU-SILC 2008 sample survey of Italy and the 2001 census as an auxiliary source of information. From the survey we model the household equivalised disposable income on explanatory variables at the household and individual level. Since both the EU-SILC sample and the census have comparable covariates for individual characteristics, we can exploit a unit-level model for our SAE. Specifically, the explanatory variables included in this study are gender, marital status, employment status and the years of education of the head of the household (the household representative in the survey), as well as household size and the household's ownership status of the residence.
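For concreteness, unit-level SAE of this kind is typically built on a nested error regression model; assuming that this is the specification used here (the paper's exact model equation is given in its earlier sections), it reads
$$y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j + e_{ij}, \qquad u_j \sim (0,\sigma_u^2), \quad e_{ij} \sim (0,\sigma_e^2),$$
where $y_{ij}$ is (possibly a transformation of) the equivalised disposable income of household $i$ in LMA $j$, $\mathbf{x}_{ij}$ collects the covariates listed above, and $u_j$ is the area random effect that REBLUP-type predictors estimate robustly.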
LMAs do not coincide with administrative boundaries, so that, though geographically and economically of great interest, they are not necessarily considered a priori in the survey planning, as for the EU-SILC database. As most of these regions are under-represented in the sample, they must be regarded as small areas. Moreover, of the 57 LMAs of Tuscany in the census, only 29 have observations in the sample. That is, for the remaining 28 LMAs direct estimation of the area inequality parameters is not even possible. For these out-of-sample areas we use our indirect model predictions, calibrating them afterwards by full calibration as explained in Section 5.1. For the other 29 regions we can alternatively also use partial calibration. We compare the results obtained using direct estimators, robust indirect estimation without calibration, symmetric REBLUP calibration, and our two asymmetric calibration methods, respectively. For the sake of brevity we only show some selected results; first when using partial calibration (only available for the 29 sampled LMAs), then using full calibration (for comparison reasons applied to all 57 areas, even if only recommended for the 28 unsampled areas); see Table 2 for the number of observations in each area and its ratio to the population size. In all LMAs less than 1% of the population is sampled.
The mentioned rules of thumb suggest c = 3 and c = 2 for REBLUP-ABC and IF-ABC, respectively. Figures 5 and 7 study the effect of choosing γ according to the proposed method (17) compared to a range of prespecified alternative values.
Results for in-sample LMAs with partial calibration
We first estimate the parameters for the 29 sampled LMAs (recall Table 2) based on the presumably more precise partial calibration. Apart from the robustness study regarding the choice of γ_j, Figures 4 and 5 show the differences in the estimation of the Gini coefficient due to the different calibration methods. Since we do not know the true values of the Gini index for each area, we compare the results with the direct estimates, which are supposed to be unbiased but have a large variance. We further compare all this to the indirect estimate, which minimizes the variance but inherits some bias. While Figure 4 illustrates on the map how the Gini estimates vary over the different methods, Figure 5 shows how the asymmetrically bias-calibrated estimates move between direct and indirect estimates depending on the choice of tuning parameters. In these figures we had c = 3 and c = 2 for REBLUP-ABC and IF-ABC, and γ_j estimated according to (17). Note that REBLUP-ABC and REBLUP-SBC give quite similar results. It is, however, hard to say whether the latter or IF-ABC is closer to the direct estimates. Figure 5 shows nicely that our bias-calibrated estimators are not just alternatives to direct or robust indirect estimators, but actually offer an extremely useful compromise: while still protecting us against the impact of outliers, the bias-calibrated estimators maintain a reasonable variance of Gini predictions over areas. The γ parameter allows us to move smoothly from one extreme to the other. The estimator (17) has a clear tendency towards keeping the bias small, which is typically in the spirit of what practitioners would demand.
Results for all areas with full calibration
As said, for about half of the LMAs we have no observations in the EU-SILC database but only in the census, which in turn does not contain direct information about income. When we want to predict the Gini indices for the 28 unsampled LMAs, one has to switch to full calibration. This can unfortunately imply a heavy smoothing, making all areas look quite similar unless the distributions of the covariates change dramatically over the areas. For comparison reasons we give the estimates of the Gini coefficients with full calibration for all LMAs, i.e. sampled and non-sampled, even though in practice one would take partial calibration for the sampled ones. In the 28 unsampled areas we predict income for all households by setting the area random effect $\hat{u}_j = \hat{u}^{(0.5)}$, which is the median of the predicted random effects; a sketch of this step is given below. Then we use the entire vector of residuals to correct for the bias. For the choice of tuning parameters, c is fixed as before, but γ is now estimated once for all areas (i.e. not varying over areas), using again the entire vector of residuals in the algorithm introduced in Section 5.
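A minimal sketch of this out-of-sample prediction step, under the assumptions just described (all names are our own, and the model fit is taken as given):

import numpy as np

def predict_unsampled_area(X_area, beta_hat, u_hat_sampled):
    # Unit-level income predictions for an area without sample observations:
    # the unknown area random effect is replaced by the median of the
    # predicted random effects from the sampled areas, u_hat^(0.5).
    u_med = np.median(u_hat_sampled)
    return X_area @ beta_hat + u_med  # one prediction per census household

The resulting predictions are then bias-calibrated with the pooled residual vector, i.e. by full calibration as described in Section 5.1.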
In Figures 6 and 7, the analogues of Figures 4 and 5 (without the direct estimates, as these do not exist for unsampled areas), we see that the results are strongly smoothed by the full calibration. As said, this is expected, since it averages out the calibration over areas by using the entire vector of residuals. Note that the scales in the maps are different from those we had for Figure 4. Perhaps surprisingly, here REBLUP-ABC is closer to IF-ABC than to REBLUP-SBC, although everything has been strongly smoothed. Not surprisingly, the effect of the γ choice seems to be attenuated, but this may just be due to the fact that we have taken one value for all areas.
Taking all our findings together, we can clearly recommend the use of asymmetric bias calibration for the indirect robust estimators in SAE. One may prefer the calibration via the CDF when the aim is to minimize the MSE, but the asymmetric calibration through the IF if the aim is to minimize the bias. In both cases, however, full calibration is only recommended for out-of-sample areas, to provide some moderate correction of bias in these areas. Because of its strong smoothing effect it should not be used for the sampled areas. For these, one would take partial calibration with area-wise tuning parameters γ_j, for which our simple estimator (17) seems to work fine. Note that this parameter can be used to move from one extreme to the other (between direct estimates and indirect robust ones).
Conclusion
We introduce robust estimators for potentially non-linear small area parameters, and propose various bias calibration approaches to correct for the inherited biases. In the first approach we use an asymmetric Huber function to calibrate the bias of the robust estimator of the CDF in each area, and then apply the bias-calibrated CDF to estimate any statistical functional of interest.
In the second approach we derive a linear approximation of the statistical functional using its influence function, and then calibrate for the bias through the linear component. Since the symmetric Huber function is a special case of the asymmetric one, using the latter outperforms symmetric calibration when the tuning parameters are chosen appropriately. Data-driven choices of these parameters are introduced, and modifications of the calibration allow its application also to those areas in which no sampled units are available. The simulation results confirm the efficiency gain of these approaches compared to the existing methods. While this was mainly shown for the objective of estimating the Gini coefficient, it is clear that these methods can be applied in other settings. They provide a way to tackle robust estimation with bias calibration in a fairly general framework.
After our simulation study, which shows the excellent performance of the methods as well as the benefits from applying them, we use these methods to estimate the income inequality for all LMAs in Tuscany, Italy. In this application we can clearly see the usefulness of calibration, which exhibits quite serious shifts, indicating important bias corrections. It also shows that full calibration, though extremely useful for doing bias calibration in the non-sampled areas, has a strong smoothing effect. Thus, partial calibration is the preferred choice where feasible. We also illustrate in a sensitivity check the behaviour of the tuning parameters, and the usefulness of the γ_j choice(s) via our estimation proposal.
A Influence function of the Gini coefficient
Using the definition of the Dirac delta function we obtain $\int_0^{+\infty} t F(t)\, d\delta_y(t) = y \cdot F(y)$, and the mean of the contaminated distribution $(1-\epsilon)F + \epsilon\,\delta_y$ is $(1-\epsilon)\mu + \epsilon y$. By the definition given in Hampel (1974), the influence function of the functional $T$ then follows, after a change of variables in which the last equality holds since $f(\cdot)$ is symmetric around 0. Assume now that our model residuals (used for calibration) follow distribution (19), and estimate the two probabilities involved in (20) as follows: $\Pr(\varepsilon < 0 \mid \gamma) = n^-/N$ and $\Pr(\varepsilon \geq 0 \mid \gamma) = n^+/N$, where $n^-$, $n^+$ and $N$ are the numbers of negative, positive and total residuals, respectively. Therefore, a heuristic estimate of $\gamma$, i.e. the skewness factor of the residuals distributed around 0, is $\hat{\gamma} = n^-/n^+$.
A feasible algorithm to obtain data-driven tuning constants in (6) is then the following (a code transcription is given after the list): 1. Centre the block of residuals in each area.
2. Fix the constant c at a given value. Values between 2 and 3 seem to provide good performance in practice, see Chambers et al. (2014).
3. Count the numbers of positive and negative centred residuals in each area, $n_j^+$ and $n_j^-$ for area $j$, and set $\hat{\gamma}_j = n_j^-/n_j^+$.
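The following Python transcription of these three steps is illustrative; the variable names are ours, and the residuals are assumed to come from the fitted unit-level model.

import numpy as np

def tuning_constants(residuals_by_area, c=3.0):
    # Data-driven tuning constants for the asymmetric calibration in (6).
    # Step 1: centre the residuals within each area (centring by the mean is
    # our choice here). Step 2: fix c (values between 2 and 3 work well in
    # practice). Step 3: set gamma_j = n_j^- / n_j^+.
    gamma = {}
    for j, r in residuals_by_area.items():
        r_centred = np.asarray(r) - np.mean(r)
        n_pos = np.sum(r_centred >= 0)
        n_neg = np.sum(r_centred < 0)
        gamma[j] = n_neg / n_pos
    return c, gamma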
"year": 2021,
"sha1": "3f9b3ed479387142faf75fe06d93a72e2cc7e5ef",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3f9b3ed479387142faf75fe06d93a72e2cc7e5ef",
"s2fieldsofstudy": [
"Economics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The Effectiveness of the Generative Model in Learning Synonyms and Antonyms in English Language at Kingdom of Saudi Arabia
Abstract:
The research aims to identify the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for secondary second-grade female students in Bisha governorate. To achieve this goal, the researchers implemented a quasi-experimental approach. The Generative Learning Model was employed in class using a list of synonyms and antonyms chosen by the researchers from the Saudi English language curriculum in the secondary stage. The pre-test and post-test were used as data gathering tools. Forty-eight female students from the secondary second grade were chosen randomly and divided into two groups; the control group consisted of twenty-four female students and the experimental group consisted of twenty-four female students. The results indicate that there was a statistically significant difference between the mean scores of the experimental and control group members on the post-test in the levels of remembering, understanding and application, in favor of the experimental group.
The research recommended using the Generative Learning Model in the educational process because of its proven effectiveness in developing the level of learning synonyms and antonyms.
Introduction:
The learning and mastery of basic language skills is closely related to the learning of vocabulary, especially synonyms and antonyms. Linguistic vocabularies are symbols that are represented in the human mind to facilitate the process of expressing things and ideas. Learning English language synonyms and antonyms is primarily related to knowledge acquisition processes; this, in turn, is reflected in the power of learning and acquiring scientific concepts in various sciences, in view of their impact on other linguistic skills, especially with regard to reading and writing skills (Al-Damig, 2011).
In response to the challenges and requirements of the era, many modern teaching strategies, methods and models have emerged, including the generative learning model, which has received a great deal of care and attention from the educational systems in developed countries. The World Education Conference in 1990 and the Dakar Conference in 2000 recommended that students should be educated in a variety of ways, so that all learners can obtain maximum success and achievement within the framework of their capabilities and abilities and work on developing their skills and outstanding performance in the current era (Al Zand, 2011). The generative learning model is one of the modern models that emphasize learning and focus on the activity of the learner during the learning process. This increases the learner's ability to understand and link information. Generative learning arises when the learner uses various strategies to reach learning. Generative education encourages reducing reliance on the teacher, creates more self-reliance for the learner, and provides an opportunity for the learner by organizing the study content, linking the new content of the educational material with the previous knowledge of the learners, and generating ideas that develop thinking (Al-Mahdaoui, 2006). The process of learning the synonyms and antonyms of the English language and acquiring its skills is a major goal of the educational process in the Kingdom of Saudi Arabia, because of the importance of the language for different careers and for the life of the individual and society alike. The primary goal of language learning is to provide the learner with the ability to communicate in languages, and the process of using vocabulary is an essential element in learning any language (Berry, 2010).
Recent trends in education call for the necessity of providing education for all members of society, while adopting and using modern teaching strategies and models that center on the learner and help him master this language, so that he becomes able to use what he has learned in past years and add to it the vocabulary that will be learned.
The researchers conclude that the importance of learning the English language has become a recognized issue. Therefore, modern methods and models of teaching should be used in teaching the English language, which is what the current study does: it examines the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bishah governorate in the Kingdom of Saudi Arabia.
Research Problem:
The research problem is evidenced by the results of many related studies that confirm the existence of a problem among secondary school students in learning the vocabulary of the English language. The study of Al-Naghmishi (2014) concluded that students in Buraidah in Saudi Arabia face many difficulties in acquiring English vocabulary. It used the semantic mapping strategy to develop the students' English language vocabulary and maintain the impact of learning it. Al-Nisour (2005) also indicated the necessity of using foreign vocabulary learning strategies, taking into consideration that intermediate school students in Jordan show weakness in acquiring English vocabulary. The study of Zureikat (2013) shed light on the necessity of improving the vocabulary acquisition of secondary school students, and it also called for the use of strategies that activate prior knowledge of the content to increase students' reading comprehension and vocabulary acquisition, which are the main problems facing them. On the other hand, the study of Al-Ananza (2015) revealed that despite the efforts made to develop the level of learning the English language, the results achieved so far are not encouraging in terms of language acquisition. Most secondary school students still have insufficient use of vocabulary. The study of Salama (2007) indicated that the learners have achieved less than their capabilities in using the language in new situations. The study also indicated that vocabulary acquisition is the main aspect of learning the English language.
From the previous studies, it became clear that female students have a weakness in learning the English language, especially synonyms and antonyms. Therefore, the researchers conducted an exploratory study with the aim of identifying some of the obstacles and problems of learning synonyms and antonyms in the English language, on a sample of 75 secondary school female students in Bisha governorate, Saudi Arabia. The survey found the following results: Regarding the problems that prevent female students from practicing the English language, 36% of the sample cited forgetting vocabulary. The response chosen most often as the most successful way to increase vocabulary was making vocabulary illustrations, with the highest response rate of 55.3%. Regarding what the student needs while learning the English language, obtaining new vocabulary and practicing its use was among the first needs, cited by 60.4% of the total sample. Moreover, 94.2% of the sample agreed that learning synonyms and antonyms helps in developing the linguistic outcome. As for the appropriate method for learning synonyms and antonyms in the English language, the use of words in speaking and writing was chosen by 53.5%, followed by the visual method with 25.8%, then the auditory method with 14.2%.
The previous results made it evident that the female students need to learn synonyms and antonyms in the English language and to be trained in their use. To overcome this problem, the importance of clarifying the relations between vocabulary items by forming mental associations should be taken into consideration. This is what the generative model does for synonyms and antonyms in the English language, as the generative model helps learners to participate actively in the learning process and generate knowledge.
In light of the foregoing, the research problem can be identified as the weakness of female students in learning vocabulary (synonyms and antonyms) in the English language course. Therefore, the researchers believe that the current study should test the effect of using the generative model in learning synonyms and antonyms in the English language course for secondary second-grade school female students.
Research Questions:
1- What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students at the level of understanding?
2- What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students at the level of remembering?
3- What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students at the level of application?
4- What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bisha governorate?
Research Objectives:
This research aims to:
1- know the effectiveness of using the generative model in the achievement of synonyms in the English language course for second-grade secondary school female students at the level of understanding.
2- know the effectiveness of using the generative model in the achievement of antonyms in the English language course for second-grade secondary school female students at the level of remembering.
3- know the effectiveness of using the generative model in the achievement of synonyms in the English language course for second-grade secondary school female students at the level of application.
4- know the effectiveness of using the generative model in the achievement of synonyms in the English language course for second-grade secondary school female students in general.
Significance of the Research:
The significance of the current research is that it may be useful in:
1- presenting a visualization of how to use the generative model to learn a proposed unit on synonyms and antonyms in the second-grade English language course.
2- using the generative model in learning synonyms and antonyms among secondary school female students.
3- helping English language teachers in the secondary stage to develop new strategies and models for teaching English language courses, especially synonyms and antonyms.
4- directing the attention of educational supervisors towards teachers and urging them to use the generative model in teaching synonyms and antonyms in English language courses.
5- urging those in charge of curriculum development in the Ministry of Education to improve the methods of acquiring English language vocabulary in ways that allow the use of the generative model.
Research Limits: a-Human Limits:
Female students of the second year of secondary education in Bisha governorate in the Kingdom of Saudi Arabia. b- Objective limits: Confined to studying the effectiveness of using the generative model in learning synonyms and antonyms in English language courses by teaching a proposed unit entitled (Travel around the World) prepared according to the generative model. c- Spatial boundaries: The research was conducted in two female secondary schools in the governorate of Bisha, in the Kingdom of Saudi Arabia. d- Temporal boundaries: The academic year 2019-2020 AD.
Research Terms: Effectiveness:
It is defined as the ability to achieve the result according to specific criteria, and the efficiency increases whenever the result can be fully achieved (Badawi, 2001).
The researchers defined it as the degree of growth expressed by the difference between the average grades of the second-year secondary school female students in the two applications of the pre- and post-achievement test after using the generative model in teaching a proposed unit (Travel around the World) in the English language course.
The Generative Learning Model (G.L.M):
It is a model that reflects Vygotsky's view of learning. The researchers defined it as a model that aims to help the students generate multiple types of relationships. In this study, the model is used to link previous experiences with later experiences of synonyms and antonyms in the prepared unit of the English language course.
Synonyms:
They are words that give the same meaning or are equivalent in meaning.
The researchers define them as words that give the same meaning or denote the same topic in the English language course and that are learned using the generative model.
Antonyms:
In the linguistic sciences, antonyms are two words that denote opposite meanings, such as black and white (Mukhtar, 2008).
The researchers define them as words that denote opposite meanings in the English language course and that are learned using the generative model.
Literature Review: Generative Learning Model:
The generative learning model is one of the most prominent models focused on developing the learner's mental skills and cognitive growth. It rests on the premise that learning is a positive, active process in which students' past experiences are recalled and linked to what is to be learned in order to form new ideas. The model also relies on the mental processes produced by the brain during learning or when facing daily situations; the generation of information arises from the use of cognitive and metacognitive strategies in a social context (Solomon, 2015).
Al-Shammari (2018) defines the generative learning model as: "a process of building self-knowledge through interactive mental activities that link the learner's prior knowledge with the new knowledge that comes to him through participatory learning among students and the strengthening and reinforcement of the teacher" (p. 136).
Al-Zahrani (2018) defines the generative learning model as: "an educational model based on constructivist theory that aims to develop students' achievement by generating relationships between their previous and subsequent experiences, as well as between the parts of the knowledge or subsequent experiences to be acquired, according to the phases of the generative learning model: the preliminary stage, the stage of concentration, the stage of modernization, and the stage of application."
Obaid (2013) defines the generative learning model as: "a teaching model that includes four sequential stages (the introduction phase, the focus phase, the challenge phase, and the application phase) and aims to achieve meaning-based learning by providing the learner with the ability to generate relationships between his previous and new experience and between the parts of the new knowledge he acquires."
Wittrock (2014) holds that learning according to the generative model is the process of creating relationships, or structure, between the components or parts of the information the individual tries to understand, and between the individual's existing knowledge and that information. The learner should be active in establishing these relationships and in attending to the basic structure of the information to be learned.
Objectives of the Generative Learning Model:
Obaid (2013) holds that the objectives of the generative learning model are as follows:
* It allows students to think freely, which develops their creative thinking.
* It promotes respect for the opinions of others and builds self-confidence, self-respect, and appreciation.
* It leads to active, meaningful learning and thus has a lasting impact.
* It encourages the student to initiate activity on his own, transforming his role from a listener and receiver of knowledge to a participant and actor in building it.
* It makes students think about different solutions to a single problem, which leads them to use creative thinking in its broadest sense.
The Main Pillars of the Generative Learning Model:
Generative learning takes place when the learner engages in appropriate cognitive processing during learning, including attending to relevant information, organizing incoming information into a coherent cognitive structure, and merging cognitive structures with one another and with related prior knowledge activated in long-term memory (Fiorella & Mayer, 2015).
The teacher's role in the generative learning model lies in assisting students in generating connections, helping them relate new ideas to each other and to their prior learning. The student is directed to find these connections; education thus moves from delivering information to facilitating the building of a fabric of knowledge (Al-Kubaisi & Al-Saadi, 2012).
Al-Majdalawi and Al-Abed (2018) argue that the generative model casts the learner as an active participant who trusts his abilities to relate, analyze, and make judgments, and who uses his knowledge to build new meaningful knowledge. It casts the teacher as a facilitator and modifier of knowledge and a provider of material experiences and realistic activities, which creates an appropriate learning environment for meaningful learning and facilitates the development of students' vocabulary.
Obaid (2013) holds that generative learning focuses on learning based on understanding, by linking the student's previous experiences in the structure of knowledge with his subsequent experiences and forming relationships between them. For new knowledge to remain with the individual, it should be merged into the already existing cognitive structure, and this takes place through real social interaction among students and between them and their teacher.
Research conducted by Riezebos, Yu and Zhu (2016) demonstrated that the strategy of the generative learning model is based on creating and refining personal mental structures around educational environments, by creating a theoretical framework for generative learning that combines content and context analysis. Its aim is to allow students to participate in building the conveyed content and framing the learning contexts, where they can link new information with old information, gain meaningful knowledge, and use their metacognitive abilities.
Stages and Phases of the Generative Learning Model:
Askar (2018) and Fiorella and Mayer (2015) indicated that the generative learning model consists of four stages, or phases, which can be presented as follows:
Preliminary Phase:
Guidance: Students are directed to think about the topic of the lesson and to link it with previous topics.
Provoking Students' Daily Experiences:
Students' experiences are provoked through the process of information synthesis, in which the teacher asks the students to pose questions themselves.
Presentation of Students' Ideas: Through dialogue and discussion, the teacher allows the students to think out loud, and their answers are then presented to the teacher, whether verbally or in writing in their own books. In this way the teacher can learn what prior information the learners hold.
Interpreting Students' Ideas and Building New Ideas:
The teacher, along with the students, explains their most important ideas and uses them to build new ones, including commenting on the ideas presented in the previous step.
The Focus Phase:
This phase focuses on the students themselves, who are organized into cooperative work teams of four to six members, with the work distributed among the members of each group according to their assigned roles. This distribution allows the teacher to move between the teams. In this phase, connectivity between day-to-day knowledge and the targeted knowledge should be achieved.
The Challenge Phase:
In this phase, the leader of each group is given the opportunity to contribute his observations and comments, while the teacher observes the students' activities, supports them with the school's educational resources, and reintroduces the scientific terms and concepts to be reached.
The Application Phase:
In this phase, the teacher uses the scientific concepts as functional tools to solve problems and to reach results and applications used in new life situations, and then helps expand the scope of the concept.
The Role of Generative Learning Model in Developing English language Vocabulary:
According to the generative learning model, learners who employ it in learning synonyms and antonyms can develop new knowledge based on the analysis and synthesis of information. They can relate their previous knowledge to the new knowledge in order to develop cognitive representations that empower them to produce new ideas and new processes or models in which this knowledge is used (Al-Ruwais, 2010).
Increased attainment of synonyms and antonyms in the English language, and ease of familiarity with them, depends mainly on the generative learning model. This occurs through a focus, during its various stages, on carrying out many mental and practical investigative activities that give the learner opportunities to practice science processes such as observation, interpretation, classification, measurement, prediction, and procedural definition, in order to reach the concepts and information by himself. The learner also accesses knowledge by himself and links it with his previous knowledge and experiences, and learning supports are used to bring the learner to the maximum of his capabilities (Ahmad, 2013: 355).
Previous Studies:
This section introduces previous research and studies that investigated both the generative learning model and synonyms and antonyms. It facilitates comparison and contrast between the present study and previous research in terms of methodology, instruments, population and sampling, results, and other aspects.
The study of Ting (2016) aimed to present a conceptual framework, using the generative model, proposing four levels of smart education and ten key features of smart learning environments for learners who need them in developing knowledge. The smart teaching framework includes class-based differentiated instruction, group-based cooperative learning, individualized personal learning, and comprehensive learning based on the generative model. The study reached many results, the most important of which are:
* The basic concept of generative learning includes the creation and refinement of personal mental structures around educational environments, by creating a theoretical framework for generative learning that combines content and context analysis.
* The generative model allows students to participate in building the conveyed content and framing learning contexts, where they can link new information with old information, gain meaningful knowledge, and use their metacognitive abilities.
The study of Wahydo (2013) investigated whether synonym and antonym tests measure similar ranges of verbal abilities and whether they have a similar psychological effect. The data used were subsets of data collected during the Gadjah Mada University (UGM) Postgraduate Admission Test in the 2013-2014 academic year, using three forms of the PAPS Graduate Academic Aptitude Test. Confirmatory factor analysis revealed that tests of synonyms and antonyms assess similar areas of verbal ability: a model combining items from both tests into one dimension fit the data better than a model separating the two tests into different dimensions. The study indicated large correlations between the dimensions, and the one-dimensional model showed correlations with areas of verbal ability such as verbal knowledge, comprehension, and reasoning. Additional item-level analysis showed that antonym items tend to be more difficult than synonym items; this finding indicates that although both tests evaluate similar content, answering the antonym test requires a more complex cognitive process than answering the synonym test.
The study of Anderman (2010) aimed to identify developments in research on motivation and achievement and how these developments are reflected in Wittrock's generative model of learning. Specifically, the study focused on the roles of prior knowledge, the generation of knowledge, and beliefs about ability. It reached several results, such as:
* Although Wittrock's generative model of learning was not designed as a model of human motivation, the basic principles of the model are reflected in many current motivational research theories and programs.
* The role of perception, the building of meaning, and beliefs about ability are three areas that have become very important topics in the study of academic motivation.
The study of Ball (2009) aimed to examine the preparation of teachers to teach in culturally and linguistically complex classrooms in international contexts. It examined the long-term social and institutional implications of professional development and the documentation processes that facilitate teachers' continuous learning. The study reached many results, the most important of which are:
* The generative model contributed to teachers' development of general knowledge and showed how they relied on that knowledge in their thinking with students and during teaching.
* The generative model accelerates educational parity across ethnic and social boundaries, and it contributes to overcoming the legacy of academic failure that afflicts many students through an expanded understanding of the processes of generative change.
Research Hypotheses:
1- There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of understanding.
2- There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of remembering.
3- There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of application.
4- There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test in general.
Research Methodology:
A quantitative, quasi-experimental approach was chosen for this research because it is suited to measuring the phenomenon being studied, i.e., the effectiveness of the generative learning model in learning synonyms and antonyms in the English language course. The design included purposive sampling and systematic data collection and analysis procedures.
Research Design: The procedures for selecting study participants and for collecting and analyzing data are described in this section.
The research community consists of all the second-grade secondary school female students in Bisha governorate, in the second semester of the academic year 2019/2020.
Research Sample:
For the purpose of this study, (48) second-grade secondary school EFL female students were randomly selected to participate. They were divided into two groups: a control group consisting of (24) female students and an experimental group consisting of (24) female students. Secondary school participants were selected based on the researchers' assumption that, owing to their advanced level and longer exposure to the English language, these learners would have a better grasp of learning synonyms and antonyms and would be better able to answer the questions of the achievement test. The population of the study consisted of EFL learners attending these schools. Since the participants were all female learners, gender was not a variable in the current study. Their ages ranged between (15) and (18) years.
Research Tools and Materials:
Due to the nature of the research and its objectives, the following tools and materials were used:
1- An achievement test on the synonyms and antonyms of the proposed unit (Travel around the World) of the English language course.
2- A teacher's guide for teaching the synonyms and antonyms of the proposed unit (Travel around the World) according to the generative learning model.
3- A list of the synonyms and antonyms in the proposed unit according to the generative learning model.
4- An activity brochure for the female students.
Exploratory Testing of the Research Tool (Achievement Test):
After the research tool (the achievement test) had been prepared and amended in light of the arbitrators' opinions, it was piloted on a random sample of (30) students from outside the study sample, in order to verify the coefficients of ease, difficulty, and discrimination, as well as the validity and reliability of the study tool.
Calculation of the Difficulty and Ease Coefficients:
The researchers calculated the difficulty and ease coefficients for an exploratory sample of (30) female students. The results are presented in Table (1), which indicates that the values of the difficulty coefficient ranged from (33.3%) to (63.3%) and the ease coefficients ranged between (36.7%) and (66.7%). All these values are acceptable and indicate the suitability of the test for field application. According to Allam (2007), if the difficulty coefficient is less than (25%) the question is considered difficult, if it exceeds (75%) the question is considered easy, and what falls between them is considered of medium difficulty.
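To make the computation concrete, the sketch below shows how difficulty and ease coefficients of this kind could be derived from a scored 0/1 response matrix. The data are randomly generated placeholders, not the study's pilot responses, and the screening band follows the rule attributed to Allam (2007) in the text.

```python
# Minimal sketch (not the study's data): difficulty/ease coefficients from
# a scored 0/1 matrix of 30 pilot students x 20 items.
import numpy as np

rng = np.random.default_rng(42)
responses = rng.integers(0, 2, size=(30, 20))   # rows = students, cols = items

ease = responses.mean(axis=0) * 100             # % of students answering correctly
difficulty = 100 - ease                         # % answering incorrectly

# Screen items against the band described in the text (Allam, 2007):
for i, d in enumerate(difficulty, start=1):
    if d < 25:
        label = "too easy"
    elif d > 75:
        label = "too difficult"
    else:
        label = "medium difficulty (acceptable)"
    print(f"item {i:2d}: difficulty {d:5.1f}%, ease {100 - d:5.1f}% -> {label}")
```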
Calculation of the Validity of the Test's Internal Consistency:
The researchers calculated the internal consistency of the test items using Pearson correlation coefficients between each item and the total score of the test, as shown in Table (2). It is clear that all the items of the achievement test are significant at the (0.01) level, apart from a few that are significant at the (0.05) level. Accordingly, all the items composing the test have a high degree of validity, which makes the test suitable for field application.
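The item-total correlation check described above can be reproduced in outline as follows; the response matrix is again simulated, not the study's data. Note that this simple version correlates each item with a total that includes the item itself; a corrected variant would subtract the item from the total first.

```python
# Sketch of the item-total Pearson correlation check on simulated responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(30, 20))   # pilot sample, 0/1 item scores
total = responses.sum(axis=1)                   # total test score per student

for i in range(responses.shape[1]):
    r, p = stats.pearsonr(responses[:, i], total)
    sig = "p < 0.01" if p < 0.01 else ("p < 0.05" if p < 0.05 else "n.s.")
    print(f"item {i + 1:2d}: r = {r:+.2f} ({sig})")
```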
Field Study:
1- Pre-application of the achievement test to the two research groups.
2- Teaching the proposed unit to the female students of the experimental group using the generative model, while the female students of the control group were taught in the traditional method.
3- Post-application of the test of learning synonyms and antonyms to the two research groups (experimental and control).
Results and their Discussion:
1- The answer to the first question: What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bisha governorate at the level of remembering?
To find out whether there are statistically significant differences at the (α ≤ 0.05) level between the mean scores of the experimental and control group students in the post application of the achievement test at the level of remembering, the researchers used the (T) test for independent samples to clarify the significance of the differences between the means of the two groups. The results show the superiority of the experimental group over the control group in the post application of the achievement test at the level of remembering (Table 3). The mean score of the experimental group at the level of remembering was (4.67), while that of the control group was (2.88). The value of (T) was (5.264) at (46) degrees of freedom, which is statistically significant at the (0.05) level. Thus, the first hypothesis of the research, which states that "There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of remembering, in favor of the experimental group," was accepted.
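For readers unfamiliar with the procedure, a minimal sketch of an independent-samples (T) test with two groups of 24 follows. The score vectors are illustrative placeholders and will not reproduce the study's statistics.

```python
# Illustrative independent-samples t-test with two groups of 24;
# the scores below are placeholders, not the study's raw data.
import numpy as np
from scipy import stats

experimental = np.array([5, 4, 5, 6, 4, 5, 5, 4, 6, 5, 4, 5,
                         5, 4, 5, 6, 4, 5, 3, 5, 4, 5, 5, 4])
control = np.array([3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3,
                    3, 2, 3, 2, 3, 3, 2, 3, 4, 3, 2, 3])

t, p = stats.ttest_ind(experimental, control)   # df = 24 + 24 - 2 = 46
print(f"t(46) = {t:.3f}, p = {p:.4f}")
print(f"means: experimental {experimental.mean():.2f}, control {control.mean():.2f}")
```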
This result is consistent with many previous studies that have proven the feasibility and effectiveness of the generative model in the educational process in general. It agrees with the study of Al-Saeedan (2011), which found statistically significant differences between the averages of students' acquisition of physical concepts attributable to the teaching strategy, in favor of the generative learning strategy compared with the learning cycle and the usual method. It also agrees with the study of Al-Sheikh (2013), which indicated the effectiveness of the generative model in developing students' literary text skills, with large effect sizes at the levels of, in order: comprehension, interpretation, criticism, literal, taste, and then creative reading.
2- The answer to the second question: What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bisha governorate at the level of understanding?
To find out whether there are statistically significant differences at the (α ≤ 0.05) level between the mean scores of the experimental and control group students in the post application of the achievement test at the level of understanding, the researchers used the (T) test for independent samples to clarify the significance of the differences between the means of the two groups. The results show the superiority of the experimental group over the control group in the post application of the achievement test at the level of understanding (Table 4). The mean score of the experimental group at the level of understanding was (7.08), while that of the control group was (2.42). The value of (T) was (-9.418) at (46) degrees of freedom, which is statistically significant at the (0.05) level. Thus, the second hypothesis of the research, which states that "There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of understanding, in favor of the experimental group," was accepted.
This result agrees with the study of Al-Jamaan (2013), which found statistically significant differences at the (0.05) level in the average chemistry achievement of basic ninth-grade female students attributable to the teaching model, in favor of the experimental group taught according to the generative learning strategy. It also agrees with the study of Abu Qadiri (2016), which concluded that there were statistically significant differences between the arithmetic means of the post-measurement of chemistry achievement in favor of the experimental group, with an effect size of (68%); the results also showed statistically significant differences between the arithmetic means of the two groups' scores in the achievement test in favor of the experimental group, with an effect size of (70%).
3- The answer to the third question: What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bisha governorate at the level of application?
To find out whether there are statistically significant differences at the (α ≤ 0.05) level between the mean scores of the experimental and control group students in the post application of the achievement test at the level of application, the researchers used the (T) test for independent samples to clarify the significance of the differences between the means of the two groups. The results show the superiority of the experimental group over the control group in the post application of the achievement test at the level of application (Table 5). The mean score of the experimental group at the level of application reached (3.29), while that of the control group reached (1.46). The value of (T) was (-9.903) at (46) degrees of freedom, which is statistically significant at the (0.05) level. Thus, the third hypothesis of the research, which states that "There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test at the level of application, in favor of the experimental group," was accepted.
This result agrees with the study of Qarareh (2016), which found statistically significant differences at the (0.05) level for the effect of the constructivist learning model on achievement and scientific thinking in favor of the experimental group. It also agrees with the study of Lisnasari Andi (2017), which concluded that students' learning achievement before being taught using the generative learning model with a writing-pair strategy was in a very low category, whereas their achievement after teaching with the generative learning model and the writing-pair strategy was in a high category, and that students' responses to activities in the learning process with the generative learning model were good.
4- The answer to the main question: What is the effectiveness of using the generative model in learning synonyms and antonyms in the English language course for second-grade secondary school female students in Bisha governorate at all levels?
To find out whether there are statistically significant differences at the (α ≤ 0.05) level between the mean scores of the experimental and control group students in the post application of the achievement test at all levels, the researchers used the (T) test for independent samples to clarify the significance of the differences between the means of the two groups in the post application of the test as a whole. The results show the superiority of the experimental group over the control group in the post application of the achievement test as a whole (Table 6). The mean score of the experimental group in the achievement test as a whole was (15.00), while that of the control group was (7.75). The value of (T) was (-12.341) at (46) degrees of freedom, which is statistically significant at the (0.05) level. Thus, the fourth hypothesis of the research, which states that "There is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test as a whole, in favor of the experimental group," was accepted. This result agrees with the study of Al-Masri (2016), which concluded that there were statistically significant differences between the experimental and control groups in the achievement test as a whole in favor of the experimental group that studied according to generative learning. It also agrees with the study of Al-Shammari (2018), which demonstrated the effectiveness of the generative learning model in developing some mathematical operations at the levels of comprehension, remembering, and application, and in the overall score.
Conclusion:
The present study investigated the effectiveness of the generative model in learning synonyms and antonyms in the English language in the Kingdom of Saudi Arabia. A random sample of (48) female students participated; all of them took an achievement test consisting of four types of objective questions. The students were divided into a control group and an experimental group, and both were exposed to the synonyms and antonyms that form part of the Saudi secondary second-grade curriculum. The control group studied the synonyms and antonyms in the conventional way of teaching, while the experimental group studied using the generative model. Four weeks later, a post-test took place for both the control group and the experimental group. While the synonyms and antonyms were being taught to the experimental group, positive feedback was evident: the students expressed their enjoyment of the activities involving synonyms and antonyms, found themselves active all the time, and enjoyed the creativity of learning the new words and generating short sentences. The results indicated that there is a statistically significant difference at the (0.05) level between the mean scores of the experimental group students and the scores of the control group students in the post application of the achievement test as a whole, in favor of the experimental group.
Recommendations:
Based on the findings of the research, the researchers recommend the following:
1- Using the generative learning model in the educational process, given its proven effectiveness in developing the level of learning synonyms and antonyms among second-grade secondary school female students.
2- Encouraging teachers to use the generative learning model in the educational process.
3- Providing the technical environment necessary for using the generative learning model in the educational process.
4- Encouraging female students to use modern technologies from the early educational stages, so that they can deal with these technologies and keep pace with current progress in all fields.
Suggestions for Further Research:
In light of the objectives and results of the current research, the researchers propose the following studies:
1- Studying the effectiveness of the generative learning model in acquiring the skills of listening, speaking, reading, and writing at different educational stages.
2- Studying the effectiveness of the generative learning strategy in developing the skills of criticism, analysis, and innovative thinking in English literature.
3- Conducting descriptive studies on the obstacles to using the generative learning model in the educational process, in order to identify these obstacles and develop appropriate solutions for them. | 2022-03-18T15:11:21.994Z | 2022-02-28T00:00:00.000 | {
"year": 2022,
"sha1": "fa46a55b8f19ec464b6e9730a8909f275a084c8c",
"oa_license": "CCBY",
"oa_url": "https://eduj.uowasit.edu.iq/index.php/eduj/article/download/2059/2011",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8868c9cc81774617fee85a7b695a316c5c264f22",
"s2fieldsofstudy": [
"Education",
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": []
} |
52917859 | pes2o/s2orc | v3-fos-license | Prescribing Variation in General Practices in England Following a Direct Healthcare Professional Communication on Mirabegron
Introduction: Pharmacovigilance may detect safety issues after marketing of medications, and this can result in regulatory action such as direct healthcare professional communications (DHPC). DHPC can be effective in changing prescribing behaviour, however the extent to which prescribers vary in their response to DHPC is unknown. This study aims to explore changes in prescribing and prescribing variation among general practitioner (GP) practices following a DHPC on the safety of mirabegron, a medication to treat overactive bladder (OAB).
Methods: This is an interrupted time series study of English GP practices from 2014–2017. National Health Service (NHS) Digital provided monthly statistics on aggregate practice-level prescribing and practice characteristics (practice staff and registered patient profiles, Quality and Outcomes Framework indicators, and deprivation of the practice area). The primary outcome was monthly mirabegron prescriptions as a percentage of all OAB drug prescriptions and we assessed the change following a DHPC issued by the European Medicines Agency in September 2015. The DHPC stated mirabegron use was contraindicated with severe uncontrolled hypertension and cautioned with hypertension. Variation between practices in mirabegron prescribing before and after the DHPC was assessed using the systematic component of variation (SCV). Multilevel segmented regression with random effects quantified the change in level and trend of prescribing after the DHPC. Practice characteristics were assessed for their association with a reduction in prescribing following the DHPC.
Results: This study included 7408 practices. During September 2015, 88.9% of practices prescribed mirabegron and mirabegron comprised a mean of 8.2% (SD 6.8) of OAB prescriptions. Variation between practices was classified as very high and the median SCV did not change significantly (p = 0.11) in the six months after the September 2015 DHPC (12.4) compared to before (11.6). Before the DHPC, the share of mirabegron over all OAB drug prescriptions increased by 0.294 (95% confidence interval (CI), 0.287, 0.301) percentage points per month. There was no significant change in the month immediately after the DHPC (−0.023, 95% CI −0.105 to 0.058), however there was a significant reduction in trend (−0.036, 95% CI −0.049 to −0.023). Higher numbers of registered patients, patients aged ≥65 years, and practice area deprivation were associated with having a significant decrease in level and slope of mirabegron prescribing post-DHPC.
Conclusion: Variation in mirabegron prescribing was high over the study period and did not change substantively following the DHPC. There was no immediate prescribing change post-DHPC, although the monthly growth did slow. Knowledge of the degree of variation in and determinants of response to safety communications may allow those that do not change prescribing habits to be provided with additional support.
Introduction
When medicines are first launched, evidence of drug efficacy and safety may be incomplete, and for approximately 10% of drugs, information about serious risks associated with the drug does not become known until after release onto the market [1]. The pre-marketing phase based on randomised controlled trials generally involves healthier participants than the general patient population, relatively short durations of follow-up, and sample sizes which are only powered to detect a difference in the primary efficacy outcome. Post-marketing pharmacovigilance is necessary to monitor benefits and risks based on real-world use. Emerging safety issues identified in post-marketing monitoring may require regulatory action to maintain a favourable risk-benefit ratio. This can involve a change in the terms of a product licence, a direct healthcare professional communication (DHPC) from medicine regulators to healthcare professionals warning of a new adverse effect, caution, or contraindication, or withdrawal of a drug from the market.
An example of a drug recently subject to a Europe-wide DHPC is mirabegron, licensed for the treatment of overactive bladder (OAB) by the European Medicines Agency (EMA) in December 2012 [2]. It is a beta-3 adrenoreceptor agonist and is the first treatment for OAB with this therapeutic target. Other pharmacological treatment options for OAB, such as oxybutynin, are antimuscarinic drugs, which carry a risk of anticholinergic adverse effects due to their mechanism of action, such as dry mouth, dizziness, constipation, and cognitive impairment [3]. Mirabegron, as a new active substance, is subject to additional monitoring post-marketing, generally for a period of five years under EMA rules. In July 2015, a review of safety data by the EMA found an increased risk of severe hypertension associated with mirabegron, and cerebrovascular and cardiovascular events such as stroke linked to mirabegron had been reported. The EMA deemed that this required active dissemination regarding the change of use of mirabegron. A DHPC letter was sent to healthcare professionals in September 2015 by European medicine regulators to inform them that mirabegron was contraindicated in patients with severe uncontrolled hypertension (systolic blood pressure ≥180 mmHg, diastolic blood pressure ≥110 mmHg, or both) [4]. The product licence was also amended to caution prescribing where systolic blood pressure is ≥160 mmHg or diastolic blood pressure is ≥100 mmHg.
DHPCs have been shown to be effective in changing prescribing behaviour. The impact of these has been evaluated for a wide range of therapeutics, including selective serotonin reuptake inhibitors, antipsychotics and oral contraceptives [5][6][7], and DHPCs for safety issues with a risk of death and/or disability may have a greater impact on prescribing [8]. However, the extent to which prescribers vary in their response to regulatory safety communications is unknown. An understanding of the degree of variation in and determinants of uptake of DHPCs may allow groups that do not change prescribing to be supported with specific interventions.
This study aims to explore changes in prescribing in general practices in England following a DHPC regarding the safety of mirabegron.
The objectives were:
1. To quantify variation between general practitioner (GP) practices in rates of mirabegron prescribing before and after regulatory safety communication;
2. To determine the effect of this safety warning on the level and trend of mirabegron prescribing among general practices in England;
3. To quantify variation between GP practices in response to the regulatory safety communication; and
4. To identify GP practice factors that explain variations in the response to the regulatory safety communication.
Study Design, Setting and Participants
The STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement has been used in the reporting of this research [9]. This study utilises an interrupted time series design, the strongest quasi-experimental design to assess the effect of policy or regulatory interventions [10].
The setting is English general practice and includes all GP practices in England using prescribing data available from the National Health Service (NHS) Digital platform. This provides monthly statistics of prescribing of different medicines aggregated at the level of GP practices for all practices in England. The study period was January 2014 to March 2017. Atypical practices were excluded, i.e., those with <750 registered patients, <500 patients registered per full time equivalent (FTE) GP, or >5000 registered patients per FTE GP. This is consistent with previous studies utilising administrative GP practice data from the same source [11]. In addition, practices with fewer than 100 prescriptions per month during the 12 months either before or after the DHPC (i.e., from August 2014 to October 2016) were excluded, to ensure that included practices contributed sufficient data in the immediate period before and after the DHPC.
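A rough sketch of these practice-level exclusions, assuming a pandas DataFrame with illustrative column names (`registered_patients`, `gp_fte`) and invented values, might look as follows; the prescription-volume filter would additionally require the monthly prescribing file.

```python
# Sketch of the practice-level exclusion criteria (illustrative data only).
import pandas as pd

practices = pd.DataFrame({
    "practice_id": ["A", "B", "C", "D"],
    "registered_patients": [6600, 700, 9000, 12000],
    "gp_fte": [4.0, 1.0, 1.5, 3.0],
})

patients_per_fte = practices["registered_patients"] / practices["gp_fte"]
included = practices[
    (practices["registered_patients"] >= 750)
    & patients_per_fte.between(500, 5000)
]
# A further filter (not shown) would drop practices with <100 prescriptions
# in any month from Aug 2014 to Oct 2016, using the monthly prescribing file.
print(included)   # keeps A (1650/FTE) and D (4000/FTE); drops B and C
```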
Variables
The primary outcome was prescriptions for mirabegron as a percentage of all prescriptions for drugs to treat OAB.
Characteristics of GP practices which may relate to prescribing include the number of FTE GPs in each practice, the age and sex distribution of GPs, and whether the practice has any registrar GPs (i.e., qualified doctors undertaking specialist training in general practice) [11]. Indicators of quality of care through the Quality and Outcomes Framework (QOF) are available for each practice, including the overall score, as well as indicators relating to specific conditions such as cardiovascular disease. Information is also provided on the practice list size (i.e., the number of patients registered to each practice), and the age and sex distribution of registered patients. Lastly, although no other practice-level patient characteristics are available, the Index of Multiple Deprivation (IMD) of the geographic area a practice is located in was used.
Data Sources/Measurement
Monthly prescribing data relating to mirabegron, OAB drugs and all prescription items were downloaded from the NHS Digital website for the study period. Prescribed products are coded based on their British National Formulary (BNF) classification, and mirabegron (0704020AE) and OAB drug prescribing (070402) were defined using this coding. All drugs listed in BNF section 7.4.2 (Urinary frequency, enuresis, and incontinence) were considered as OAB drugs (see Supplementary Table S1). The number of prescriptions for each product that was dispensed in the specified month is captured in this data. The data relates to NHS prescriptions issued by general practices in England and dispensed in any community pharmacy in the United Kingdom (UK). Prescriptions may be issued by any prescribing staff within practices, including GPs, nurses, and pharmacists and private prescriptions are not included.
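Given the BNF coding above, the primary outcome could be derived along the following lines. The extract below is a toy example: only the mirabegron stem (0704020AE) and the OAB section prefix (070402) are codes quoted in the text, and the other codes are placeholders for unspecified OAB drugs.

```python
# Toy derivation of mirabegron's percentage share of OAB prescriptions
# per practice-month; codes other than 0704020AE/070402 are placeholders.
import pandas as pd

rx = pd.DataFrame({
    "practice_id": ["A", "A", "A", "B", "B"],
    "month": ["2015-09"] * 5,
    "bnf_code": ["0704020AEAAABAB", "0704020X0AAAAAA", "0704020Y0AAAAAA",
                 "0704020AEAAABAB", "0704020X0AAAAAA"],
    "items": [6, 50, 20, 2, 40],
})

oab = rx[rx["bnf_code"].str.startswith("070402")].copy()
oab["mira_items"] = oab["bnf_code"].str.startswith("0704020AE") * oab["items"]
share = (oab.groupby(["practice_id", "month"])
            .apply(lambda g: 100 * g["mira_items"].sum() / g["items"].sum()))
print(share)   # mirabegron as a % of all OAB prescriptions per practice-month
```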
Baseline GP practice workforce and registered patient data (i.e., from 2014) for included practices were downloaded from the NHS Digital website and were summarised at the practice level. In addition, QOF indicators were obtained for 2014 and 2015 for overall score and indicators of hypertension and dementia prevalence (which could plausibly explain variation in prescribing of mirabegron and other antimuscarinic OAB drugs, respectively). For deprivation, the IMD for 2015 is provided for geographic areas (lower-layer super output areas, LSOAs) by the Department for Communities and Local Government. The index captures the following dimensions of deprivation: income, employment, education, health, crime, access to housing and services, and living environment. Practices were assigned the IMD decile of the LSOA they were located in based on their postcode.
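The postcode-based IMD assignment could be sketched as a pair of merges, assuming hypothetical lookup tables mapping postcodes to LSOA codes and LSOA codes to IMD deciles (all values below are invented).

```python
# Sketch: assign each practice the IMD decile of its LSOA via postcode.
import pandas as pd

postcode_to_lsoa = pd.DataFrame({"postcode": ["AB1 2CD", "EF3 4GH"],
                                 "lsoa_code": ["E01000001", "E01000002"]})
imd = pd.DataFrame({"lsoa_code": ["E01000001", "E01000002"],
                    "imd_decile": [3, 8]})

practices = pd.DataFrame({"practice_id": ["A", "B"],
                          "postcode": ["AB1 2CD", "EF3 4GH"]})
practices = (practices
             .merge(postcode_to_lsoa, on="postcode", how="left")
             .merge(imd, on="lsoa_code", how="left"))
print(practices)
```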
As this study used publicly available data aggregated at the GP practice-level, ethical approval was not required.
Analysis
Descriptive statistics are presented for practices which met inclusion criteria. Prescribing patterns were summarised for each month, including the proportion of practices prescribing any mirabegron, mirabegron and OAB prescriptions, and mirabegron as a percentage of OAB prescriptions. We graphed monthly percentiles of mirabegron's percentage share of OAB prescriptions to describe variation over time. Between-practice variation in prescribing before and after the September 2015 DHPC was assessed using the systematic component of variation (SCV) based on practices between the 5th and 95th percentiles of mirabegron prescribing. The SCV estimates the true or non-random part of total variation and performs well as a measure of variation [12][13][14]. Variation is classified as either low (less than 3), moderate (between 3 and 5.4), high (between 5.4 and 10), or very high (greater than 10) [12]. In particular, we examined variation in the six months before and after the DHPC and assessed whether the median SCV differed significantly before and after, using the Wilcoxon ranked sum test. Standardised prescribing ratios (mirabegron's percentage share of all OAB prescriptions in a practice each month divided by the percentage share across all practices each month) were calculated and plotted by month to visually inspect variation. Ratios >1 indicate a higher percentage than average. To examine variation relative to the month of the DHPC, we calculated a rolling average of practices' mean percentage of mirabegron prescriptions over the previous six months, expressed as a ratio of the practice percentage in September 2015. Deciles and 1st to 9th bottom and top percentiles of these ratios were graphed to assess whether the distribution of practices differed before and after the DHPC. This approach has been used previously to assess variation following guidance being issued regarding tamoxifen use [15].
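As an illustration, one common formulation of the SCV (following McPherson and colleagues) and the before/after comparison could be coded as below. The observed/expected counts and monthly SCV values are placeholders, and the exact SCV variant used in the paper may differ in detail.

```python
# One common formulation of the systematic component of variation (SCV);
# O and E are observed and expected counts per practice. Placeholder data.
import numpy as np
from scipy import stats

def scv(observed, expected):
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    return 100 * np.mean((o - e) ** 2 / e ** 2 - 1 / e)

rng = np.random.default_rng(3)
expected = rng.uniform(4, 10, size=200)                        # expected counts
observed = rng.poisson(expected * rng.lognormal(0, 0.5, 200))  # extra variation
print(f"SCV = {scv(observed, expected):.1f}")

# Comparing median monthly SCV before vs after the DHPC with the
# Wilcoxon rank-sum test (placeholder monthly values):
scv_before = [11.8, 12.0, 12.1, 12.3, 11.9, 12.2]
scv_after = [12.5, 12.7, 12.6, 12.9, 12.4, 12.8]
print(stats.ranksums(scv_before, scv_after))
```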
Interrupted time series studies of policy interventions can be analysed using segmented regression, allowing for the change in level and trend of an outcome following an intervention to be evaluated [10]. A multilevel segmented regression model was fitted to account for repeated monthly observations clustered within practices, with monthly mirabegron percentage as the outcome. Random effects (to allow slope and intercept parameters to vary by practice) were included to determine the change in level and trend of prescribing after the DHPC, using an unstructured covariance matrix. Appropriateness of inclusion of random effects was assessed using the likelihood ratio test for the following parameters: level of prescribing pre-safety warning in August 2015 (intercept), the change in level of prescribing immediately post-warning in October 2015 (change in intercept), the monthly trend in prescribing pre-warning (slope), and the change in the monthly trend post-warning (change in slope). Calendar month (as a fixed effect) and a second order autoregressive function were included to account for seasonality.
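A schematic version of this multilevel segmented regression, fitted with statsmodels on simulated data, is shown below. Variable names are assumptions; the sketch codes the intercept at January 2014 rather than August 2015 as in the paper, and it omits the second-order autoregressive error structure, which statsmodels' MixedLM does not support directly.

```python
# Schematic multilevel segmented regression on simulated monthly data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(50):                        # 50 simulated practices
    base = rng.normal(5, 2)
    for t in range(39):                      # Jan 2014 (t=0) .. Mar 2017 (t=38)
        post = int(t >= 21)                  # Oct 2015 onwards
        y = base + 0.3 * t - 0.04 * post * (t - 21) + rng.normal(0, 1)
        rows.append((pid, t, post, post * (t - 21), t % 12, y))
df = pd.DataFrame(rows, columns=["practice_id", "time", "post",
                                 "time_post", "calendar_month", "mirabegron_pct"])

model = smf.mixedlm(
    "mirabegron_pct ~ time + post + time_post + C(calendar_month)",
    data=df,
    groups=df["practice_id"],
    re_formula="~ time + post + time_post",  # random intercept and slopes
)
print(model.fit().summary())
```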
Lastly, the estimated practice-specific parameter for each of the random effects was examined to classify practices according to whether their change in level and change in slope parameters represented a significant increase or decrease (i.e., if the estimate's 95% confidence interval excluded zero). Practice characteristics were examined as predictors of decreases in level or slope using multivariate logistic regression. Characteristics were included as standardised variables (i.e., rescaled to a mean of zero and a standard deviation of one).
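The predictor analysis could be sketched as a logistic regression of a post-DHPC decrease indicator on standardised practice characteristics. All names and data below are illustrative, and the exponentiated coefficients are printed as odds ratios.

```python
# Sketch: logistic regression of a post-DHPC decrease flag on standardised
# practice characteristics (all data and names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
chars = pd.DataFrame({
    "list_size": rng.normal(7000, 2500, n),
    "pct_over_65": rng.normal(17, 6, n),
    "imd_decile": rng.integers(1, 11, n).astype(float),
})
z = (chars - chars.mean()) / chars.std()         # standardise predictors
z["decrease_in_slope"] = rng.integers(0, 2, n)   # placeholder outcome flag

fit = smf.logit("decrease_in_slope ~ list_size + pct_over_65 + imd_decile",
                data=z).fit(disp=0)
print(np.exp(fit.params))                        # odds ratios
```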
Results
This study included 7408 GP practices, which represents 98.4% of practices in England as of September 2016. At baseline, included practices had a median of 6613 registered patients (interquartile range (IQR) 4072-9919). The mean percentage of patients aged 65 years and over was 16.8% (SD 6.5), and on average their patients were 49.9% female (SD 2.3). Practices had a median of four GP FTEs (IQR 2-6.4). On average, 46.5% of GPs in a practice were female (SD 25.9) with 56.2% aged 45 years and over (SD 28.4), while 25.5% of practices had a registrar.
During September 2015, 88.9% of practices prescribed mirabegron and mirabegron comprised a mean of 8.2% (SD 6.8) of OAB prescriptions (median 7.0%, IQR 3.6%-11.1%). This corresponded to a mean of 76 OAB prescriptions, of which 6.2 were mirabegron. Variation between practices was classified as very high and the median SCV did not change significantly (p = 0.11) in the six months after the September 2015 DHPC (12.68) compared to the six months before (12.04). Among practices with any mirabegron prescribing, standardised prescribing ratios in the six months before and after September 2015 ranged from 0.44-14.1. Figure 1 is a dot plot which illustrates little change in variation over the time period. Figure 2 shows a decile plot of mirabegron percentage, indicating the increasing percentage over time, but little change in the distribution across deciles. Figure 3 shows the distribution of practices by mean mirabegron percentage for rolling six-month periods relative to September 2015, and the distribution was relatively symmetrical with respect to the Y-axis before and after the DHPC, suggesting that between-practice variation remained relatively stable.
Segmented regression analysis indicates that before the DHPC, there was a trend of 0.294 (95% confidence interval (CI), 0.287, 0.301) percentage points increase per month in the percentage of OAB drugs prescribed as mirabegron (see Table 1 and Figure 4). There was no significant change in percentage of mirabegron prescribing immediately after the DHPC (−0.023, 95% CI −0.105 to 0.058); however, there was a small but significant reduction in trend (−0.036, 95% CI −0.049 to −0.023) after the DHPC. Examining practice-level random effects, 1.8% of practices had an immediate decrease in level of mirabegron prescribing, while 7.1% had a decrease in slope post-DHPC (see Table 2). Increases in level and slope were observed in 1.9% and 4.5% of practices respectively. Estimated mirabegron prescribing for sub-groups of practices with a decrease in level or slope is shown in Figure S1.
Notes to Table 2: (a) Decrease defined as a practice-level random effect for level/slope where the upper bound of the 95% confidence interval is less than zero. (b) Increase defined as a practice-level random effect for level/slope where the lower bound of the 95% confidence interval is greater than zero.
Table 3 shows practice characteristics associated with decreases in the level of prescribing and slope. A higher number of registered patients, higher proportion of registered patients aged 65 years and over, and deprivation were all associated with lower odds of an immediate decrease in the level of mirabegron prescribing. Similarly, factors associated with lower odds of a decrease in slope included a higher number of registered patients and deprivation.
Discussion
Variation in mirabegron prescribing was high and this did not change significantly following a DHPC. At the beginning of the study period, mirabegron was a relatively new medicine to be authorised, having been approved by the EMA in December 2012 and first prescribed on the English market from March 2013. This may be one explanation for the high variation, as practices may adopt prescribing of new products at different rates [16]. At the time of the DHPC, the vast majority of practices were prescribing mirabegron. There was no immediate prescribing change post-DHPC, and although the monthly growth in mirabegron prescriptions did slow, the magnitude of this change was small. Our study could only evaluate aggregate practice-level prescribing and could not separate prevalent and incident use. The decline in the monthly rate of increase in mirabegron could potentially be attributable to reduced incident use, however any change in mirabegron prescribing among at-risk patients may not have been detectable at the practice-level.
Practices with more registered patients and those in more deprived areas were less likely to have a reduction in the level and trend in mirabegron prescribing. This suggests that some practices have a greater capacity to review and amend prescribing if there are fewer patients or less deprivation, or these practices may have fewer or no patients with uncontrolled hypertension and no reason to alter prescribing. Deprivation and inequality are associated with more complex care need through higher prevalence of multimorbidity and polypharmacy [17,18], and poorer health outcomes [19]. In line with the inverse care law, those most in need of care due to inequalities are often least likely to receive it due to reduced capacity of care providers because of the added complexity of care [20,21]. Similarly, the more older patients registered at a practice, the less likely an immediate reduction in the level of mirabegron prescribing. For practices with older patient populations, reluctance to switch to alternative oral OAB drugs (which may have anticholinergic effects) may have contributed to continued growth in mirabegron prescribing. Mirabegron was a second-line therapy in national guidelines at this time and so many patients may already have not responded to or tolerated an alternative OAB drug. No alternatives were recommended in the DHPC, which reflects the limited therapeutic options for OAB available to prescribers caring for older patients, among whom the prevalence of hypertension is high. Alternative medicines for OAB are antimuscarinic, and older adults are particularly susceptible to the anticholinergic effects of such agents [3], and further risks identified in recent years include dementia and cognitive decline [22]. However this evidence is primarily derived from observational research which considers all OAB drugs together, whereas newer agents such as darifenacin may not carry this effect [23]. It is also possible that a doctor and patient for whom mirabegron may be cautioned could decide that the benefits of continuing mirabegron for OAB may outweigh the potential cardiovascular risks. This may be more likely given the relatively small number of cases which prompted the DHPC and that further confirmatory studies have yet to be completed.
Although this appears to be the first study to evaluate variation in response to a DHPC regarding a medication, previous research evaluating indicators of prescribing safety, high-risk prescribing, and antipsychotic prescribing in UK general practice has found variation between practices was similarly high [7,24,25]. Although several studies have evaluated the impact of DHPCs on a range of outcomes, these have not assessed variation between healthcare professionals [5,6,26,27]. Evaluating the effectiveness of risk communication has become a focus area in recent years [28], as evidenced by the Strengthening Collaboration for Operating Pharmacovigilance in Europe (SCOPE) Joint Action which involves medicine regulators across Europe [29]. An understanding of variation in and the determinants of response to regulatory safety communications among GP practices, and ideally individual general practitioners, may allow for those that do not alter their prescribing to be provided with tailored information and supports to promote safe medication use. There is evidence that there is variation between countries in Europe in GP preferences for the format of safety communications [30]. Despite this, it appears that DHPCs represent the most common source of awareness of medicines' safety issues among European GPs, pharmacists and cardiologists [31]. Previous research has indicated that such communications have greater impact on non-specialist drugs and for safety issues with a risk of death and/or disability [8], however to date, the relationship between the characteristics of DHPCs and variation in outcomes has not been evaluated. Further research should also evaluate additional interventions to communicate safety information to healthcare professionals in cluster or stepped wedge randomised controlled trials. The only such study to date examined the effect of an additional email on the effectiveness of a DHPC in the Netherlands [32]. Depending on the timing and formats of future DHPCs, this may present opportunities to evaluate the effectiveness of such communications in natural experiments [6].
Systematic reviews illustrate that relatively few evaluations of regulatory actions have been undertaken [5,[33][34][35]. Regulatory actions relating to a range of therapeutic agents have been evaluated, with antidepressants being the most commonly examined [33,35]. A substantial proportion of such studies used study designs and analytical approaches which yield low quality evidence of the effects of pharmacovigilance actions i.e., cross-sectional or before and after studies [33,34]. Unlike more methodologically robust interrupted time series studies, these studies do not consider trends in outcome and thus may overestimate the impact of an intervention of interest [10,36]. Therefore, the methodological approach may have an impact on findings, as studies using more robust design tended to report more conservative or mixed impacts of regulatory actions, like the present study [5,33,34].
Evaluating medication utilisation using prescribing or dispensing data is just one way of evaluating such regulatory actions. Other quantitative evaluation could measure changes in adverse outcomes relating to uncontrolled hypertension or cerebrovascular events in the case of mirabegron, or unintended consequences such as inappropriate switching to another OAB drug. Recent proposals have outlined a framework approach to evaluation, including quantitative and qualitative analysis of traditional and social media uptake of the communications, qualitative research with healthcare professionals and patients, as well as more traditional quantitative measures of process and outcomes [37]. Behaviour change and implementation science is a growing area of focus for regulatory bodies in pharmacovigilance and risk minimisation programs [38]. This reflects that moving from awareness of a regulatory safety communication to implementation in clinical practice is complex, with decay at each step in the process [39]. Similarly, the use of complex interventions to support adoption of regulatory safety warnings may increase their impact. For example, this could involve integrating emerging safety communication within computer decision support systems in electronic health records to flag warnings relevant to specific patients during clinical workflow. However, evidence on the effectiveness of computer warnings is mixed, and requiring a reason to override messages may improve effectiveness at the expense of potential alert fatigue [40]. Frameworks such as the adoption of innovation (i.e., innovation, communication channels, time and adoption process, and social systems) could be considered by regulatory agencies to optimise scale-up, spread, and uptake of regulatory actions and communications [41].
This study has a number of strengths. It appears to be the first to assess the impact of the DHPC on mirabegron's cardiovascular risks on utilisation patterns of OAB drugs in a large primary care cohort. We have also used the most robust method possible for evaluating temporal changes in prescribing. A limitation of this study is the lack of patient-level data, which prevented analysis of mirabegron prescribing among the at-risk patient groups referred to in the DHPC. It is possible that prescribers reduced the use of mirabegron in at-risk populations in a way that would not be detectable if accompanied by a concurrent rise in mirabegron prescribing to other patients in the practice. The lack of patient-level data also precluded analysis of patient-level characteristics and their association with cessation of mirabegron among prevalent at-risk patients. All of the characteristics examined were at the practice level, with the exception of deprivation of the practice area. While this may indicate the deprivation of the practice population, there is potential for ecological bias in that registered patients may not have been deprived despite the practice being located in a deprived area. We were also unable to examine patient-level changes in prescribing to determine whether reductions in mirabegron use were appropriately targeted at patients most at risk of cardiovascular harms. Inappropriate switching to other OAB drugs in patients who already had a high anticholinergic burden could, as an unintended consequence, have resulted in increased net harm. Despite these limitations, this appears to be the first study to evaluate variation between GP practices in response to a DHPC, which may be an important consideration for future pharmacovigilance research.
Conclusions
While variation in healthcare has received much attention in recent decades, this has not extended to variation in response to regulatory safety communications regarding medications. As medicine regulators develop further strategies to improve the impact of DHPCs on clinical practice, heterogeneity between prescribers in response to such warnings will become an important consideration.
Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/7/10/320/s1, Figure S1: Estimated values for proportion of mirabegron prescribing without and with the direct healthcare professional communication (DHPC), graphed by significant reduction in slope or level post-DHPC; Table S1: All agents considered as overactive bladder drugs.
"year": 2018,
"sha1": "18aa06127c7f7399a14939b550ef2de0996ef696",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/7/10/320/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8e6f83034c1fe0b5d7832365796d1268d58a5e6",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Protease targeted COVID-19 drug discovery and its challenges: Insight into viral main protease (Mpro) and papain-like protease (PLpro) inhibitors
Introduction
In late December 2019, the newly emerged, highly contagious novel coronavirus disease 2019 (COVID-19) was identified in humans. 1,2 The outbreak of this virus, which contains a single positive-stranded RNA genome, first occurred in Wuhan, China, and the virus was named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). 3-10 Millions of cases have been registered worldwide. 11,12 According to the World Health Organization (WHO), millions of confirmed cases and deaths from this highly transmissible disease have so far been reported from 216 countries around the globe. 12 Currently, this virus is far more contagious and more catastrophic than other flu viruses, with symptoms including fever, cough, pneumonia, nausea, and fatigue. 13 Hence, the World Health Organization was forced to declare a global health emergency to organize scientific and medical efforts to quickly develop a cure for patients. 14 Presently, there is no specific targeted therapy against this novel virus. Thus, the scientific community is making great efforts to explore diverse mechanisms to restrict virus replication. As a result, diverse antiviral drugs developed for similar viral infections were tested on patients. Several drugs, such as remdesivir (designed for the Ebola virus), 14 lopinavir/ritonavir (designed for HIV), 15 chloroquine and hydroxychloroquine (designed for anti-malarial action) 14 and tocilizumab (designed for rheumatoid arthritis), 16 were found to be effective against this deadly virus, but their efficacy still remains controversial. 17 The current impact of the COVID-19 outbreak and the possibility of forthcoming CoV epidemics prove that there is a need for rapid discovery of anti-COVID-19 drugs. Recent studies revealed that SARS-CoV-2 has a comparable genomic pattern to other coronaviruses. 18

Abbreviations: 3CLpro, 3C-like protease or main protease; CoV, coronavirus; COVID-19, coronavirus disease 2019; E protein, envelope protein; EBOV, Ebola virus; Mpro, main protease; M protein, membrane protein; MERS-CoV, Middle East respiratory syndrome coronavirus; N protein, nucleocapsid protein; Nsp, non-structural proteins; NTD, N-terminal domain; ORF, open reading frame; PLpro, papain-like protease; QSAR, quantitative structure-activity relationship; RdRp, RNA-dependent RNA polymerase; S protein, spike protein; SAR, structure-activity relationship; SARS-CoV, severe acute respiratory syndrome coronavirus; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2; SPCI, structural and physico-chemical interpretation; WHO, World Health Organization.
These viruses mainly comprise a 5′-untranslated region (UTR), a replicase complex encoding non-structural proteins (nsps), the spike protein (S) gene, the envelope protein (E) gene, the membrane protein (M) gene, the nucleocapsid protein (N) gene, a 3′-UTR, and numerous uncharacterized non-structural parts which provide support against environmental factors. 19-22 During their lifecycle, these viruses produce several polyproteins whose proteolytic breakdown yields about 20 additional proteins. Among them, two crucial proteases, the main protease (Mpro) and the papain-like protease (PLpro), are vital for virus replication. 22-24 Meanwhile, tremendous effort has been spent on studying these proteases in order to discover specific inhibitors against this noxious COVID-19. 25-30 Of the two proteases, the coronavirus main protease (Mpro), also recognized as the 3C-like protease (3CLpro), has received great attention for its significant role in the enzymatic activity leading to the post-translational processing of the replicase polyproteins. 26,27 The Mpro is 306 amino acids long and has high structural and sequence resemblance to SARS-CoV Mpro. 25 The SARS-CoV-2 Mpro monomer comprises three domains (i.e., N-terminal domain-I, N-terminal domain-II, and C-terminal domain-III). 25 The Mpro active site contains a catalytic dyad, C145 and H41 (Fig. 1A-B).
On the other hand, the active site of the papain-like protease (PLpro) consists of a catalytic triad (Fig. 1C-D). PLpro functions by cleaving ISG15, a two-domain Ub-like protein, and Lys48-linked polyUb chains. Hence, its main function lies in the processing of the viral polypeptide into functional proteins; it further deubiquitinates host proteins and dampens host anti-viral reactions by hijacking ubiquitin (Ub), which plays a pivotal role in the host defense mechanism. 31 The two proteases are therefore equally important for the viral lifecycle, which is supported by several studies revealing that most of the coronaviridae genome encodes two polyproteins, pp1a and pp1ab, translated through a ribosomal frameshifting mechanism. 32 These polyproteins are further processed into mature non-structural proteins (nsps) by Mpro and PLpro, which play a vital role in transcription/replication. 33 Targeting these proteases may hence constitute a valid strategy for antiviral drug design and discovery.
In the 21st century, drug repurposing, screening of databases and the design of different inhibitors are the fastest routes of drug discovery to prevent the catastrophe caused by the COVID-19 outbreak. Diverse approaches have also been pursued in order to gain insights into the mechanisms of these proteases and to inhibit their functions, but a lot of groundwork still remains to be done for drug discovery and development against these targets. This study, as part of rational drug design and discovery, 9,10,34-37 aims to sketch out the current status of SARS-CoV-2 protease inhibitor based drug discovery. We also try to provide new insight into coronavirus protease structural biology and discuss the challenges in the development of effective as well as drug-like protease inhibitors. The study aims to stimulate further research by providing useful guidance to medicinal chemists for the design of new protease inhibitors effective against COVID-19 in the near future.
Structural biology of SARS-CoV-2 proteases
CoV is a single-stranded, positive-sense RNA virus whose genome is encapsulated within a membrane envelope. 38-43 The spike glycoprotein of CoV regulates its entry into the host cells. 43-45 Two polyproteins (pp1a and pp1ab) are translated after virion entry into the host cells and are promptly split by the two viral proteases, Mpro and PLpro. 46 Further proteolytic cleavage of these two viral polyproteins results in sixteen non-structural proteins (nsp1 to nsp16). PLpro manages the proteolytic cleavage of nsp1-3, whereas all junctions downstream of nsp4 are cleaved by Mpro.
The Mpro cleaves at no fewer than 11 sites on the large polyprotein 1ab with the recognition sequence L-Q↓(S, A, G) (↓ denotes the cleavage site). 25 The Mpro of SARS-CoV-2 is a 67.6 kDa homodimeric cysteine protease with high sequence identity to SARS-CoV Mpro (Fig. 1). The Mpro of CoV forms a dimer in which each monomer consists of an N-terminal catalytic region and a C-terminal region. The N-terminal residues form a typical chymotrypsin fold, while the C-terminal residues form an extra domain. Each protomer contains three domains: domain I (residues 8-101), domain II (residues 102-184) and domain III (residues 201-303). 22 Domains I and II adopt a double β-barrel fold, and the active site is located in a shallow cleft between the two antiparallel β-barrels (Fig. 1). Notably, the C-terminal helical-bundle domain, domain III, may be involved in stabilizing the active homodimer form. The active site can be further divided into several (sub)sites. Notably, the catalytic dyad formed by H41-C145 is observed at the S1 site (Fig. 2). The hydrophobic side chains are found mostly at the S2 and S4 sites. The amino acid residues playing a key role in SARS-CoV-2 Mpro are highlighted in Table 1. Since the sequences of SARS-CoV-2 and SARS-CoV Mpro share 96% identity and the few differences between the two enzymes reside at the surface of the proteins, inhibitors against SARS-CoV Mpro are expected to inhibit SARS-CoV-2 Mpro as well.
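As a minimal illustration of the recognition sequence described above, the following Python sketch scans a peptide string for the L-Q↓(S, A, G) motif. The sequence is a short invented fragment built around the well-known AVLQ↓SGF junction, not the full pp1ab polyprotein.

```python
import re

# Toy sequence containing two instances of the motif, including the
# canonical AVLQ|SGF junction; a real analysis would scan full pp1ab.
seq = "MKTSAVLQSGFRKMAFPSGKVEGCMVQLQAGNATEVPANSTVLSFCAFAVDAAK"

# Mpro recognition pattern L-Q|(S, A, G): cleavage after Q when it is
# preceded by L and followed by S, A or G. The lookahead leaves the
# P1' residue unconsumed so adjacent sites are not missed.
pattern = re.compile(r"LQ(?=[SAG])")

for m in pattern.finditer(seq):
    print(f"predicted cleavage after position {m.end()}: "
          f"{seq[max(0, m.start() - 4):m.end()]}|{seq[m.end():m.end() + 3]}")
```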
The ligand-bound X-ray structure of SARS-CoV-2 PLpro was elucidated only a few days ago. 7 The amino acid residues playing a key role in SARS-CoV-2 PLpro are highlighted in Table 2.
Molecular modeling and in silico virtual screening against SARS-CoV-2
The novel coronavirus pandemic caused by SARS-CoV-2 severely threatens public health globally. In its infancy, little knowledge about the exact molecular mechanisms of the disease was available, obstructing attempts to develop promising anti-viral drugs. 9 Hence, bioinformatics and molecular modeling approaches are the only handy strategies until the precise molecular and structural biology is known.
FDA-approved drugs offer safe alternatives if they exhibit at least modest activity against SARS-CoV-2. Currently, the scientific community is largely focused on the screening of (i) FDA-approved drug databases, (ii) molecules in clinical trials and/or (iii) previously reported coronavirus inhibitors. 9 In silico virtual screening (VS) techniques are an efficient means to explore CoV protease inhibitors. Yu and co-workers 40 reported computational screening findings with regard to the potential binding of luteolin and other natural compounds to Mpro. Notably, luteolin has also been found to bind effectively to other targets (PLpro, spike protein, and RdRp) of SARS-CoV-2. 40,87,88 A vast number of in silico VS studies against SARS-CoV-2 Mpro have been reported over the past months. 26-30 As a detailed description of the molecular modeling studies is out of scope for this communication, readers interested in learning more about recent molecular modeling studies to identify probable CoV protease inhibitors are directed to the mentioned references and others.
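As a hedged illustration of one common ligand-based VS step, the sketch below ranks a two-entry toy library against a reference molecule by Tanimoto similarity of Morgan fingerprints using RDKit. The query and library compounds are simple placeholder drugs, not the actual screening sets used in the cited studies.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Reference inhibitor stand-in (aspirin used purely as a simple placeholder).
query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

# Hypothetical two-entry "database"; a real screen would iterate over an
# FDA-approved drug library in SMILES format.
library = {
    "ibuprofen":   "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "paracetamol": "CC(=O)Nc1ccc(O)cc1",
}

fp_q = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smi in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(fp_q, fp), 3))
```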
Since SARS-CoV-2 Mpro shares about 96% sequence similarity with SARS-CoV Mpro, previously reported SARS-CoV Mpro inhibitors have a huge prospect of showing efficacy against SARS-CoV-2 Mpro as well. By May this year, we had advanced our rational anti-viral drug design efforts through data mining and molecular docking studies. 10 In one endeavour, our research team explored the crucial structural fingerprints modulating SARS-CoV PLpro inhibitory activities with the aid of 2D-QSAR, SPCI analysis as well as Monte Carlo optimization based QSAR. Further, QSAR-derived virtual screening of some in-house molecules was performed, which rendered some important hits.
What efforts have been taken to identify COVID-19 protease inhibitors?
In February 2020, the first crystal structure of the SARS-CoV-2 Mpro (PDB: 6LU7) with the covalent inhibitor N3 (Fig. 3) was reported by Jin and co-workers. 3 The isobutyl function of N3 embeds itself in the hydrophobic S2 site formed by H41, M49, and M169 (Fig. 4A-B). This study forms the basis for rapid target-based discovery of lead molecules against the 2019-nCoV Mpro.
A comparative analysis of the inhibitor-bound CoV Mpro crystal structures suggested that a peptidomimetic inhibitor acts like a 'sword', while baicalein sets a 'shield' near the two catalytic dyad residues to restrict the binding of the substrate. In addition, baicalein was also found promising in an enzymatic assay against SARS-CoV-2 Mpro (IC50 = 0.94 µM). It also exhibited a dose-dependent inhibition of the replication of SARS-CoV-2, with a half-maximal effective concentration (EC50) of 1.69 µM. 8 The unique binding mode and promising ligand-binding efficiency of baicalein will inspire researchers towards further lead optimization. Structure-based design, synthesis and activity assessment by Zhang et al. facilitated the development of peptidomimetic α-ketoamides as broad-spectrum inhibitors of betacoronavirus and alphacoronavirus Mpro. 4 The most potent compound of this series, 11r, displayed an EC50 of 400 pM against MERS-CoV in virus-infected Huh7 cells (Fig. 6). Notably, 11r exhibited broad-spectrum anti-viral activity due to its P2 cyclohexyl moiety, which was designed to fit the pocket in the enterovirus Mpro. In another study, the same group modified the chemical structure of 11r by replacing the hydrophobic cinnamoyl moiety with the comparatively less hydrophobic Boc group and concealing the P3-P2 amide bond within a six-membered pyridone ring. 25 This led to the development of 13a (Fig. 6) with improved solubility in plasma and reduced binding to plasma proteins; however, the SARS-CoV-2 Mpro inhibition was compromised (13a: SARS-CoV-2 Mpro IC50 = 2.39 µM vs 11r: SARS-CoV-2 Mpro IC50 = 0.18 µM). Further replacement of the P2 cyclohexyl moiety of 13a with cyclopropyl resulted in an increase in anti-viral activity against SARS-CoV-2 Mpro (13b: IC50 = 0.67 µM, Fig. 6). Molecule 13b also showed potency against the Mpro of SARS-CoV (IC50 = 0.90 µM) and MERS-CoV (IC50 = 0.58 µM). Furthermore, the X-ray crystal structure of 13b-bound SARS-CoV-2 Mpro revealed that the carbonyl oxygen of the pyridone in the P3-P2 position forms a hydrogen bond with the main-chain amide of E166. Although the protecting Boc group at P3 is unable to occupy the canonical S4 site of the protease, it was found close enough to P168 and was consequently directed outward by more than 2 Å relative to the apo-Mpro structure of SARS-CoV-2. Further removal of the Boc group in compound 14b (Fig. 6) led to a drop in the inhibitory action, suggesting that the hydrophobicity and bulkiness of the Boc group are important for crossing the cellular membrane. 25 Dai et al. elucidated two crystal structures of SARS-CoV-2 Mpro in complex with two indole-based covalent inhibitors (Fig. 6) at high resolution. 21 The indole ring of 11a at P3 occupied the solvent-exposed S4 site to form a 2.6-Å hydrogen bond with E166, along with hydrophobic interactions with the side chains of residues P168 and Q189. Since the S2 site of CoV Mpro accommodates bulky P2 fragments, the cyclohexyl moiety of 11a is buried snugly into the S2 pocket of SARS-CoV-2 Mpro, stacking with the imidazole ring of H41. It also interacts with the side chains of M49, Y54, M165, D187 and R188. In contrast, the fluorophenyl function of 11b at P2 undergoes a significant downward rotation and forms an additional hydrogen bond with Q189, which is likely to enhance the Mpro inhibitory activity. Notably, the aldehyde functions of both 11a and 11b act as a warhead at P1 to form a covalent bond with the catalytic cysteine residue. Moreover, the (S)-γ-lactam ring immerses into the S1 site of CoV Mpro to form several hydrogen bonds with H163, F140 and E166.
Both of these inhibitors displayed excellent Mpro inhibitory activities (11a: SARS-CoV-2 Mpro IC50 = 0.053 µM; 11b: SARS-CoV-2 Mpro IC50 = 0.040 µM) along with good PK properties in vivo. 21 Hence, from the above studies 3,17,21,25 it may be observed that the S2 site of SARS-CoV-2 Mpro can accommodate a broad range of hydrophobic substitutions. The isobutyl, cyclopropyl, cyclohexyl and 3-fluorophenyl moieties of the inhibitors embed themselves in the hydrophobic S2 site formed by H41, M49, and M169. 17 Despite huge research efforts on SARS-CoV-2 Mpro inhibitors, proteomic and structural biology work on SARS-CoV-2 PLpro and its inhibitors has been very limited. Nevertheless, Rut and co-workers utilised HyCoSuL (Hybrid Combinatorial Substrate Library) to scrutinize the substrate specificity of the SARS-CoV-2 PLpro enzyme. 7 Two irreversible inhibitors, VIR250 and VIR251, having a high degree of PLpro selectivity over other proteases, were identified (Fig. 7). The same study reported, for the first time, the inhibitor-bound crystal structures of SARS-CoV-2 PLpro. Altogether, these crystal structures in complex with VIR250 (PDB: 6WUU) and VIR251 (PDB: 6WX4) provide a basis for rapid rational drug design against SARS-CoV-2 PLpro (Fig. 4C-D).
Challenges in drug discovery efforts for SARS-CoV-2 protease inhibitors
Coronavirus protease inhibitor discovery efforts targeting Mpro and PLpro have presented a substantial challenge owing to the poor pharmacokinetic properties of peptidomimetic/macromolecular compounds and the low inhibitory potency of non-peptidomimetic and/or low-molecular-weight compounds. 91,92 To be an effective drug or drug candidate, a molecule must not only be able to reach its desired target in the body in sufficient concentration, but also elicit the expected biological responses. Drug discovery and development markedly depend on the assessment of absorption, distribution, metabolism and excretion (ADME) characteristics. Notably, the macromolecular approach to developing SARS-CoV-2 Mpro and PLpro inhibitors has been advantageous over low-molecular-weight compounds in terms of inhibitory potency and selectivity. In fact, the former can occupy different parts of the active sites of the Mpro and PLpro enzymes to manifest potent inhibitory properties. We have collected recently published SARS-CoV-2 Mpro inhibitors and analyzed them for their drug-likeness and other physicochemical properties. From this analysis, we have seen that most of these compounds fail to pass the drug-likeness criteria (Fig. 8A-D).
Molecules including baicalein, disulfiram, carmofur (Fig. 8A), ebselen, tideglusib, shikonin and PX-12 coherently passed the drug-likeness criteria to be suitable drugs, but pose the added challenge of achieving potency and selectivity against the coronavirus proteases. A similar trend is seen with the SARS-CoV-2 PLpro inhibitors.
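A minimal sketch of the kind of drug-likeness check described above, using RDKit to compute Lipinski rule-of-five properties; the carmofur SMILES is taken from public databases and should be treated as illustrative.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

def rule_of_five(smiles: str) -> dict:
    """Return Lipinski rule-of-five properties and a pass/fail verdict."""
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MW":   Descriptors.MolWt(mol),      # molecular weight, Da
        "logP": Crippen.MolLogP(mol),        # calculated lipophilicity
        "HBD":  Lipinski.NumHDonors(mol),    # hydrogen-bond donors
        "HBA":  Lipinski.NumHAcceptors(mol), # hydrogen-bond acceptors
    }
    props["passes"] = (props["MW"] <= 500 and props["logP"] <= 5
                       and props["HBD"] <= 5 and props["HBA"] <= 10)
    return props

# Carmofur is one of the low-molecular-weight Mpro inhibitors named above.
print(rule_of_five("CCCCCCNC(=O)N1C=C(F)C(=O)NC1=O"))
```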
Meanwhile, a pool of 2D and fingerprint descriptors for twenty-five SARS-CoV-2 Mpro inhibitors (those having an exact biological endpoint in Table 3) was calculated to probe linear relationships. A similar analysis has not been possible for the SARS-CoV-2 PLpro inhibitors due to the insufficient number of reported compounds. The correlation of SARS-CoV-2 Mpro inhibitory activity with eleven molecular descriptors (N = 25) at significant p-values is graphically represented in Fig. 8B.
Moreover, properties such as blood-brain-barrier permeability (BBB permeant), the capability to inhibit CYP1A2 (CYP1A2 inhibitor) and CYP2C19 (CYP2C19 inhibitor), as well as the permeability coefficient (log Kp (cm/s)) of these molecules are negatively correlated with their Mpro inhibitory potency (Fig. 8D). Noticeably, most of the dataset molecules with a log Kp (cm/s) value ≥ −6.0 exhibited low to poor activity against Mpro. Hence, this should be addressed in Mpro and PLpro protease-based drug discovery, which could help harness the therapeutic potential against COVID-19.
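The negative correlation described above can be probed with a simple Pearson analysis, as in the following sketch; the numbers are invented stand-ins, not the descriptor values from Table 3.

```python
import numpy as np
from scipy import stats

# Hypothetical mini-dataset: permeability coefficient log Kp (cm/s) and
# pIC50 values for a handful of Mpro inhibitors (invented numbers used
# purely to illustrate the correlation test described in the text).
log_kp = np.array([-6.8, -6.2, -5.9, -5.5, -7.1, -6.5])
pic50  = np.array([ 6.3,  5.9,  5.1,  4.8,  6.6,  6.0])

r, p = stats.pearsonr(log_kp, pic50)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # negative r mirrors the Fig. 8D trend
```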
Conclusion and future perspective
The COVID-19 disease is only a few months old. Until the precise molecular and structural biology underlying SARS-CoV-2 replication is available, bioinformatics and multi-target molecular modeling driven in vitro antiviral studies, as well as repurposing of previous SARS-CoV protease inhibitors, are the handy strategies.
From the SARs, it may be postulated that peptidomimetic and/or covalent coronavirus protease inhibitors possess potent and selective active-site inhibition. However, these inhibitors exhibit poor absorption, distribution, metabolism, and excretion as well as toxicology parameters for a drug/drug-like molecule. 92 Consequently, repurposing and new protease inhibitor discovery efforts based on peptidomimetic compounds present a substantial challenge owing to poor pharmacokinetic properties. On the other hand, non-peptidomimetic and/or low-molecular-weight compounds coherently pass the drug-likeness criteria to be suitable drugs, but pose the challenge of achieving potency and selectivity against the coronavirus proteases. Hence, lead optimization of non-peptidomimetic and/or low-molecular-weight compounds should be the focus. In this scenario, fragment-based drug design (FBDD) approaches can play a significant role in designing and developing potential protease inhibitors. An effective strategy for the discovery of potential protease inhibitors may consist of the following steps: step 1: identification of low-molecular-weight compounds as protease inhibitors; step 2: identification of good fragments from peptidomimetic compounds by different experimental and computational methods; step 3: incorporation of good fragments during the lead optimization of the low-molecular-weight compounds; step 4: final optimization of these hybrid molecules for satisfactory pharmacokinetic and pharmacodynamic properties. A masterful combination of adequate pharmacokinetic properties with coronavirus protease activity as well as selectivity will provide strong drug candidates in the future.
Based on recent mechanistic and structural data on other viral proteases, including those of HIV, we suggest targeting the allosteric sites of coronavirus proteases as a strategy-based drug discovery tool. 92 This effort may soon emerge as a frontier in SARS-CoV-2 Mpro and PLpro drug discovery to triumph in the battle against COVID-19.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2020,
"sha1": "f04ce5167c7913d03d7fc8d3085f87e0a560c474",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.bmc.2020.115860",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "777beb0f88c4a19d31324b9852616c39fe9fe3ec",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Candidate Events in a Search for Muon Antineutrino to Electron Antineutrino Oscillations
A search for $\bar{\nu}_e$'s in excess of the number expected from conventional sources has been made using the Liquid Scintillator Neutrino Detector, located 30 m from a proton beam dump at LAMPF. A $\bar{\nu}_e$ signal was detected via the reaction $\bar{\nu}_e\, p \rightarrow e^{+}\, n$ with $e^+$ energy between 36 and 60 MeV, followed by a $\gamma$ from $np \rightarrow d\gamma$ (2.2 MeV). Using strict cuts to identify $\gamma$'s correlated with positrons results in a signal of 9 events, with an expected background of $2.1 \pm 0.3$. A likelihood fit to the entire $e^+$ sample yields a total excess of $16.4^{+9.7}_{-8.9} \pm 3.3$ events, where the second uncertainty is systematic. If this excess is attributed to neutrino oscillations of the type $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$, it corresponds to an oscillation probability of $(0.34^{+0.20}_{-0.18} \pm 0.07)\%$.
Neutrino mass is a central issue for particle physics, because neutrinos are massless in the Standard Model, and for cosmology, because the relic neutrinos, if massive, would have profound effects on the structure of the universe. To search for such mass, an experiment has been carried out using neutrinos from $\pi$ and $\mu$ decay at rest from the Los Alamos Meson Physics Facility (LAMPF) beam stop. Observation of $\bar{\nu}_e$ production above that expected from conventional processes may be interpreted as evidence for $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$ oscillations (and hence mass) or some direct lepton-number-violating process.
Protons from the LAMPF 800-MeV linac produce pions in a 30-cm-long water target positioned approximately 1 m upstream from the copper beam stop. [1] The beam stop provides a source of $\bar{\nu}_\mu$ via $\pi^+ \rightarrow \mu^+ \nu_\mu$ followed by $\mu^+ \rightarrow e^+ \nu_e \bar{\nu}_\mu$ decay-at-rest; the relative $\bar{\nu}_e$ yield is $\sim 4\times10^{-4}$ [2] for $E_\nu > 36$ MeV. The Liquid Scintillator Neutrino Detector (LSND) detects $\bar{\nu}_e$ by $\bar{\nu}_e\, p \rightarrow e^+\, n$, followed by a $\gamma$ from $np \rightarrow d\gamma$ (2.2 MeV). Requiring an electron energy above 36 MeV eliminates most of the accidental background from $\nu_e\, {}^{12}\mathrm{C} \rightarrow e^-\, X$, while the upper energy requirement of 60 MeV allows for the $\bar{\nu}_\mu$ endpoint plus energy resolution.
The 7691 coulombs of protons were obtained in a 1.5-month run in 1993 and a 3.5-month run in 1994. The calculated $\bar{\nu}_\mu$ decay-at-rest flux [3] totaled $3.75 \times 10^{13}\ \nu/\mathrm{cm}^2$ at the center of the tank, with an uncertainty of 7%.
The center of the detector is 30 m from the neutrino source and is shielded by the equivalent of 9 m of steel. The detector, an approximately cylindrical tank 8.3 m long by 5.7 m in diameter, is under $2\ \mathrm{kg/cm^2}$ of overburden to reduce the cosmic-ray flux and is located at an angle of $12^\circ$ relative to the proton beam direction. On the inside surface of the tank, 1220 8-inch Hamamatsu phototubes provide 25% photocathode coverage with uniform spacing. The tank is filled with 167 metric tons of liquid scintillator consisting of mineral oil and 0.031 g/l of b-PBD. The composition of the liquid is CH$_2$, including 1.1% of $^{13}$C and $\sim 10^{-4}$ of $^{2}$H. The low scintillator concentration allows the detection of both Čerenkov light and scintillation light and yields an attenuation length of more than 20 m for wavelengths greater than 400 nm. A sample of $\sim 10^6$ electrons from cosmic-ray muon decays in the tank was used to determine the electron energy scale and resolution. A typical electron at the end-point energy of 52.8 MeV leads to $\sim 1750$ photoelectrons, of which $\sim 300$ are in the Čerenkov cone. The phototube time and pulse height signals are used to reconstruct the electron track with an average r.m.s. position resolution of $\sim 30$ cm, an angular resolution of $\sim 12^\circ$, and an energy resolution of $\sim 7\%$. A liquid-scintillator veto shield [4] surrounds the detector tank with 292 5-inch phototubes.
Particle identification (PID) for relativistic particles is based upon the Čerenkov cone and the time distribution of the light, [5] which is broader for non-relativistic particles. Three PID quantities are used: the Čerenkov cone fit quality, the event position fit quality, and the fraction of phototubes hit at a time corresponding to light emitted more than 12 ns later than the reconstructed event time. Comparing electrons from cosmic-ray muon decays with cosmic-ray-produced neutrons of similar deposited energy, a neutron rejection of $\sim 10^{-3}$ is achieved with an electron efficiency of 79%.
Each phototube channel is digitized every 100 ns and the data are stored in a circular buffer. A primary event trigger is generated when the total number of hit phototubes in two consecutive 100 ns periods exceeds 100. However, no primary triggers are allowed for a period of 15.2 µs following veto shield events with > 5 hit veto phototubes, in order to reject electrons from the decay of stopped cosmic-ray muons in the detector. The trigger operates independently of the state of the proton beam, so the beam duty factor of 7.3% allows 13 times more beam-off than beam-on data to be collected. After a primary trigger with > 125 hit phototubes (> 300 in 1993), the threshold is lowered to 21 hit phototubes for a period of 1 ms in order to record the 2.2 MeV $\gamma$ from $np \rightarrow d\gamma$, which has a 186 µs capture time.
In addition, "activity" events are recorded for any event within the previous 51.2 µs and having > 17 hit detector phototubes or > 5 hit veto shield phototubes.
The first step in searching for $\bar{\nu}_e$ interactions is to select electrons (the detector cannot distinguish between electrons and positrons) with more than 300 hit phototubes (highly efficient for energies above 28 MeV), PID information consistent with a $\beta \sim 1$ particle, < 2 veto shield hits, and no "activity" events in the previous 40 µs. The reconstructed position of the track midpoint is required to be > 35 cm from the locus of the phototube faces.
Finally, events with three or more associated $\gamma$'s are consistent with cosmic-ray neutrons and are eliminated. The overall electron selection efficiency is $28 \pm 2\%$. In the $36 < E_e < 60$ MeV energy range, there are 135 such electron events with the beam on and 1140 with the beam off, giving a beam-on excess of $46.1 \pm 11.9$ events.
The second step is to require a correlated 2.2 MeV $\gamma$ with a reconstructed distance, $\Delta r$, within 2.5 m of the electron, a relative time, $\Delta t$, of less than 1 ms (imposed by the trigger), and a number of hit phototubes, $N_\gamma$, between 21 and 50. The efficiency for a neutron to be captured by a free proton and for the 2.2-MeV $\gamma$ to be found by these cuts is 63%. To determine if such a $\gamma$ is correlated with the electron or from an accidental coincidence, a function $R$ of $\Delta r$, $\Delta t$, and $N_\gamma$ is defined to be the ratio of approximate likelihoods for the two hypotheses. Distributions of these quantities for correlated $\gamma$'s are measured using cosmic-ray neutron events. We also compute the $\Delta r$ distribution with a Monte Carlo simulation.
The R distributions for accidental γ's are measured as a function of electron position using the large sample of electrons from cosmic-ray muon decays. The R distributions are shown in Fig. 1a, and Fig. 1b shows the R spectrum for the beam-on minus beam-off data sample.
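A schematic of how such a likelihood ratio $R$ can be computed is sketched below; the probability densities are invented stand-ins for the measured correlated and accidental distributions, and the $N_\gamma$ factor is taken as flat on both sides so that it cancels.

```python
import math

def pdf_correlated(dr_m, dt_us):
    # correlated gammas: close in space, exponential capture time (~186 us);
    # the 0.6 m spatial scale here is an invented illustrative parameter
    return (math.exp(-dr_m / 0.6) / 0.6) * (math.exp(-dt_us / 186.0) / 186.0)

def pdf_accidental(dr_m, dt_us):
    # accidentals: taken roughly uniform over the 2.5 m / 1000 us window
    return (1.0 / 2.5) * (1.0 / 1000.0)

def likelihood_ratio(dr_m, dt_us):
    return pdf_correlated(dr_m, dt_us) / pdf_accidental(dr_m, dt_us)

# a spatially tight, prompt gamma scores much higher than a distant, late one
print(round(likelihood_ratio(0.2, 30.0), 1), round(likelihood_ratio(2.0, 600.0), 2))
```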
Requiring that a $\gamma$ be found with $R > 30$ has an efficiency of 23% for events with a recoil neutron and an accidental rate of 0.6% for events with no recoil neutron. Fig. 2 shows the beam-on minus beam-off energy distribution for events with $R > 30$. There are 9 beam-on and 17 beam-off events between 36 and 60 MeV, corresponding to a beam-on excess of 7.7 events. Table I lists the locations and energies for the 9 beam-on events. When any of the electron selection criteria is relaxed, the background increases slightly, but the beam-on minus beam-off event excess does not change significantly. Table II lists the expected number of background events in the $36 < E_e < 60$ MeV energy range for $R > 30$. The beam-unrelated background is well determined from the thirteen-fold larger data sample collected between accelerator pulses. To set a limit on beam-related neutron backgrounds, events were selected which failed electron PID criteria but were otherwise consistent with the correlated $e$-$\gamma$ signature and in the electron energy range of interest. The yield of beam-related neutron events of this type was < 3% of all neutrons when the beam was on. Applying this ratio to neutrons passing electron PID criteria, the beam-related neutron background is bounded by 0.03 times the total beam-unrelated background, and is thus negligible. The largest neutrino background, due to $\mu^-$ decay at rest in the beam stop followed by $\bar{\nu}_e\, p \rightarrow e^+\, n$ in the detector, is calculated using the Monte Carlo beam simulation [3]. Another background with a recoil neutron arises from $\bar{\nu}_\mu\, p \rightarrow \mu^+\, n$ (including $\bar{\nu}_\mu\, \mathrm{C} \rightarrow \mu^+ n X$) if the muon is lost (due to the "activity" threshold or trigger inefficiency) or if it is misidentified as an electron (e.g., if a fast decay made the $\mu$ and $e$ look like a single particle). This background is determined from our measurement of $\nu_\mu\, \mathrm{C} \rightarrow \mu^- X$ [6] and from our Monte Carlo detector simulation. [7] Finally, the sum of all backgrounds involving accidental $\gamma$'s is computed from the yield of electrons without correlated neutrons, which is measured using the likelihood fit described below. The total estimated beam-related background for $R > 30$ is thus $0.79 \pm 0.12$ events, which implies a net excess of 6.9 events in the $36 < E_e < 60$ MeV energy range. The probability that this excess is due to a statistical fluctuation is $< 10^{-3}$. Cosmic-ray background is especially intense in the outer regions of the detector and where the veto has gaps: beneath the detector (low $y$), and near the lower corner of the upstream end (low $y$ and low $z$). In an effort to find anomalous spatial concentrations of the oscillation candidates, we performed Kolmogorov tests on distributions of various quantities, among which were $y$, distance from the lower upstream corner, and distance from the surface containing the photomultiplier faces. These tests, done both with no photon criteria and with $R > 30$, gave probabilities above 25% of consistency with what is expected, with the exception of one distribution not expected to be sensitive to background; the distribution in $x$, with no photon criteria, had a probability of 4%.
We have also investigated alternative geometric criteria. Removing the 5% of the total volume having $y < -120$ cm and $z < 0$ removes 32% of the beam-off background, and results in a net excess of $20.6^{+9.5}_{-8.7} \pm 4.1$ events, corresponding to an oscillation probability of $(0.45^{+0.21}_{-0.19} \pm 0.10)\%$. None of the $R > 30$ events is in this area of largest beam-off background.
The neutrino oscillation probability for two-generation mixing can be expressed as $P = \sin^2 2\theta\, \sin^2(1.27\, \Delta m^2 L/E)$, where $L$ is the distance (meters) between the reconstructed positron position and the neutrino production point and $E$ is the neutrino energy (MeV) obtained from the measured positron energy and direction. A possible concern is the presence of $R > 30$ events near and above 60 MeV. But the Kolmogorov probability of consistency with, for example, a large-$\Delta m^2$ oscillation hypothesis is 71% for $36 < E_e < 60$ MeV and 13% for $36 < E_e < 80$ MeV (ignoring any possible contribution from decay-in-flight oscillation events).
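For a concrete sense of scale, the sketch below evaluates the two-flavour formula at the approximate LSND baseline and a typical positron energy; the mixing parameters are assumed for illustration only.

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_m, E_mev):
    """Two-flavour appearance probability P = sin^2(2θ) sin^2(1.27 Δm² L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_m / E_mev) ** 2

# Illustrative numbers: baseline L ≈ 30 m, E = 45 MeV, and assumed
# (not measured) parameters sin^2(2θ) = 0.01, Δm² = 1 eV².
print(f"P = {osc_prob(0.01, 1.0, 30.0, 45.0):.5f}")
```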
If the observed excess is due to neutrino oscillations, Fig. 3 shows the allowed region (95% C.L.) of $\sin^2 2\theta$ vs. $\Delta m^2$ from a maximum likelihood fit to the $L/E$ distribution of the 9 beam-on events in the $36 < E_e < 60$ MeV energy range with $R > 30$. The result is renormalized to the measured oscillation probability of 0.34% given above. The fit includes background subtraction, smearing due to positron energy, position, and angular resolutions, and the uncertainty of the neutrino production vertex. The allowed region is not in conflict with previous low energy decay-at-rest neutrino experiments E225 [8] and E645 [9] at LAMPF. Some of the allowed region is excluded by the ongoing KARMEN experiment [10] at ISIS, the E776 experiment at BNL [11], and the Bugey reactor experiment [12].
In conclusion, the LSND experiment observes 9 electron events in the $36 < E_e < 60$ MeV energy range which are correlated in time and space with a low energy $\gamma$. The total estimated background from conventional processes is $2.1 \pm 0.3$ events, so that the probability that the excess is due to a statistical fluctuation is $< 10^{-3}$. If the observed excess is interpreted as $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$ oscillations, it corresponds to an oscillation probability of $(0.34^{+0.20}_{-0.18} \pm 0.07)\%$ for the allowed regions shown in Fig. 3. If the excess is due to direct lepton number violation and the spectrum of $\bar{\nu}_e$ is the same as for $\bar{\nu}_\mu$ in $\mu^+$ decay, then the violation rate is the same as the above oscillation probability. We plan to collect more data, and backgrounds and detector performance continue to be studied. These efforts are expected to improve the understanding of the phenomena described here.
"year": 1995,
"sha1": "38ad562f1c0a4c98e7c8aa2d984c02a1b35bba3f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-ex/9504002",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e0862ec33cb65a648f493a5440e68e2b7b2a662f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Mechanical assessment of two hybrid plate designs for pancarpal canine arthrodesis under cyclic loading
Pancarpal canine arthrodesis (PCA) sets immobilization of all three carpal joints via dorsal plating to result in bony fusion. Whereas the first version of the plate uses a round hole (RH) for the radiocarpal (RC) screw region, its modification into an oval hole (OH) in a later version improves versatility in surgical application. The aim of this study was to mechanically investigate the fatigue life of the PCA plate types implementing these two features–PCA-RH and PCA-OH. Ten PCA-RH and 20 PCA-OH stainless steel (316LVM) plates were assigned to three study groups (n = 10). All plates were pre-bent at 20° and fixed to a canine forelimb model with simulated radius, RC bone and third metacarpal bone. The OH plates were fixed with an RC screw inserted either most proximal (OH-P) or most distal (OH-D). All specimens were cyclically tested at 8 Hz under 320 N loading until failure. Fatigue life outcome measures were cycles to failure and failure mode. Cycles to failure were higher for RH plate fixation (695,264 ± 344,023) versus both OH-P (447,900 ± 176,208) and OH-D (391,822 ± 165,116) plate configurations, being significantly different between RH and OH-D, p = 0.03. No significant difference was detected between OH-P and OH-D configurations, p = 0.09. Despite potential surgical advantages, the shorter fatigue life of the PCA-OH plate design may mitigate its benefits compared to the plate design with a round radiocarpal screw hole. Moreover, the failure risk of plates with an oval hole is increased regardless from the screw position in this hole. Based on these findings, the PCA plate with the current oval radiocarpal screw hole configuration cannot be recommended for clinical use.
Introduction
Pancarpal canine arthrodesis (PCA) sets immobilization of all three carpal joints (the antebrachiocarpal or radiocarpal, the middle carpal or intercarpal, and the carpometacarpal) to result in bony fusion of their joint surfaces (Buote et al., 2009; Ernst, 2012). It is a common and well-established salvage surgical procedure indicated for a variety of carpal disorders including hyperextension injuries, severe fractures, end-stage osteoarthritis, and neurologic deficits (Parker et al., 1981; Gambardella and Griffiths, 1982; Lesser, 2003), and is considered a standard of care procedure in small animal veterinary medicine with the goal to restore reasonable limb function.
Among different treatment options, such as pinning or external fixation, the most common PCA procedure relies on dorsal plating (Chambers and Bjorling, 1982). In this setting, the dorsally applied plate spans all carpal joints while being fixed with screws to the radius, the radiocarpal bone and usually the third metacarpal bone. Although a dorsally positioned plate lies on the compression side of the joints and is not favorable from a biomechanical perspective, dorsal plating is commonly performed due to the ease of the dorsal surgical approach compared to a palmar approach.
Carpal fusion angles between 0° (straight alignment) and 20° of extension have been proposed in the literature (Chambers and Bjorling, 1982; Guillou et al., 2012). The straight fusion alignment limits the risk of plate failure under dynamic loading; however, it is associated with both poorer paw placement during stance and increased tendon pain (Hottinger et al., 1996). Therefore, for a more physiological stance, a radiocarpal (RC) joint fusion angle of approximately 15° to 20° is clinically preferred, which requires bending of the PCA plate. This makes the bent plate more prone to fatigue failure (Lesser, 2003). Indeed, failure of these pancarpal arthrodesis plate types has been reported in the literature with an incidence of 2%-9% (DeCamp et al., 1993; Bristow et al., 2015). Besides plate breakage, construct failures following dorsal plating occur due to screw breakage or loosening and/or bone fracturing at the peripheral ends of the plates (Johnson, 1980; Denny and Barr, 1991; Li et al., 1999; Whitelock et al., 1999). Furthermore, a mismatch between screw and metacarpal bone size is associated with increased risk of metacarpal bone fracture through the corresponding screw (Whitelock et al., 1999; Wininger et al., 2007) when dynamic compression plates (DCPs) or limited-contact DCPs (LC-DCPs) are used. These plates are also considered too bulky for the distal metacarpal region, leading to increased wound dehiscence rates postoperatively (Worth and Bruce, 2008; Guillou et al., 2012).
To overcome the limitations of compression plates, PCA plate designs were specifically devised (Guillou et al., 2012;Zderic et al., 2021). Among them, hybrid plates with distally tapered profile for both width and thickness became popular and accept application of smaller 3rd metacarpal screws for optimized stability and reduced risk of metacarpal fractures and wound dehiscence. However, screw loosening persisted as a major problem with standard compression plates not providing any mechanism to counteract this adverse effect (Clarke et al., 2009;Bokemeyer et al., 2011;Ernst, 2012). For this purpose, taking advantage of both the tapered design and locking plate technology, two new PCA plates were designed (Asimus et al., 2017; LCP Pancarpal Arthrodesis Plates -Surgical Technique, 2019).
The locking plate design includes all combination plate holes to accommodate either standard or locking screws, except the standard RC plate hole. The rationale for the latter is that this critical anchor point allows the surgeon to angle the screw and ensure appropriate placement within the radiocarpal bone; additionally, tightening of this screw pulls the RC bone towards the plate, preventing a caudal displacement of the RC bone. In one plate version, the non-locking RC plate hole was designed as a round hole (RH), while the other version features an oval "sliding" RC plate hole (OH) to allow for adjustments of the plate position (Figure 1), facilitate the surgical use of the plate, and provide improved placement of a screw into the RC bone. Although the mechanical superiority of the round-hole plate versus the oval-hole plate was recently reported in a biomechanical study demonstrating increased plate surface strains next to the OH compared to the RH (Zderic et al., 2021), the biomechanical performance of both plate designs has not been investigated under cyclic loading, which warranted further evaluation under fatigue testing.

FIGURE 1
LCP Pancarpal Arthrodesis Plate 2.7/3.5 mm with round (top) and oval (bottom) standard radiocarpal screw hole, 12 holes, length 151 mm.

FIGURE 2
Pre-bending of round-hole PCA plates to 20° joint fusion angle; (A) Photograph showing a plate fixed within the bending press; (B) A consistent bending angle of all plates was obtained.
Therefore, the aim of this study was to investigate the mechanical behavior of the hybrid PCA plates with oval versus standard round RC plate hole under cyclic loading. We hypothesized that 1) Ovalization of the RC hole will decrease the fatigue life compared to the round-hole plate, and 2) Within the oval-hole plates, moving the screws from proximal to distal will further decrease the fatigue life and increase plate failure probability.
Specimens and preparation
Ten RH and 20 OH hybrid PCA plates were consistently pre-bent to a 20° joint fusion angle using a custom-made bending press (Figure 2). The press was designed to bend the plates consistently and accurately at a point centrally located between the distal radial and the RC holes, and was adapted from previous studies (Guillou et al., 2012; Zderic et al., 2021). All plates were fixed to three cylindrical cotton fabric bone model substitutes (HGW 2082 [Canevasite], Amsler & Frey AG, Schinznach-Dorf, Switzerland) simulating the radius, RC bone and 3rd metacarpus, with lengths resembling a cadaveric specimen (171, 10, and 123 mm, respectively) (Zderic et al., 2021). Each plate was fixed first to the radius by placing 3.5 mm locking screws in plate holes 1 (proximal), 3, 5, and 6, whereas the metacarpus was fixed second by placing 2.7 mm locking screws occupying holes 1 (proximal), 2, and 5. The RC bone was fixed last to the plate by occupying the RC screw hole with a standard 3.5 mm cortical screw. The OH plates were instrumented with the RC screw placed in the oval hole either most proximally (OH-P, n = 10) or most distally (OH-D, n = 10). Pilot holes with diameters of 2.8, 2.0, and 2.5 mm were predrilled into the bone models prior to screw insertion. All 3.5 mm screws (locking and compression/cortex ones) were tightened at a final torque of 1.5 Nm, whereas the 2.7 mm locking screws were locked at 0.8 Nm. All implants were made of implant-grade stainless steel (316LVM), featuring a modulus of elasticity of 186 GPa, a yield strength of ≥690 MPa, and an ultimate tensile strength of 860-1100 MPa (Disegi and Eschbach, 2000), and were produced by the same manufacturer (DePuy Synthes, Zuchwil, Switzerland).
Mechanical testing
Mechanical testing of the specimens was performed using an electrodynamic material testing machine (Acumen III, MTS Systems Corp., Eden Prairie, MN, United States) equipped with a 3 kN load cell. The test setup was adopted from a previous study and is shown in Figure 3 (Zderic et al., 2021). Articulated fixtures with co-axial alignment were used proximally and distally allowing free rotation in the sagittal plane. The distances between the bending point and the proximal and distal articulations of the fixtures were identical for all specimens. This guaranteed that the length of the proximal and distal lever arms with respect to the radio-carpal bone was consistent between specimens. Using a sine profile, the specimens were cyclically loaded in axial compression along the machine axis between 20 N valley load and 320 N peak compression at 8 Hz until construct failure. The latter was defined when the machine transducer reached a test stop criterion of 20 mm axial displacement with respect to its position at the beginning of the cyclic test. The force level and loading protocol were defined in a pilot study, where they were found appropriate to produce fatigue failures in RH plate constructs within a range between 500,000 and 1,000,000 cycles.
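A sketch of the commanded sinusoidal load profile follows; it reproduces the 20 N valley, 320 N peak and 8 Hz frequency stated above, but is purely illustrative and not the controller code of the test machine.

```python
import numpy as np

# Sine load profile: 20 N valley, 320 N peak compression, 8 Hz.
f_hz, valley_n, peak_n = 8.0, 20.0, 320.0
mean = (peak_n + valley_n) / 2.0          # 170 N midload
amp = (peak_n - valley_n) / 2.0           # 150 N amplitude

t = np.linspace(0.0, 1.0 / f_hz, 9)       # one load cycle, 9 sample points
load = mean - amp * np.cos(2.0 * np.pi * f_hz * t)  # starts at the valley
print(np.round(load, 1))                  # 20 N -> 320 N -> 20 N
```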
Data acquisition and analysis
Strain levels inherent in the constructs under the given load magnitudes were assessed based on strain gauge data from a previously performed study (Zderic et al., 2021) measuring maximum plate surface strains on the same constructs under quasi-static loading to 300 N. Those results were considered to define the corresponding minimum strains at the valley load of 20 N, as well as to extrapolate them to the applied peak load level of 320 N. The minimum/maximum strains were further converted to the theoretical stress magnitudes, given the elastic modulus of 186 GPa for implant-grade stainless steel (316LVM) (Disegi and Eschbach, 2000). Furthermore, the strain-force curve of the previous study was considered to calculate the strain energy density of each specimen, taking into account the converted stresses. Finally, the maximum stress was normalized to the common mechanical property parameters for stainless steel, namely yield strength (690 MPa), ultimate tensile strength (1100 MPa), and fatigue strength (480 MPa) (Disegi and Eschbach, 2000).
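The conversion chain described above can be sketched as follows; the input strain value is a placeholder rather than a measurement from this study, while the material constants are those quoted for 316LVM.

```python
E_MPA = 186_000.0  # elastic modulus of 316LVM stainless steel, MPa

def peak_stress(strain_300n_ue):
    """Extrapolate a plate surface strain measured at 300 N (microstrain)
    to the 320 N peak load and convert to stress via sigma = E * epsilon."""
    strain_320n = strain_300n_ue * 320.0 / 300.0     # linear-elastic scaling
    return E_MPA * strain_320n * 1e-6                # microstrain -> MPa

sigma = peak_stress(2800.0)                          # placeholder strain value
for name, limit in [("yield", 690.0), ("UTS", 1100.0), ("fatigue", 480.0)]:
    print(f"{name}: sigma/limit = {sigma / limit:.2f}")
```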
Axial displacement and load were continuously recorded from the machine transducers at 64 Hz throughout testing. The numbers of cycles until construct failure were determined based on the recorded machine data. In addition, the mode of failure of each separate construct was recorded and subsequently analyzed.
FIGURE 3
Test setup with a specimen mounted for mechanical testing, adopted from (Zderic et al., 2021).
Statistical analysis of the parameters of interest was performed using the SPSS software package (version 27, IBM SPSS, Armonk, NY, United States). Mean and standard deviation (SD) were calculated for cycles to failure. The Shapiro-Wilk test was conducted to ascertain normal data distribution within all groups. One-way analysis of variance (ANOVA) with Bonferroni post hoc test was conducted to identify significant differences between the groups. The Pearson correlation test was used to investigate the relation between maximum plate stresses and strain energy density. The level of significance was set at p = 0.05.
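For illustration, the analysis pipeline can be reproduced on synthetic data as sketched below; the samples are randomly generated to loosely match the reported group statistics and are not the actual test data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic cycles-to-failure samples (n = 10 per group), loosely matching
# the reported group means/SDs purely for demonstration.
rh   = rng.normal(695_264, 344_023, 10)
oh_p = rng.normal(447_900, 176_208, 10)
oh_d = rng.normal(375_954, 166_848, 10)

f, p = stats.f_oneway(rh, oh_p, oh_d)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Bonferroni-corrected pairwise t-tests (3 comparisons)
pairs = {"RH vs OH-P": (rh, oh_p), "RH vs OH-D": (rh, oh_d), "OH-P vs OH-D": (oh_p, oh_d)}
for name, (a, b) in pairs.items():
    t, p_raw = stats.ttest_ind(a, b)
    print(f"{name}: Bonferroni-corrected p = {min(1.0, 3 * p_raw):.4f}")
```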
Results
Outcome measures for maximum and minimum plate strains, together with the converted stresses, strain energy density, as well as maximum stress normalization to the common material mechanical parameters, are summarized in Table 1. For both OH configurations, peak plate stresses were overall the highest next to the RC screw and the non-occupied portion of the oval hole. Considering pooled data of all three groups together, the correlation between maximum strains and strain energy density was significant for each measured location (r ≥ 0.969, all p < 0.001). Maximum stresses remained below the yield and ultimate tensile strength in each group. However, the fatigue strength was exceeded for both OH configurations next to the RC screw and next to the non-occupied portion of the oval hole.
The RH plates exhibited the highest number of cycles to failure (695,264 ± 344,023; mean ± SD), followed by the OH-P (447,900 ± 176,208) and OH-D plates (375,954 ± 166,848). The values for RH plates were significantly higher versus OH-D plates (p = 0.028). No significant differences were detected between the OH-P and OH-D groups, p ≥ 0.999 (Figure 4). There was no significant difference in cycles to failure between the RH and OH-P plates (p = 0.092). Constructs predominantly failed by plate fracture at the level of the distal radial combination hole in all groups (RH 6/10, OH-P 8/10, OH-D 8/10). In most cases the fracture occurred at the locking portion of the combination hole, i.e., through the hole of the distal radial locking screw (RH 6/6, OH-P 5/8, OH-D 4/8), followed by fractures through the cortex screw portion of the distal radial combination hole (RH 0/6, OH-P 3/8, OH-D 3/8). In one OH-D specimen, the plate fractured through the RC screw hole. All plate fractures were initiated at the undersurface of the plate. Most plate fractures were located at a single locale on one flank next to a screw hole, whereas any fractures with two locales were on both flanks next to the same screw hole, but asymmetric, with one side exhibiting a larger fracture surface area. There were no plate fractures through the bending point of the plates. Plate fractures were predominantly associated with concurrent distal radial screw fractures at the head-shaft interface (RH 4/6, OH-P 4/8, OH-D 5/8).
The second most common failure mode was fracturing of the 1st and 2nd metacarpal screws at their head-shaft interface, associated with plate lifting-off from the bone model (RH 3/10, OH-P 2/10, OH-D 0/10), which occasionally ended in excessive bending of the plate at the 3rd metacarpal screw hole (RH 1/3, OH-P 1/2).
Finally, the third failure mode was a concomitant fracture of the 3rd and 4th (distal) radial screws at their head-shaft interface, which was associated with plate lifting-off from the radius bone model. Examples of plate failure modes are shown in Figure 5.
Discussion
This study evaluated the biomechanical performance of two hybrid locking plate designs for pancarpal canine arthrodesis with either a round or an oval RC hole design under cyclic loading. In addition, the effect of occupying the two extreme ends of the oval RC plate hole with a screw was investigated. The RH plates were associated with a longer fatigue life compared to both OH constructs, with a significant difference demonstrated between RH and OH-D plates. These results clearly demonstrate that the specific advantage of surgical maneuverability provided by the current iteration of PCA plates with an oval hole comes at the expense of reduced fatigue life. Bearing in mind the relatively high rates of reported implant-related PCA failures (Johnson, 1980; Denny and Barr, 1991; Li et al., 1999), these findings should be considered when applying OH plates to clinical patients.
The reduced biomechanical performance characterizing OH plates is a logical consequence of the lowered area moment of inertia of the plate around its RC hole. However, it is counterintuitive that all but one of the failures occurred in the near periphery of the RC region, namely the distal radial combination hole or the proximal metacarpal holes. This is even more remarkable when considering the residual strains originating from plate contouring, amounting to approximately 30%, as indicated by the finite element analysis from a previous study (Zderic et al., 2021), which would presume material weakening in this region (Figure 6). However, as demonstrated by this previous study, the residual stresses are not limited to the bending point but also exist in proximity of the locking part of the distal radial combination hole. Indeed, these residual stresses may have contributed to the plate failure in this specific region. Another factor favoring these failure locations may be the test setup used, which did not permit contact between the three bone segments (Figure 3). The plate was not constrained in either the RC bone or the bending point region. Thus, using this test setup, the arthrodesis construct acts as a bridge plating construct with the working length defined by the distance between the innermost radial and metacarpal locking screws, upon which the transmitted forces are concentrated. In addition, the plates were pre-contoured to match the clinical process of plate application; however, pre-contoured plates are prone to axial bending, leading to cumulated stresses on the affected screws along their nominal axes. These details may explain why plate lifting-off was a common failure mode. Interestingly, plate lift-off due to screw failure is a clinically observed mode of plate failure. Although the failure modes were predominantly consistent within and across the groups, they were not entirely uniform. This phenomenon can be ascribed to the many factors affecting the failure behavior. The constructs with pre-contoured plates were instrumented with multiple screws and subjected to complex axial loading. It is evident that the outcomes in terms of failure modes would disperse more compared to standardized tests using test coupons, although the parameters for specimen assembly were kept as reproducible as possible. The low number of failure modes detected in our study is attributed to similar failure loads, and each of them is deemed clinically relevant. In this regard, material contamination could be excluded as a source of error, as evidenced by the failure analysis in a previous study, demonstrating that the material fulfilled the rigorous norms for implant-grade stainless steel (Zderic et al., 2022). As a result of these findings, the authors concluded that clinically observed plate failures can instead be ascribed to bone healing disturbances, leading to their extended loading and ultimate failure.

FIGURE 4
Cycles to failure in the three study groups presented in terms of mean value and SD. Circular points indicate cycles to failure for individual specimens. Star shows significant difference.

FIGURE 5
Photographs visualizing failure modes after fatigue testing; (A) asymmetric plate fracture through the locking part of a distal radial combination hole, posterior view; (B) asymmetric plate fracture through the compression part of a distal radial combination hole, anterior view. Debris in the locking part indicates a fractured screw; (C) excessive plate bending at the most distal screw hole following 1st and 2nd metacarpal screw breakage, lateral view; (D) breakage of 3rd and 4th (distal) radial screws at their head-shaft interface, with numbers indicating length scale in centimeters; (E) asymmetric plate fracture through the radiocarpal hole, posterior view.
The relatively high fusion angle of 20° was certainly a contributing factor to plate lift-off. However, there is clinical evidence in the literature supporting a high fusion angle of 20°. A previous retrospective study compared the clinical outcome of pancarpal canine arthrodesis performed with two types of dorsally applied hybrid non-locking plates (Bristow et al., 2015). While the overall postoperative complication rates were similar for both plates (46%), implant failure was reported in 2% of the cases. However, the authors did not assess the plate bending angle, and since both plates feature a built-in 5° distal dorsal taper, it is reasonable to assume that the plates were applied with a 5° bend and without additional intra-operative bending. This assumption is supported by a high rate of postoperative lameness (up to 67%), regardless of plate type.
The locking screw mechanism represents another important factor in the failure behavior of the constructs. Given the angle-stable nature of this fixation, stresses are concentrated in the locking region. As a result, plate fractures through the locking portion of the combination hole, with concomitant screw fracture through the head-shaft interface, would be expected as the most common failure mode.
The investigated plates are unusual hybrid plates, first because of their tapered profile using different screw sizes proximally and distally, but also because of their ability to accommodate either locking or cortex screws, the latter being the only option in the RC hole. One potential solution to address the findings of the present study would be to modify the design of the RC hole into a variable-angle locking hole, allowing for angulated screw placement.
Considering the numbers of cycles to failure registered in this study, it must be acknowledged that fatigue-like behavior, rather than true fatigue behavior, was investigated. In this regard, no typical Wöhler curves could be generated from tests at different stress levels, nor was it possible to indicate a stress level inherent in the plate cross-section, owing to the complex loading scenario and the inhomogeneous material cross-section. However, the load level was selected in a pilot study to achieve failure cycles according to recommended industry standards. True fatigue testing of steel would require cyclic loading over 1,000,000 cycles to reach its endurance limit (ASTM, 1999), although high-cycle fatigue tests are carried out for 10⁷ or more cycles (Campbell, 2012). In our study, we did not reach the endurance limit of the plates; nor was this targeted, as an aggressive loading protocol, tailored to mimic physiological loading, is required to estimate and compare the mechanical competence of the tested constructs.

FIGURE 6
Finite element analysis of the plate pre-bending process demonstrating residual stresses after its completion (Zderic et al., 2021). Residual stresses also exist around the distal radial locking screw hole, as indicated by the red arrow, and are not limited to the bending point.

Arthrodesis is achieved on average within 12 postoperative weeks, but can range from 9 to 30 weeks (Michal et al., 2003). The corresponding number of load cycles lies within a relatively broad range, between approximately 200,000 and 1,600,000 cycles. This would correspond to a period of approximately 12-96 weeks if activity is extrapolated from humans (ASTM, 1999). The overlap of these two ranges indicates that clinical failure cannot be excluded. These presumptions should be considered worst-case scenarios, as the calculations were rather conservative, neglecting the gradual increase in load bearing by the healing bone over time.
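The conversion between load cycles and elapsed time above can be made explicit. The sketch below is a minimal illustration; the activity level of 2,400 load cycles per limb per day is our back-calculation from the quoted figures, not a value stated in this study or in ASTM (1999):

```python
# Sketch of the cycles-to-time conversion discussed above. CYCLES_PER_DAY is
# an assumed activity level (back-calculated from the quoted ranges), not a
# value taken from ASTM (1999) or from the present study.
CYCLES_PER_DAY = 2400

def cycles_to_weeks(cycles: float, cycles_per_day: float = CYCLES_PER_DAY) -> float:
    """Convert a number of load cycles into an equivalent period in weeks."""
    return cycles / (cycles_per_day * 7)

for n in (200_000, 1_600_000):  # cycle range quoted in the text
    print(f"{n:>9,} cycles = {cycles_to_weeks(n):5.1f} weeks")
```

At this assumed rate, 200,000 and 1,600,000 cycles map to roughly 11.9 and 95.2 weeks, consistent with the approximately 12-96 week period quoted above.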
The applied peak force of 320 N was selected based on a front limb ground reaction force of approximately 115% of body weight. Considering that the plates used in this study would be appropriate for dogs weighing between 16 kg and 46 kg, a 320 N peak load would simulate loading conditions for an approximately 28 kg mid-size dog (Li et al., 1999; Clarke et al., 2009; Andreoni et al., 2010).
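As a check on the quoted figure, the implied body mass follows directly from the peak force and the 115% body-weight assumption, taking $g = 9.81\ \mathrm{m\,s^{-2}}$:

$$m = \frac{F}{1.15\,g} = \frac{320\ \mathrm{N}}{1.15 \times 9.81\ \mathrm{m\,s^{-2}}} \approx 28.4\ \mathrm{kg},$$

which lies within the stated 16-46 kg range of dogs for which these plates would be appropriate.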
The present study built upon a previous mechanical evaluation of PCA plates (Zderic et al., 2021). In that study, plates were instrumented with strain gauges at the most sensitive locations, as predicted by a preceding finite element analysis. Constructs were then loaded in the same fashion under axial quasi-static compression over 10 cycles. Although the validated finite element model accurately predicted the weakened properties associated with the oval plate hole, the actual failure locations were not precisely identified. This probably occurred because the authors' focus was limited to the RC region, neglecting the peripheral boundary conditions at the distal radial and proximal metacarpal screw holes.
The peak principal stresses, as theoretically converted from experimentally measured peak plate strains, remained without exception below the yield strength, and predominantly below the fatigue strength. These findings highlight the benefits of strain gauge measurements, which remain an indispensable tool for the prediction of failure loads and the determination of correct load magnitudes in studies investigating the fatigue behavior of different constructs.
The present study and the abovementioned one are, to our knowledge, the only studies on arthrodesis plates using locking technology. Locking compression plates were initially designed to increase construct stability in poor-quality bone and to reduce the rates of screw loosening (Auer et al., 1995). In good-quality bone, such as the canine forelimb simulated here, the potentially overly stiff configuration may lead to effects different from those expected, namely earlier construct failure, as also suggested in previous studies (Fitzpatrick et al., 2009; Rowe-Guthrie et al., 2015).
The test setup in the current study was similar to that used in a previous biomechanical comparison of non-locking hybrid plates versus LC-DCPs (Guillou et al., 2012). Other biomechanical studies focused on the mechanical characterization of locking and non-locking plates (Wininger et al., 2007; Meeson et al., 2012; Rothstock et al., 2012; Rowe-Guthrie et al., 2015). Meeson et al. (2012) subjected plates to quasi-static load to failure and fatigue loading over 10⁶ cycles in two independent test series of four-point bending, concluding that fatigue failure during the convalescence period of an estimated 150,000-200,000 cycles is unlikely. However, the authors only considered straight plates, and the consequences of pre-contouring were not investigated. To date, studies investigating the effect of residual stresses emerging from plate pre-contouring on fatigue performance are lacking. We therefore anticipate that such an investigation, by means of dedicated destructive, semi-destructive or non-destructive techniques, may unveil new insights into this insufficiently explored area.
As with all studies, this study is not without limitations. First, an artificial bone model was used, with material properties that may not represent the complex material characteristics of the several canine bones involved. This simplification, however, helped to determine the plates' fatigue properties. Second, the bone configuration was chosen to simulate a worst-case scenario with no bony interaction between the radiocarpal bone and the radius or metacarpals, which in most clinical cases does not represent the position or contribution of the RC bone relative to the remaining stability of the antebrachial bones. Nonetheless, this model mimics the immediate and short-term postoperative periods, prior to bone healing contributing to construct stability. Third, simplified uniaxial loading was applied at constant load magnitude, whereas the complex multidirectional forces occurring during a true gait were neglected, primarily because these forces in the canine antebrachium remain largely unknown. Fourth, there was no clear association between the number of cycles to failure and the corresponding mode of failure. Again, this reflects to some extent the unpredictable nature of cyclic testing.
Future studies should focus on three aspects to broaden the knowledge of locking arthrodesis plates. First, a direct comparison to non-locking plates is missing, and conclusions on the effectiveness of the locking principle cannot yet be drawn. It would therefore be of paramount importance to provide evidence of their effectiveness over compression plating in terms of implant loosening. Second, the effect of plate pre-shaping on fatigue behavior remains to be explored and could help guide surgeons regarding the degree of acceptable plate pre-contouring. This could be accomplished by comparing the structural and mechanical properties of plates pre-contoured at different angles. Third, plate design iterations with reinforced flanks around the oval RC hole may enhance fatigue life while preserving the positive features of the oval sliding hole.
Conclusion
From a biomechanical perspective, despite the surgical advantages of the PCA plate containing an oval radiocarpal screw hole, its fatigue life under the pre-defined load magnitude of 320 N is shorter than that of the plate design with a round radiocarpal screw hole, which appears to offset the clinical benefit of the oval hole. Moreover, the failure probability of the oval RC-hole plate is increased regardless of the position of the screw in this hole. Based upon these findings, the current iteration of the PCA plate with an oval radiocarpal screw hole should be used with caution, and additional design features are likely necessary to increase the fatigue life of oval RC-hole PCA plates.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
| 2023-03-30T13:04:59.521Z | 2023-03-30T00:00:00.000 | {
"year": 2023,
"sha1": "2fc7b7cb106e6b2fd64c68c9d7845e9879a7b399",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2fc7b7cb106e6b2fd64c68c9d7845e9879a7b399",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
58894349 | pes2o/s2orc | v3-fos-license | Identifying the Macroeconomic Factors Influencing Credit Card Usage in Turkey by Using MARS Method
We aimed to define the macroeconomic factors that have an impact on credit card usage in Turkey. Within this scope, we used quarterly data for the period between 2005:01 and 2016:02. Moreover, the MARS (Multivariate Adaptive Regression Splines) method was used in order to achieve this objective. As a result of the analysis, it was determined that there is a negative relationship between credit card usage and the unemployment rate. Another result of this study is that people in Turkey use credit cards more when interest rates are high. Considering these findings, the Turkish government should focus on these variables in order to increase credit card usage.
Introduction
A credit card is described as a payment instrument that gives consumers the opportunity to buy goods or services without using cash (Roberts & Jones, 2001). The credit card was first developed by the Hotel Credit Letter Company in 1894 in the USA. This credit card could only be used in the tourism sector (Karaca & Yayar, 2012). On the other hand, Setur was the first company to issue a credit card in Turkey, in 1968. This credit card was named the Diners Club Card and its volume was very low (Altan & Göktürk, 2007). However, nowadays, credit cards are becoming more popular in Turkey. According to the reports of the Interbank Card Center in Turkey, credit card usage has grown so rapidly that the number of credit cards increased from 10,045,643 in 1999 to 57,809,641 by August 2016. Besides the growth in credit card numbers, the amount paid by credit card also increased, reaching TL 54,083,000,000 (17,502,588,997 USD) in August 2016. Additionally, 88% of this amount relates to shopping and 12% to cash withdrawals.
Using a credit card has many advantages for consumers. Firstly, consumers can obtain cash within the limit of the credit card. Another advantage is that consumers can purchase a good or service by using a credit card even though they do not have enough money at that moment (Chakravorti, 2003). Moreover, cardholders can benefit from purchasing goods in installments. In addition to these aspects, credit cards also have many advantages for a country's economy. First of all, because credit cards increase consumption in the country, they help to boost economic growth. Furthermore, using credit cards in buying and selling decreases unrecorded sales. Owing to this, the tax revenue of the government will increase (Ludvigson, 1999).
Due to the aspects emphasized above, credit card usage plays a very significant role in improving countries' economies. Therefore, many governments take actions to encourage consumers to use credit cards in their expenditures. Thus, academic studies related to credit cards are very important. Within this context, the main purpose of this study is to define the macroeconomic factors that have an impact on credit card usage in Turkey. In order to achieve this objective, the Multivariate Adaptive Regression Splines (MARS) method was used. As a result of this analysis, it will be possible to make recommendations to increase credit card usage in Turkey. Additionally, this study makes an important contribution to the literature by analyzing this issue with a new and original method.
Literature Review
In the literature, there are many studies that analyze the relationship between credit card usage and macroeconomic variables. Bellotti and Crook (2013) built a model for credit card stress testing including behavioral and macroeconomic data. They used the interest rate, unemployment rate, production index, FTSE 100 index, earnings, retail sales, house prices, consumer confidence, and the retail price index (RPI) as macroeconomic variables. Moreover, Gross and Souleles (2001) also analyzed credit card usage to understand whether liquidity constraints and interest rates are effective on consumer behavior. They found that the interest rate has strong effects in this respect. Furthermore, Agarwal and Liu (2003) showed that the unemployment rate significantly affects credit card delinquency. Ekici and Dunn (2010) identified a negative relationship between credit card debt and the consumption rate. Telyukova (2009) analyzed liquidity, saving, and consumer debt to evaluate the role of liquidity. In addition, Stauffer (2003) demonstrated that credit card usage increases credit demand. Another conclusion of this study is that the interest rate encourages credit card users. Canner and Luckett (1992) and Demirci and Akben Selçuk (2016) also determined that the interest rate is an important factor in credit card choice. Differently from these studies, Ausubel (1991), Steidle (1994), and Akın, Aysan, Kara, and Yildiran (2010) found that the interest rate does not have any effect on increasing credit card usage. While considering these studies, it was understood that in most of the studies in the literature, a regression model was used to achieve the objective. Owing to this situation, it can be said that there is a need for a study in which a new and original model is used.
MARS Method
The Multivariate Adaptive Regression Splines (MARS) method was developed by Jerome Friedman in 1991. This method is used to model the relationship between dependent and independent variables. The equation of the MARS method is shown below:

$$Y = B_0 + \sum_{n=1}^{K} a_n B_n(X) + \varepsilon$$

In the equation above, $Y$ is the dependent variable, whereas $X$ refers to the independent variables. $B_0$ is the constant term, and $a_n$ is the coefficient of the $n$-th basis function $B_n$. $K$ denotes the number of basis functions, while $\varepsilon$ is the error term of the equation. The MARS method has some advantages in comparison with other regression methods. Firstly, it is possible to use many variables in the analysis because there is no multicollinearity problem. Also, it provides meaningful results since independent variables may take different coefficients under different conditions.
The MARS analysis involves two different stages. First, the system creates different models by using basis functions, which are potential functions created from different combinations of the independent variables. This process continues until the most complex model is reached. After that, the system eliminates the basis functions that are unnecessary. As a result of this process, the best model can be obtained (Friedman, 1991).
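To make the two-stage procedure concrete, the sketch below implements a stripped-down forward pass on synthetic data, using the mirrored hinge functions $\max(0, x - t)$ and $\max(0, t - x)$ that serve as MARS basis functions. This is our own minimal illustration (all function and variable names are ours, not from any MARS software), and the backward elimination stage is omitted:

```python
# Minimal sketch of the MARS forward pass: greedily add the pair of mirrored
# hinge basis functions whose knot best reduces the residual sum of squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(x < 4, 2.0 * x, 8.0 + 0.5 * (x - 4)) + rng.normal(0, 0.3, 200)

def hinge_pair(x, knot):
    """Mirrored hinge basis functions max(0, x - t) and max(0, t - x)."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

def forward_pass(x, y, n_pairs=2):
    basis, knots = [np.ones_like(x)], []  # start from the constant term B0
    for _ in range(n_pairs):
        best_sse, best_knot = np.inf, None
        for knot in np.unique(x):  # every observed value is a candidate knot
            B = np.column_stack(basis + list(hinge_pair(x, knot)))
            coef, *_ = np.linalg.lstsq(B, y, rcond=None)
            sse = float(np.sum((y - B @ coef) ** 2))
            if sse < best_sse:
                best_sse, best_knot = sse, knot
        knots.append(best_knot)
        basis += list(hinge_pair(x, best_knot))
    B = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef, knots

coef, knots = forward_pass(x, y)
print("selected knots:", np.round(knots, 2))  # expect a knot near x = 4
```

In the full algorithm, the backward stage would then delete basis functions one at a time and retain the model that minimizes the GCV criterion.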
Analysis Results
In this analysis, the credit card usage amount was used as the dependent variable. It was obtained from the Interbank Card Center of Turkey. Furthermore, we also used five different macroeconomic explanatory variables in this study. An increase in the USD/TL exchange rate reflects volatility in the market; therefore, when this rate increases, credit card usage is expected to decrease. Moreover, higher GDP growth increases people's quality of life in the country, so it is expected to increase credit card usage. When there is a high inflation expectation, people prefer to consume immediately; thus, inflation is expected to raise credit card usage. Also, in the case of a higher interest rate, people opt for using credit cards instead of taking out loans. The main reason is that people will not pay any interest if they pay their total credit card debt on time. Finally, when people become unemployed, they decrease their credit card usage. In the analysis process, the MARS program first created 10 different models, which are detailed in Table 1. The model at the bottom of Table 1 is called the starting model. The system added all possible basis functions to this model until the most complex model was reached. After that, the system eliminated the basis functions that were unnecessary. The resulting best model has four basis functions, two different explanatory variables, the lowest GCV (error) value, and the highest GCV R² value. The details of the best model are given in Table 2.
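For reference, the GCV criterion used to rank these candidate models is, in Friedman's (1991) formulation, the mean squared residual inflated by a complexity penalty, where $N$ is the number of observations and $C(M)$ is a cost that increases with the number of basis functions in model $M$:

$$\mathrm{GCV}(M) = \frac{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{f}_M(x_i)\right)^2}{\left(1 - C(M)/N\right)^2}$$

Under this criterion, a basis function is retained only if its reduction in residual error outweighs the increase in $C(M)$, which is what drives the elimination step described above.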
As can be seen from Table 2, the p values of all basis functions are less than 0.05, which shows that all these functions are statistically significant at the 5% level. In addition, the value of the F test indicates that the model as a whole is statistically significant. Furthermore, the adjusted R-square value indicates that the independent variables can explain 75.8% of the variation in the dependent variable. The basis functions are detailed in Table 3. According to the results of the analysis, two different macroeconomic independent variables affect credit card usage in Turkey. The interest rate is the first significant variable, appearing in basis functions 1, 5 and 9. The sum of the coefficients of these basis functions (-34,459 + 33,686 + 16,723 = 15,950) is positive. This shows that there is a direct relationship between the interest rate and credit card usage. In other words, when the interest rate increases, the usage of credit cards goes up as well. The main reason behind this is that in the case of a higher interest rate, people prefer to use credit cards instead of taking out loans, because there is no interest payment when the total credit card debt is paid on time. Canner and Luckett (1992) and Demirci and Akben Selçuk (2016) reached similar results in their studies. Additionally, it was identified that there is an inverse relationship between the unemployment rate and credit card usage, owing to the negative coefficient (-12,848). This indicates that when people become unemployed, they decrease their credit card usage. Agarwal and Liu (2003) also emphasized this relationship in their study.
Recommendations and Conclusions
In this study, it was aimed to define the macroeconomic variables that have an effect on credit card usage in Turkey. Within this scope, quarterly data for the period between 2005:01 and 2016:02 were analyzed. Furthermore, an analysis using the MARS method was performed to achieve this objective. According to the results of the analysis, it was concluded that the interest rate and the unemployment rate affect credit card usage in Turkey. First of all, it was determined that a higher interest rate increases credit card usage. The main reason for this is that Turkish people prefer to use credit cards instead of taking out loans from banks when interest rates are high, since they do not have to pay interest when they pay the total credit card debt on time. In addition, it was also identified that there is a negative relationship between the unemployment rate and credit card usage in Turkey. The reason for this result is that when people lose their jobs, they minimize their consumption due to financial problems; consequently, credit card usage also declines. Taking these issues into consideration, it can be said that the Turkish government should focus mainly on the interest rate and the unemployment rate in order to develop a policy related to credit card usage. For instance, it may try to decrease the unemployment rate if it wants to increase credit card usage. Another example is that if the aim is to decrease credit card usage, the interest rate may be decreased in order to make loans more attractive to people. | 2018-12-17T23:48:42.092Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "460ffec94e4e4018c0516cc4d93147f381fefc5d",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.17265/1537-1514/2016.12.003",
"oa_status": "HYBRID",
"pdf_src": "ElsevierPush",
"pdf_hash": "560aaa711c1c45289a5c33e3aa6b2fa38699a3b2",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
263675259 | pes2o/s2orc | v3-fos-license | ACTIVITIES AND PROGRAM MODEL
We present our experience using cosplay to engage attendees on the topic of microplastics pollution at the world's largest Comic-Con convention, held annually in San Diego, California, USA. Cosplay is an activity that has gained popularity in the last two decades. Cosplayers wear costumes and fashion accessories, usually representing specific characters from comic books, manga, anime, or superhero franchises. Cosplayer conventions are often large events. For example, Comic-Con International has > 150,000 attendees over a several-day period, and provides a large platform for outreach. Our costumes and accessories were a mix of science (coral polyp costume; microplastics sampling device 'sword') and fantasy (Amphitrite costume, with bracelets and hair made with plastic debris). We found that the novelty factor of our costumes and accessories, not part of the traditional cosplay pantheon of characters, was a captivating way to engage convention attendees. During a 6-hour period in the Exhibit Hall, we dispersed 240 flyers with information on the problem of microplastics pollution and our laboratory's efforts to develop sensing solutions. Engagements lasted 1-8 minutes, with 1-9 attendees at a time. All attendees we engaged took the proffered flyer after
INTRODUCTION
Informal science education can occur in a variety of settings (Miller, 2010; Alpert, 2018), including museums (Van Schijndel, Franse & Raijmakers 2010), aquaria (Matsumoto, 2003), parks (Clary & Wandersee, 2014), visitor centers (LeBron Santos & Pantoja, 2021), and field trips to university natural history museums (Diamond, 2000). In venues like aquaria and museums, a free-choice learning environment (Falk & Dierking, 2002, 2012) is present, where self-directed exploration and learning take place within the confines of the architecture and displays (many interactive) of these venues. Techniques for informal science education range from hands-on activities, e.g., directed activities using advanced technology such as underwater robots (Patterson, Niebuhr & Elliott 2012), serious games on issues such as climate change (Undorf et al., 2020), citizen science data collections like BioBlitz (Agersnap et al., 2022), and pop-culture themed talks (Burks, Deards & DeFrain 2017), to online materials developed specifically for a target audience, e.g., children (Bednarz et al., 2021). Cosplay is a technique that has not been explored well in marine science education and outreach. In contrast, interpreters at museums or visitor centers with a human-history focus are often in costume and many actively engage in role-playing (Oppegaard & Adesope, 2013).
Cosplay is a role-playing activity that has increased in popularity, particularly over the last 20 years (Lamerichs, 2014; Mountfort, Peirson-Smith & Geczy 2018), in part because of the investment in character franchises by major entertainment corporations (Mountfort, Peirson-Smith & Geczy 2019). Cosplayers usually model their costumes after identifiable, although often minor, characters in a particular genre. Subcultures for genres exist both online and in real life (IRL), and cosplayers often participate in several (sub)genres or emulate multiple characters within a genre (Winge, 2019). Although a large commercial marketplace exists for cosplay costumes (Yoko & Groot, 2017), many participants create their costumes themselves as this creative activity is highly valued among serious cosplayers (Crawford & Hancock, 2019).
In 2015, one of the authors (MP) received an invitation from the program team of DC Comics to present experiences living and working in underwater laboratories on a panel at Comic-Con International, held in San Diego, CA. MP had spent 89 days living underwater over 10 missions in the Hydrolab and Aquarius habitats, which were formerly funded by the National Oceanic and Atmospheric Administration. Panels are popular events at cosplay conferences as they allow fans an opportunity to connect with actors, writers, and producers of pop culture (Jenkins, 2012). The panel topics are usually related to something in comics or pop culture. Our panel on the 'Rise of Aqua(wo)man' was tied to the then-upcoming major motion picture Aquaman by Warner Brothers, starring Jason Momoa, which was released in 2018.
Subject matter experts were invited to provide fans of the Aqua(wo)man franchise insights into living and exploring Aqua(wo)man's world. The panel description was written for the cosplay attendees and highlights expectations of the general Comic-Con audience: 'As millions have seen via free diving videos on YouTube, humans have never been closer to becoming aquatic beings, reminiscent of the ideal set by Atlanteans like Aquaman, Namor, and other subsea heroes. The world record for breath holding is now an astonishing 22 minutes, thanks to breakthroughs in physiology and technology, there are now humans like panelist Mandy-Rae Krack (world champion free-diver and record-holder) who have descended to 289 feet on one breath. James Leichter (professor, Scripps Institution of Oceanography-UCSD), Liz Parkinson (dive instructor, Stuart Cove's Dive Bahamas), Mark Patterson (professor, Northeastern University), and moderator Steve Broback (co-founder, Dent the Future) will discuss with panelists how divers are living undersea for extended periods, how science is extending the abilities of humans, and what tips and techniques can make us all a bit more like Aquaman and Aquawoman.' (Comic-Con, 2015).
Co-author Edson had just received a major research award at Northeastern (O'Connell, 2015) for his work developing an autonomous optical method for detecting microplastics in the ocean (Edson & Patterson, 2015), dubbed 'MantaRay'. We decided to capitalize on our lab's presence at Comic-Con International by developing a set of costumes to allow two of us (MP and SP) to conduct outreach on the problem of microplastics and the novel methods for quantifying them using new sensing techniques being developed at Northeastern University.
Microplastics, operationally defined as plastic particles 1-5 mm in dimension, are an insidious pollution issue for marine life (Stubbins et al., 2021) as they can be ingested by many organisms (Carbery, O'Connor & Palanisami 2018). Suspension feeding and deposit feeding organisms are particularly at risk as microplastics can be mistaken for food (Hall et al., 2015). Ingestion reduces caloric intake per unit time, and wastes energy in handling and processing these indigestible particles (Savinelli et al., 2020). Even brief exposures can elevate respiration rates in some species, like blue mussels that also suffer reductions in attachment strength when exposed to microplastics (Green et al., 2019; Waters, 2017). Popular press articles at the time of our Comic-Con experiment were warning of these adverse effects of microplastics on organisms like corals (Milman, 2015) and were followed by reports in the science literature (Rotjan et al., 2019). Our team decided that because coral ecophysiology under climate change had been a focus of the lab's research (Carpenter, Patterson & Bromage 2010; Certner et al., 2017; Williams & Patterson, 2020), the focus of the primary costume would be the threat posed by microplastics to corals, and the unique solution devised at Northeastern to measure microplastics concentrations.
METHODS
The primary costume (Figure 1) was modelled after a single polyp of the scleractinian Montastraea cavernosa, a Caribbean species adept at catching larger zooplankton (Porter, 1974) that is known to ingest microplastics (Hankins, Duffy & Drisco 2018). We used a public domain diagram of coral polyp anatomy (NOAA, 2015) for the overall geometry of the costume. Microplastics pieces were included inside a transparent cutaway of the gastrovascular (digestive) cavity of the polyp costume, located at chest and stomach level on the cosplayer. We also included stuffed animal representations of zooplankton such as the copepod Centropages hamatus (Figure 2), designed by marine biologist/artist Stephanie Wilson (VIMS, 2005) for Giant Microbes (2013).
The inclusion of these elements in the costume allowed us to discuss how microplastics are not typical food, and opened further discussion of the importance of zooplankton to coral health in the era of global warming (Dwyer, 2019; Palardy, Rodrigues & Grottoli 2008).
To further draw attention to the polyp costume's digestive system, we used plastic fiberoptic cables that emit bright light driven by LEDs (available from a variety of online vendors, e.g., Amazon). The cables (1.6 mm diameter) were sewn into flaps representing the sheets of tissue (mesenteries) that partition the interior of a typical coral polyp. The LED lighting was multi-colored, and the controller could be set to flash quickly, flash slowly, or stay on all the time. The overall proportions of the costume ensemble were within 15% of the actual proportions of a coral polyp. The polyp costume was approximately 100X life size, assuming a typical polyp height of 1 cm. The zooplankton and microplastics were also approximately 100X actual size.
Most cosplayers carry accessories of some sort that tie into the theme of the character they are depicting, often a weapon of some kind (Mishou, 2021). Because our lab had been developing new technology to address microplastics pollution, we accessorized the costume with a 'MantaRay' microplastics detector as a 'sword' for the coral character to wield, the prototype of which is shown in Figure 3. We created a PVC tube mockup that could be quickly opened to reveal a scale model resembling the inner workings of the actual prototype. This costume accessory was used to answer questions on how to address the lack of knowledge of where microplastics concentrate in the world ocean because current methods are so labor intensive (Hidalgo-Ruz et al., 2012). To better explain the inner workings of the prototype device, including its principle of operation, the back panel of the polyp costume showed an engineering diagram of the instrument (Figure 4).
The second costume we developed was Amphitrite (Figure 5, lower), a Greek goddess who was the daughter of Nereus and Doris (Roman & Roman, 2010). Amphitrite came to symbolize saltwater (the ocean) itself under Roman mythology, where she was known as Salacia (Demicheli 2007). Amphitrite was chosen to represent the environment in which the coral lives because the world ocean has been polluted at an unimaginable scale with microplastics, with an estimated > 24 trillion pieces present in the ocean (Isobe et al., 2021). We constructed this costume as a shiny blue/green dress with microplastics mixed into its layers, accessorized with a crown ensnared in macroplastics (e.g., a plastic pop bottle). This provided a natural connection for addressing how macroplastics break down into microplastics when we engaged with cosplayers about the design of our costumes. We also discussed how much of the microplastics in the coastal ocean are synthetic fibers from clothing (Mathalon & Hill, 2014). Our costumes were designed over two meetings of the research team, and assembly took place over a 4-day period once costume elements had been procured. The main online source for our costume materials and accessories was Amazon. We prepared an informational flyer about the microplastics problem and our research to distribute to attendees as we engaged them in conversation about microplastics pollution and how our research was addressing the measurement issues surrounding quantification. When emotional connection and/or entertainment are used in a short, easy-to-follow narrative, the audience is more likely to retain the message. These principles were codified in a method known as the And-But-Therefore (ABT) framework for packaging messaging (Olson 2019).
We used this ABT framework during our communications with cosplayers: The ocean is filled with microplastics causing problems for marine life, including corals who eat them accidentally, AND we don't have a complete understanding of microplastics pollution because counting microplastics using older technology was time-consuming and tedious. BUT, at Northeastern we have invented a new way to quickly measure the extent of microplastics pollution. THEREFORE, we can now (more easily) determine where the microplastics are accumulating so that we can provide better advice on how to manage this pollution source and stressor for marine life.
RESULTS
We discovered that our costumes attracted attention during our 6-hour period spent in the Exhibition Hall and in the corridors linking panel venues (Figure 5). Attendees sought us out spontaneously when they could not identify which genre of characters we were representing. Almost all encounters were driven by cosplayers asking us who we were. Our costumes allowed us to introduce the problem of microplastics pollution in an entertaining way and offer information about what can be done to help manage the problem (St. Martin, 2015). Previous research has shown that providing "hope" about serious environmental issues is key to audience engagement and information retention (Park, William & Zurba 2020).
Over the course of six hours, we distributed all 240 flyers. We had 96 encounters in the Exhibit Hall. Although some children were present, all encounters were with adults. The longest interaction time was c. 10 minutes, and the minimum interaction time was c. two minutes.
We noted time to the nearest half minute by looking at our wristwatches as an encounter began and ended. Group size ranged from 1-9 persons, with an average of three people per encounter. There were also numerous encounters outside the Exhibit Hall but we did not track these, as we had exhausted our supply of informational flyers prior to exiting.
DISCUSSION
'In a media environment saturated with information, simply providing facts, no matter how well researched, will not be enough to persuade and inform citizens. Adopting the techniques of interpretation and engagement will help entomologists create more compelling messaging. … My wearing a very large fluffy green bug costume at a Science-Fiction Convention showed my audiences that I shared a social identity with them, and helped me become a "Nerd of Trust".' (Pearson, 2019: 85, 87)

Pearson (2019) believes science as an enterprise has a communications problem. Trust and respect are both necessary aspects of communicating to audiences about science (Fisk & Dupree, 2014). A detailed and amplified critique of how practicing scientists can remedy this failure to communicate is given by Randy Olson in his critically acclaimed book, Don't Be Such a Scientist: Talking Substance in an Age of Style (Olson, 2009). Olson makes the case that in any field where jargon is used to convey specialized knowledge, an ingroup mentality arises that frustrates clear communication with the outgroup (the lay public). Eschewing jargon and embracing narrative is the first step to more effective communication.
Co-author M. Patterson has conducted several hundred outreach events during his academic career at schools, public aquaria, museums, scientific conferences, and marine labs, including 55 live one-hour broadcasts from the underwater habitat Aquarius during the JASON project (JFE, 2000). In contrast to his previous outreach experiences, he found this experiment using cosplay to be the most intense outreach experience of his career to date. The cosplayers engaged were uniformly very focused and intent on assessing the costume elements and listening to our message, and the pace of interactions in the Exhibit Hall was relentless. Users of cosplay for marine education outreach at cosplayer gatherings like comic conventions should be ready to 'be on' for the duration of the event and to be fatigued by the end. Given the energy and time needed to prepare for cosplay, a short period of engagement does not make sense and educators should be prepared to spend ample time interacting with the other cosplayers.
In addition to the intensity of interactions, we predict marine science outreach cosplayers will enjoy the generally positive and welcoming environment. Cosplay differs significantly from the western tradition of masquerade, where costume wearers viewed themselves as their original persona under the costume (Geczy, 2016). Cosplayers see themselves as having a different personality when in costume (Mountfort, Peirson-Smith & Geczy 2018), and this could explain the lack of conflict we observed in this setting, as it would break the illusion of their focused personae.
Subsequent use of the polyp costume by co-author Williams (Figure 6) at the Marine Science Open House Day, an annual event at Northeastern's Marine Science Center attended by over 800 people, provided another opportunity to engage attendees in a more traditional setting for informal science education. Because the polyp costume showed a cutaway gastrovascular cavity, she was able to use it to explain research she was conducting on understanding the environment inside the polyp, where microplastics and zooplankton are processed. When designing a cosplay costume, we recommend considering use beyond the cosplayer events, such as outreach at marine labs or in school settings. Attention to scientific accuracy will enhance the costume's continued use as a teaching tool in a non-cosplay setting.
The motivation for participating in a cosplay convention was to informally gauge whether outreach at this type of event was possible, i.e., can we present scientific material in a fun way in this setting? An important motivation for us was the sheer size of the cosplay event.
Although we were unable to find in the literature best practices for science outreach conducted using cosplay, educators in the field of paleontology have recognized that dinosaur franchises like the Jurassic Park series offer a way to use pop-culture narratives to make science more relatable (Santos et al., 2019). To further the goal of using cosplay as a tool for making science accessible, we offer five recommendations based on our experience:

1. The costume should be simple enough to quickly explain the topic, yet have enough complexity (flashing lights, removable parts, colorful artistic elements) that it draws attention. Cosplayers pride themselves on hand-fabricated efforts that they often spend considerable time assembling, so they will instinctively know whether yours was thrown together quickly or had real effort behind its construction.

2. A memorable accessory, in our case a scale model of the MantaRay microplastics sensor, is worth some thought. Many attendees asked what it was and how it worked; therefore, having a prop as part of the costume can help tell a compelling story.

3. Using a two-part message wherein a problem facing the ocean is presented, followed by a solution, or at least novel research to help find a solution, is a good idea. Much research has shown that public audiences are overwhelmed by the scope of the problems facing the ocean under climate warming and other serious issues, and that without some messaging on 'hope' (a solution to the problem), they tune out. Using the ABT messaging framework developed by Olson (2019) is advised for conciseness, as interactions are time-limited at cosplayer gatherings.

4. A single page or postcard worth of information that repeats the central messages succinctly, ideally using the ABT format of Olson (2019), and includes your contact info, can be a graceful way to end each encounter. Some cosplayer conventions prohibit the distribution of 'promotional material', but if contact info is provided, then your outreach material can be construed as a 'business card', not promotional material.

5. A successful outreach event will provide opportunities to gather photographs for your use later in science communication and reporting to others about the event. Remember it is important and courteous to ask for consent to take photographs of any cosplayers with whom you interact, and to obtain written permission if you anticipate using photos in published works, or for institutional publicity. Check your own organization's policies for such use. Furthermore, permission to take someone's photo is often codified in the code of conduct or the admission policies at cosplayer conventions, and failure to comply can lead to ejection from these events.
The outreach message or science education goal of your cosplayer costume will vary greatly, but this approach has great potential for sharing knowledge about topics in marine science such as the impact of global change in the ocean, how to achieve sustainable fisheries, the value of marine protected areas and biodiversity, and the invasive species problem, among others. This outreach method scales well: with more cosplayers, the team can interact as part of a larger narrative. A multiple-person team can also split up and maximize engagements per hour while minimizing fatigue. Schools could involve students working with teachers using this method, or an entire research group could produce and wear costumes for an event.
The question remains how this method compares with other strategies of informal science education, as measured by impact metrics (NASEM, 2018; Habig et al., 2020). We encourage readers to experiment with this outreach method to see if the 'single exposure' inherent in this approach leads to meaningful retention of the science message (Falk & Dierking, 2010). In other words, can marine science educators become 'Nerds of Trust' (McClain, 2017) using cosplay?
ADDITIONAL FILE
The additional file for this article can be found as follows:

• Appendix A. Screen shot of flyer designed to raise awareness about microplastics pollution and a new invention measuring microplastics, distributed at Comic-Con International 2015 by authors S. Patterson (Amphitrite) and M. Patterson (Coral Man) to attendees in the Exhibit Hall, San Diego Convention Center. (Older contact details for author Edson redacted.) DOI: https://doi.org/10.5334/cjme.80.s1
Figure 1 Co-author M. Patterson in coral polyp costume, with microplastics detection instrument as an accessory, at a major cosplay convention to conduct informal science outreach.

Figure 2 Close-up of coral polyp costume showing microplastics and zooplankton (copepod) trapped in the digestive system (gastrovascular cavity). Inset: Commercially available stuffed toy representing the copepod Centropages hamatus.

Figure 3 Prototype of microplastics detection instrument ('MantaRay') developed by co-author Edson, shown with its deployment housing. The accessory for the polyp costume was based on this instrument, and a diagram of the instrument was silkscreened on the back of the polyp costume (Figure 4) and included in the informational flyer (Appendix A).

Figure 4 Left: Costume accessory ('MantaRay' microplastics sampler) alongside the informational flyer (Appendix A). Right: Silk-screen diagram of the instrument on the back of the coral polyp costume.
| 2023-10-05T15:08:26.467Z | 2023-10-03T00:00:00.000 | {
"year": 2023,
"sha1": "cbeb3fc1a3c735f33a0848f5209f0e7b900bea40",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-up-j-ctjme-files/journals/1/articles/80/651c0546e3fd7.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "530e2d5bf652d038b35d4b7b6ad3bd2994a56707",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
55033827 | pes2o/s2orc | v3-fos-license | Landslide inventory development in a data-sparse region: spatial and temporal characteristics of landslides in Papua New Guinea
and increased external investment from mining and other companies, population and settled areas have increased, hence the potential for damage from landslides has also increased. Information on the spatial and temporal distribution of landslides, at a regional scale, is critical for developing landslide hazard maps and for planning, sustainable development and decision making. This study describes the methods used to produce the first, country-wide landslide inventory for PNG and analyses of landslide events which occurred between 1970 and 2013. The findings illustrate that there is a strong climatic control on landslide-triggering events and that the majority (∼ 61 %) of landslides in the PNG landslide inventory are initiated by rainfall-related triggers.
There is also large year-to-year variability in the annual occurrence of landslide events, and this is related to the phase of the El Niño Southern Oscillation (ENSO) and mesoscale rainfall variability. Landslide-triggering events occur during the north-westerly monsoon season during all phases of ENSO, but fewer landslide-triggering events are observed during drier season months (May to October) during El Niño phases than during either La Niña or ENSO-neutral periods. This analysis has identified landslide hazard hotspots and relationships between landslide occurrence and rainfall climatology, and this information can prove valuable in the assessment of trends and future behaviour, which can be useful for policy makers and planners.
1 Introduction
Papua New Guinea (PNG) is particularly prone to landslides due to its geomorphology, climate and geology. In recent years there have been numerous landslides which have resulted in large numbers of fatalities and caused significant socio-economic impacts upon communities surrounding the landslide site and further afield (e.g. the Tumbi Landslide in Southern Highlands Province; Robbins et al., 2013). Although PNG has experienced some of the largest recorded landslides in the world (e.g. the Kaiapit Landslide in 1988, Peart, 1991a; Drechsler et al., 1989, and the Bairaman Landslide in 1985, King and Loveday, 1985), research has tended to focus on basin-scale landsliding and has largely involved documenting the characteristics of individual landslides or a cluster of landslides associated with a specific trigger mechanism (Greenbaum et al., 1995). There have been field investigations to map landslide scars in particularly sensitive regions of the country, such as along the Highlands Highway (Tutton and Kuna, 1995; Kuna, 1998) and in close proximity to mining operations (Hearn, 1995; Fookes et al., 1991), but these studies have remained largely isolated and do not conform to a standard of landslide recording. To understand the temporal and spatial characteristics of landslides and their trigger mechanisms, assessments at a regional scale are required, particularly when trying to determine trends or develop models. Development of such regional-scale inventories can prove challenging, particularly in a region such as PNG, as the nature of landslides means that: (1) they frequently result in impacts over small areas compared to impacts associated with larger-scale natural hazards and (2) the areas affected by landslides are often remote and difficult to access, as well as being widely distributed relative to one another (Kirschbaum et al., 2009; Petley, 2012). Therefore, although there have been a number of fieldwork campaigns to update and extend existing geological maps in PNG, the development of regional landslide hazard maps, based on fieldwork, has proven difficult. By extension, this has limited the development of landslide inventories in the region. The aim of this study is to build upon existing methods and approaches (Greenbaum et al., 1995; Blong, 1985) to construct a regional landslide inventory for PNG, to improve the current knowledge and understanding of landslide occurrences and triggering factors in the region. An overview of the materials and methods used to create the inventory is provided in section two, followed by an outline of the techniques employed to reduce temporal and spatial uncertainty in the database. In section three, the results of analysis conducted on the landslide entries within the database are provided, with particular emphasis placed on the spatial and temporal distributions of landslide occurrence and their relationships with the spatial and temporal distributions of rainfall variability over a range of time scales (monthly to seasonal to annual). Discussion and conclusions are provided in sections four and five, respectively.
1.1 Study area and landslide incidence

The dominant trigger mechanisms for many landslides around the world are earthquakes (Keefer, 2002; Meunier et al., 2007) or rainfall (Iverson, 2000; Zêzere et al., 2005; Guzzetti et al., 2007). PNG is no exception. PNG lies within the Maritime Continent (Ramage, 1968) and is influenced by a tropical maritime climate. This is characterised by high rainfall accumulations, which alternate between wetter and drier periods seasonally, and high maximum and minimum temperatures (McGregor, 1989). Rainfall variability in this region is predominantly controlled by: (1) the meridional heat transfer of the Hadley Cell and the temporal and spatial variability of the Intertropical Convergence Zone (ITCZ), (2) the zonal Walker Circulation with its variability (e.g. the El Niño Southern Oscillation) and associated oceanic currents, (3) the north-westerly monsoon circulation and (4) the physiography of the region (McAlpine et al., 1983; McGregor and Niewolt, 1998; Qian, 2008). These controls can induce rainfall variability over a range of temporal and spatial scales, which in turn affects the temporal and spatial occurrence of landslide events. In addition to the meteorological complexity of the region, PNG also lies at the intersection of the large-scale collision of the north-easterly migrating Indo-Australian Plate and the westerly-shifting Pacific Plate. Between the Proterozoic and the Holocene, the region has undergone phases of igneous activity, rifting and subsidence, followed by periods of convergence and arc-continent collision (Hill and Hall, 2003). These processes have caused significant deformation and uplift, which has resulted in the formation of the central mainland cordillera and numerous additional mountain ranges (Finisterre Range, Morobe Province; Adelbert Range, Madang Province; Torricelli Mountains, West Sepik Province) across the country. These ranges have elevations in excess of 4000 m a.s.l. in places. Continuing deformation and the resultant tectonic shearing cause extensive faulting and expose much of PNG to regular moderate to high magnitude (magnitude 7 and above) earthquakes (Anton and Gibson, 2008). Earthquakes of this magnitude have resulted in widespread, high-density landsliding on numerous occasions (Pain and Bowler, 1973; Meunier et al., 2007). Such events are also frequently accompanied by landslide dams, which can cause significant additional damage upon breaching (King and Loveday, 1985).
Also common, particularly in the Papuan Fold Belt, are rotational slumps. These generally occur in homogeneous sedimentary rocks, such as mudstones, marls, sandstones and greywackes, on slopes as low as 10°. In these circumstances the displacement of material is generally limited, while events occurring on steeper slopes (> 30°) can result in deposits with volumes in excess of 500,000 m³ (e.g. the Dinidam Landslide; Blong, 1986). Numerous slumps, of varying size, have been observed along the Highlands Highway (Tutton and Kuna, 1995; Kuna, 1998) and can regularly lead to road closures and property damage. In addition to slumps, mudslides also result in damage and disruption, affecting both infrastructure and property, along the Highlands Highway. Defined as "masses of argillaceous, silty or very fine sandy debris" which displace material by "sliding on discrete boundary surfaces in relatively slow moving lobate forms" (Stead, 1990), they are most frequently observed in areas which are underlain by the Chim Formation. Movements of this type are particularly problematic because their generally slow displacement rate (∼ 60 mm yr⁻¹ at Yakatabari; Blong, 1985) can increase rapidly in association with localized changes in shear strength and pore water pressure. Furthermore, variations in depth, width and style of movement (Comegna et al., 2007), and the fact that they can occur on very low slope angles (between 6 and 15°) which coincide with settled and populated areas, mean that they are difficult to mitigate against. By contrast, translational slides and rockslides typically occur on very steep slopes (between 30 and 50°) in areas with deeply incised terrain. The failure mechanisms of these slides are strongly influenced by bedding planes, joints, faults and the interface between weathered material and fresh bedrock. Given the highly fractured and deformed nature of many rocks in PNG, these slides can occur in a wide range of geological materials. However, the distribution of translational slides and rockslides is strongly linked to the topography, making areas susceptible to them easier to identify.

2 Materials and methods

2.1 Regional landslide inventory construction: criteria, sources and structure

PNG currently has no systematic, routine approach for recording landslide events. This means that although a large number of landslides have been identified by their scars and deposits (Kuna, 1998), the dates of events are rarely recorded. To collate a landslide inventory which can be used to examine the temporal and spatial frequency of landslides and the corresponding relationship between these events and potential landslide triggers, it is essential that the dates and locations of landslides are recorded. Therefore, in the new PNG inventory only those landslides where both the date and location could be established with reasonable accuracy were entered. The precise date of landslide occurrence is often difficult to determine; however, where there are eye witnesses to the event, the day of initiation can be recorded. In instances with no eye witnesses to the event, the time of initiation was approximated using a minimum and a maximum date boundary relating to the earliest and latest dates within which the landslide event occurred. The minimum and maximum dates frequently related to the start and end dates of potential landslide-triggering events, such as a flood event, which resulted in landslides. The basic location information required for a landslide to be included in the inventory was either a village or landmark name and the administrative province. All landslide records were analysed closely for the veracity and accuracy of essential data in terms of geographical locality, time, corroborating evidence (e.g. witness statements, press, quality of writing and reporting), and incident impact. Many records were dismissed as they "failed" the quality test needed for a study such as this. Records of landslide activity were collected from a number of sources, including: -

Each data source used to construct the new landslide inventory had its own uncertainties and limitations, and therefore the details captured for each landslide entry vary in completeness and scientific content. The most consistently available data for a landslide event were the (approximate) date of occurrence, affected areas, trigger mechanism (i.e., heavy rainfall) and the impacts of the event. Less consistently reported were the landslide type and the landslide size (volume and/or area affected). A full list of the critical and relevant information collected for each landslide entry is shown in Table 1.
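To illustrate the entry criteria just described, a minimal record structure might look like the sketch below; the field names and example values are our own illustration, not the actual schema of the PNG inventory:

```python
# Minimal sketch of an inventory entry with uncertain timing, following the
# criteria described above. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class LandslideRecord:
    location_name: str             # village or landmark name (required)
    province: str                  # administrative province (required)
    date_min: date                 # earliest possible date of initiation
    date_max: date                 # latest possible date of initiation
    trigger: Optional[str] = None  # e.g. "heavy rainfall", "earthquake"
    sources: list = field(default_factory=list)

    @property
    def date_uncertainty_days(self) -> int:
        """Width of the temporal window within which the event occurred."""
        return (self.date_max - self.date_min).days

# An eyewitnessed event pins initiation to a single day (zero uncertainty);
# otherwise date_min/date_max bound the triggering event (dates illustrative).
rec = LandslideRecord("Example Village", "Morobe",
                      date(2010, 3, 1), date(2010, 3, 5),
                      trigger="heavy rainfall")
print(rec.date_uncertainty_days)  # 4
```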
The development of a landslide database has a number of complexities. Firstly, although the majority of information sources documented individual landslides which could be spatially and temporally identified, there were a number of occasions when terms such as "some" or "numerous" were used to describe a landslide cluster, with no further spatial or temporal information related to the individual landslide deposits.
In these instances, the sources often outlined the area or villages affected by the landslides but did not provide an indication of the exact number of landslides which had contributed to the observed impacts. To account for this, an additional attribute column referred to as the "landslide cluster group size" was added to the database. This allowed each entry to be assigned to one of four cluster group sizes, representing the number of landslides believed to have been associated with the database entry: (1) 1-10, (2) 10-100, (3) 100-1000 or (4) > 1000. Where sources indicated that landslides affected multiple villages as a result of the same potential triggering factors, an entry for each unique spatial location affected was included in the database. This method aims to capture some of the uncertainty around the recording of the "true" number of landslides initiated by a triggering event, while maintaining the integrity of those events which have the required temporal and spatial information necessary for analysing patterns, trends and triggering mechanisms.
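To make the binning concrete, the following minimal Python sketch (an illustration, not the authors' code) maps an estimated landslide count onto the four ordinal cluster groups; the example mapping of vague source terms such as "numerous" onto counts is purely an assumption for demonstration.

```python
def cluster_group_size(n_landslides: int) -> int:
    """Map an estimated landslide count to the ordinal cluster groups:
    (1) 1-10, (2) 10-100, (3) 100-1000, (4) > 1000.
    The text lists overlapping bin edges (10, 100, 1000); placing each
    edge in the lower group is an assumption made here."""
    if n_landslides <= 10:
        return 1
    if n_landslides <= 100:
        return 2
    if n_landslides <= 1000:
        return 3
    return 4

# Hypothetical count estimates for vague source terms (assumptions).
VAGUE_TERMS = {"some": 5, "several": 5, "numerous": 50}

print(cluster_group_size(VAGUE_TERMS["numerous"]))  # -> 2
```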
Secondly, the data sources used to collate the PNG inventory were produced by a range of authors (e.g. research scientists, geologists, media correspondents and humanitarian agencies), some of whom have no specialist knowledge of documenting landslide events. This introduces inaccuracies, particularly with regard to the more technical language used to document the event, such as identifying the landslide type. In media publications, for example, the majority of events are described using the term "landslide". In a small number of cases, however, the landslides are referred to as mudflows, but there is little evidence to suggest that this term reflects the Cruden and Varnes (1996) landslide classification. It should also be noted that there are potential inaccuracies where landslide triggers have been pre-determined in the media and other information sources, without a site inspection of the landslide by geotechnical specialists. In many instances the decision on which type of triggering event led to a landslide is based on the testimonies of people living within the affected community. The decision was taken to include information related to the potential trigger if it was available, regardless of the data source, as in the majority of cases the triggering events could be cross-referenced and verified, in terms of their timing and location, using multiple hazard and disaster databases. This approach meant that it was possible to provide corroborating support as to whether earthquakes, flooding or tropical cyclones could have contributed to the landslide entries recorded. Furthermore, it allowed multiple potential triggering factors to be attributed to a single database entry where required. Data for other attributes, such as landslide type and size, were only added where information was available from a scientific source (e.g. technical site reports or journal publications).
2.2 Reducing spatial and temporal uncertainty in the landslide inventory
The variety of data sources used to collate the landslide inventory introduced a number of spatial and temporal uncertainties. For example, spatial information relating to landslide activity was often provided by the name of a town, village or landmark which had been impacted by the event, rather than the latitude and longitude of the landslide head scarp or deposit. Some of these locations were found to be some distance from the actual landslide site, while in other cases village names were misspelt or had changed over time, making them difficult to identify spatially. To address the issue of spatial uncertainty, a number of steps were taken:

1. Landslide entries were cross-referenced against recent settlement data provided by the PNG MRA. This allowed the correct province and administrative district to be identified in the majority of cases. It was also possible to check whether the settlement had any other names associated with it, or variant spellings.
2. Where possible, static maps found in journal publications and site inspection reports were digitized to provide information on the size and location of the landslide deposit or to identify a geographical area affected by landsliding. This was particularly useful for identifying the locations or areas affected by earthquake-induced landsliding. In a number of cases, the areal extents of high-density landsliding associated with a specific earthquake epicentre could be identified and mapped.
3. False Colour Composite (FCC) images were generated from Landsat data, in which landslide scars (blue tones), which become exposed following landslides, can be differentiated from vegetated slopes (variations of red tones). FCC images were then overlain on digital terrain and settlement data so that the location and, where possible, the size of the landslide(s) could be verified (Fig. 3). In order to confirm that the blue tones observed in the FCC images were associated with landslide scars, the Digital Number (DN) values of the 7 bands were extracted from the area identified as a landslide scar and compared against the typical spectral ranges indicative of many active landslides (Table 2; Petley, 2002). If the values corresponded well, the landslide entry was considered spatially verified. Although this method proved useful for a number of the landslide entries, cloud cover and shadowing prevented other entries from being verified with this technique. Furthermore, landslides with widths or lengths smaller than 50 m could not be captured, due to the resolution of the Landsat images (Petley, 2002).
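The spectral verification step amounts to a per-band range test on the sampled pixels. The sketch below is a hypothetical illustration of that workflow; the reference DN ranges are placeholders, not the published values from Table 2 (Petley, 2002).

```python
import numpy as np

# Placeholder reference DN ranges per Landsat band (assumed values,
# standing in for the ranges tabulated in Table 2; Petley, 2002).
REFERENCE_DN_RANGES = {band: (40, 120) for band in range(1, 8)}

def scar_is_spectrally_verified(scar_pixels: np.ndarray, min_fraction: float = 0.8) -> bool:
    """scar_pixels: (n_pixels, 7) array of DN values for the 7 bands,
    sampled from the area identified as a landslide scar."""
    ok = np.ones(scar_pixels.shape[0], dtype=bool)
    for band, (lo, hi) in REFERENCE_DN_RANGES.items():
        vals = scar_pixels[:, band - 1]
        ok &= (vals >= lo) & (vals <= hi)
    # Treat the entry as spatially verified when most pixels fall
    # within the reference range in every band (threshold assumed).
    return ok.mean() >= min_fraction

rng = np.random.default_rng(0)
print(scar_is_spectrally_verified(rng.integers(40, 121, size=(100, 7))))  # True
```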
Quantifying spatial uncertainty is particularly challenging where a wide variety of data sources have been used to collate the landslide inventory, as in this case. Therefore, in this study each entry is assigned to a spatial uncertainty group based on a more subjective decision framework. The "low uncertainty" group represents entries which were digitized based on information from journal publications and/or site inspection reports and the satellite-based FCC method; also included in this group were entries where latitude and longitude information for the landslide site (i.e. the location of the landslide deposit) was available. The "medium uncertainty" group represents entries where the village or landmark affected was identified and successfully cross-referenced with MRA settlement data, so that a latitude and longitude for the affected site(s) could be identified. The "high uncertainty" group represents entries where only an approximate area, such as the river catchment or Local Level Government (LLG) area, could be identified; in these instances an approximate latitude and longitude point representative of the catchment or LLG area was recorded in the database. In the case of earthquake-induced landsliding, where information on the location of landslides was scarce, the earthquake epicentre or a point representative of the area of high-density landsliding was recorded. In both cases these entries were assigned to the "high uncertainty" group.
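A minimal sketch of this decision framework, with assumed field names for the entry attributes, might look as follows.

```python
def spatial_uncertainty_group(entry: dict) -> str:
    """Assign an inventory entry to a spatial uncertainty group.
    Field names here are assumptions, not the database schema."""
    # Low: digitized from publications/site reports, FCC-verified,
    # or explicit deposit coordinates were available.
    if entry.get("digitized") or entry.get("fcc_verified") or entry.get("deposit_latlon"):
        return "low"
    # Medium: affected village/landmark cross-referenced against
    # MRA settlement data to obtain a latitude/longitude.
    if entry.get("settlement_latlon"):
        return "medium"
    # High: only a catchment/LLG area or an earthquake epicentre
    # could be identified.
    return "high"

print(spatial_uncertainty_group({"settlement_latlon": (-5.86, 144.23)}))  # "medium"
```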
In addition to spatial uncertainty, the availability of temporal information varied significantly depending on the data source used to identify the landslide. For very large or high-density landslides which resulted in large socio-economic impacts, the dates when landslides occurred were often clearly recorded in either site inspection reports or scientific journal publications. However, where landslides were identified as a secondary natural hazard, occurring as a result of flooding or an earthquake, the dates of the associated landslides were poorly recorded. In these instances, landslide initiation dates could only be estimated based on the date of the earthquake or the period over which flooding was recorded. For these events, the estimated time of landslide initiation was constrained between a minimum and a maximum date boundary. This was accomplished by cross-referencing the date and potential landslide trigger against multiple hazard and disaster databases. For flood-induced landslides, the Dartmouth Flood Observatory archive (Brakenridge, 2010) was particularly useful, as it compiles flood inundation extents and additional impact information, including secondary hazards such as landslides; this information has been collected since 1985 using news, governmental, instrumental and remote sensing sources. For earthquake-induced landslides, the PNG Geophysical Observatory and the USGS PAGER (Prompt Assessment of Global Earthquakes for Response) databases were used. Using information from the multiple hazard and disaster databases, two distinct fields were added to the inventory: a start and an end date boundary, each holding information on the day, month and year, relating to the earliest and latest possible dates, respectively, when landslides could have been initiated. By carefully cross-referencing each landslide entry, the uncertainty around the time and duration of each potential triggering event was reduced. For 80 % of entries, the time over which landslides were likely to have been initiated could be constrained within a period of 10 days or less. For the remaining 20 % of entries, triggering event durations exceeded 10 days, and for 9 % of those entries the duration of the triggering events exceeded 30 days, which means that landslides could have been initiated or been active at any point over this period.
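The date-boundary constraint reduces to an inclusive day count between the two fields. The sketch below, with invented example dates, shows how the share of entries constrained to 10 days or less could be computed.

```python
from datetime import date

def window_days(start: date, end: date) -> int:
    """Length of the initiation window implied by the start and end
    date boundaries (inclusive of both days)."""
    return (end - start).days + 1

# Hypothetical entries: (start boundary, end boundary).
entries = [
    (date(1997, 3, 2), date(1997, 3, 2)),    # eyewitness account: exact day
    (date(1988, 9, 10), date(1988, 9, 14)),  # constrained by a flood period
    (date(2007, 1, 1), date(2007, 2, 15)),   # poorly constrained event
]

within_10 = sum(window_days(s, e) <= 10 for s, e in entries)
print(f"{100 * within_10 / len(entries):.0f}% constrained to 10 days or less")
```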
2.3 Rainfall data
Unfortunately, at the time of this analysis rainfall gauge data were not available from the National Weather Service in PNG. However, gauge-based climatology data (monthly means over various reference periods) were available for nine sites across PNG via the World Meteorological Organisation (WMO) website (Fig. 1), while monthly rainfall data were available for an additional two sites via direct correspondence with mining companies. A major drawback of these data is their small number and sparse spatial distribution. Eight of the eleven rainfall gauge sites are located in coastal areas, with only three sites representing rainfall patterns in areas known to experience the majority of landslides. Furthermore, the WMO gauge data were only available as climatological averages and therefore could not provide an indication of the temporal variability of rainfall at different timescales. Given the limitations of the available rainfall gauge data, additional sources of rainfall data were sought. These additional data were obtained from the Global Precipitation Climatology Centre (GPCC). The GPCC Full Data Reanalysis Product Version 6 (Adler et al., 2003) uses near-real-time and non-real-time gauge stations held in the GPCC database to produce gridded (0.5° or ∼55 km resolution) monthly rainfall accumulations over land areas around the globe. WMO and other rainfall gauge-based data sources are interpolated to produce the gridded datasets, offering greater temporal and spatial resolution for better comparisons between rainfall and landslide occurrences. Monthly rainfall accumulations were not available for the entire period of the landslide inventory; therefore, monthly data over the period 1970 to 2010 (41 years) have been used to form the basis of the climatological analysis in this study.
To compare the spatial and temporal characteristics of landslide activity relative to changing rainfall patterns, the climatological analysis has focused on only those GPCC grid squares within which landslide activity has been recorded over the duration of the PNG landslide inventory. This resulted in monthly rainfall totals from 53 GPCC grid squares being used to generate a monthly rainfall climatology, a range (90th percentile to 10th percentile) of area-averaged monthly rainfall percentiles (based on the 41 year reference period) and annual rainfall accumulations. In order to assess the time series characteristics of the 41 year rainfall record, 6-monthly rainfall totals for May to October and November to April were calculated for each of the 53 GPCC grid squares. These two periods correspond to south-easterly trade flows being dominant and north-westerly monsoon flows being dominant across PNG, respectively. Using these 6-monthly totals, a standardized rainfall anomaly index (RAI) was calculated for each of the 53 GPCC grid squares using:

$$\hat{X}_{ij} = \frac{X_{ij} - \bar{X}_i}{S_i}$$

where $\hat{X}_{ij}$ is the standardized 6-monthly rainfall total for GPCC grid square $i$ and 6-monthly period $j$, $X_{ij}$ is the corresponding 6-monthly rainfall total, $\bar{X}_i$ is the mean 6-monthly rainfall total for GPCC grid square $i$ calculated over the period 1970 to 2010, and $S_i$ is the standard deviation of the 6-monthly rainfall totals calculated over the same reference period. A landslide-area rainfall anomaly index was then calculated by averaging over all of the 53 GPCC grid squares representing the landslide-affected areas in PNG as follows:

$$\overline{\mathrm{RAI}}_j = \frac{1}{n} \sum_{i=1}^{n} \hat{X}_{ij}$$

where $\overline{\mathrm{RAI}}_j$ is the area-averaged value for 6-monthly period $j$ and $n$ is the number of GPCC grid squares.
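A minimal Python sketch of this two-step calculation, using synthetic data in place of the GPCC 6-monthly totals, might look as follows.

```python
import numpy as np

# Synthetic 6-monthly rainfall totals: 53 grid squares x 82 periods
# (41 years x 2 seasons); gamma draws stand in for the GPCC data.
rng = np.random.default_rng(1)
totals = rng.gamma(shape=4.0, scale=300.0, size=(53, 82))

mean_i = totals.mean(axis=1, keepdims=True)  # mean per grid square (X̄_i)
std_i = totals.std(axis=1, keepdims=True)    # standard deviation (S_i)
X_hat = (totals - mean_i) / std_i            # standardized anomalies (X̂_ij)

RAI_bar = X_hat.mean(axis=0)                 # area-averaged index per period
print(RAI_bar.shape)                         # (82,) -> one value per 6-monthly period
```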
The availability of gridded GPCC data means that, in addition to analysing the temporal variability of rainfall and landslides, the spatial characteristics of rainfall and their link to landslide occurrence can also be investigated. Gridded maps of mean annual precipitation (MAP) and 3-monthly seasonal mean precipitation maps, as calculated from monthly data over the 1970-2010 reference period, have also been produced so that rainfall distributions can be reviewed relative to landslide-affected locations.
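As an illustration, deriving MAP and the 3-monthly seasonal composites from a monthly gridded record is a short reduction; the array shapes and data below are assumptions.

```python
import numpy as np

# Synthetic monthly rainfall record: (time, lat, lon) over 1970-2010.
rng = np.random.default_rng(2)
monthly = rng.gamma(3.0, 80.0, size=(41 * 12, 60, 80))
months = np.tile(np.arange(1, 13), 41)  # calendar month of each time step

# Mean annual precipitation: sum each year's 12 months, then average.
MAP = monthly.reshape(41, 12, 60, 80).sum(axis=1).mean(axis=0)

# 3-monthly seasonal composites (mean monthly rainfall per season).
SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5), "JJA": (6, 7, 8), "SON": (9, 10, 11)}
seasonal_mean = {name: monthly[np.isin(months, mm)].mean(axis=0)
                 for name, mm in SEASONS.items()}

print(MAP.shape, seasonal_mean["DJF"].shape)  # (60, 80) (60, 80)
```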
3 Landslide inventory statistics
The database consists of 167 entries recorded between January 1970 and December 2013. Each entry represents a single landslide occurrence or a cluster of landslides, identifiable by a unique spatial location. The spatial locations of individual landslides or clusters of landslides are provided by latitude and longitude points, which are additionally assigned to a spatial uncertainty group (low, medium or high). The majority (∼63 %) of entries are in the medium spatial uncertainty group, representing entries where latitude and longitude information for the affected village or landmark associated with the landslide(s) has been successfully cross-referenced with MRA settlement data. In these instances, the landslide or landslide cluster is expected to be within 10 km of the village/landmark identified in the source material. Approximately 10 and 26 % of entries fall into the high uncertainty and low uncertainty groups, respectively. The landslides collected in the database tend to represent large-scale, high- to medium-impact events. The magnitude of the landslide events has been assessed predominantly on the impacts the event had upon the community, as this information was more readily available than quantitative size (volume or area) information. Impact information was available for analysis for 61 % of entries, and 38 % of these can be categorized as high-impact landslide events which resulted in fatalities and additional damage to infrastructure. Medium-impact landslides, representing those where there was significant damage affecting a number of different types of infrastructure but no recorded fatalities, account for 43 % of the entries where this information was available, while 19 % of entries can be categorized as low-impact landslides which resulted in some minor damage or disruption. Of course, there are instances where very large landslides do not result in extensive damage or fatalities because they occur in very sparsely populated regions: the rockslide/debris flow at the Hindenburg Wall in Western Province had an estimated volume of between 5 and 7 million m³ (Zeriga-Alone, 2012), yet there are no records of substantial damage associated with the event. By reviewing both the impact information and the volume/area data where available (∼24 % of database entries), it can be asserted that the majority of the landslides or landslide clusters captured in this new inventory are large-scale, high- to medium-impact events. The 167 database entries can be subdivided into landslide-triggering events. By doing this, database entries are grouped based on whether they are associated with the same potential triggering event, identified by a unique temporal period. This is possible because multiple landslides and/or landslide clusters can occur in association with the same triggering event and can often affect more than one community across a geographical area. This means that there can be multiple database entries associated with the same temporally specific triggering event (e.g. a flood event). In this study, triggering events are defined as external factors, such as a rainfall event or an earthquake, which change the state of the slope and result in a landslide. In the new PNG landslide inventory, the 167 entries are the result of 103 separate landslide-triggering events. The triggering events captured within the inventory include earthquakes, flooding, tropical cyclones, monsoon rainfall and anthropogenic influences (e.g. excavations, mining). Using the cross-referencing approaches outlined in Sect. 2.2, it was possible to verify the source information and determine whether earthquakes, flooding or tropical cyclones could have contributed to the landslide occurrences in the inventory. Frequently (> 35 % of landslide-triggering events), a combination of factors was noted as being influential in initiating the landslide or landslides, while in ∼15 % of landslide-triggering events the potential trigger was not recorded or was unknown. This was either because the triggering event could not be established, even after significant research on the event (e.g. the Kaiapit Landslide in Morobe Province in 1988; Peart, 1991a), or it was simply missing from the information source. Rainfall and the various combinations of triggers associated with it (i.e. rainfall and anthropogenic activity or rainfall and earthquake activity) account for the majority (∼61 %) of all landslide-triggering events in the PNG inventory (Fig. 4). This is not unexpected given that PNG experiences some of the highest annual rainfall totals globally (McAlpine et al., 1983).
In addition to rainfall-associated triggers, ∼22 % of entries were linked with earthquakes. All of the earthquakes which were identified as triggering landslides were of magnitude 5 or greater. These events are widely distributed across PNG, with events being observed through the Papuan Fold Belt tectonic seismic zone, as well as the North Sepik, Ramu, Huon, New Britain and Bougainville Island tectonic seismic zones (Ripper and Letz, 1993). It is surprising that there are not more records of earthquake-induced landsliding events in PNG, particularly given the complex nature of tectonics in the area and the regularity with which the region experiences moderate- to high-magnitude earthquakes (Anton and Gibson, 2008). Table 3 shows return periods, in years, associated with magnitude 6.0 earthquakes in the different seismic zones highlighted above (Ripper and Letz, 1993) and suggests that earthquakes capable of triggering landslides (i.e. earthquakes greater than magnitude 5 in the PNG landslide inventory) are regular occurrences in these regions. This suggests a discrepancy between the number of earthquakes potentially capable of triggering many landslides and the number of earthquakes actually recorded as having resulted in landslides in PNG. One reason for this is that, in these instances, landslides are observed as secondary hazards to the principal hazardous event, and such secondary hazards are frequently subject to large-scale under-reporting (Petley et al., 2007). Due to the limited number of earthquake-only landslide-triggering events, these events, together with the earthquake/anthropogenic triggering events, will not be analysed further in this study. With regard to anthropogenic influences on landsliding, there is only a very small proportion of entries (< 5 %) where landslides are believed to have been triggered solely by anthropogenic activities. The majority of these entries are associated with infrastructure development in support of mining activities, and they are usually well documented as the propensity for compensation payouts for perceived anthropogenic landslides has increased (Kuna, 1998). Although these events may be well documented, they are not always rigorously or independently assessed in terms of the landslide trigger factors. Therefore, given that there is significant uncertainty around anthropogenic activity as a stand-alone triggering mechanism, the decision has been made to include these entries in the further analysis. It should also be noted that 3 % of the landslide-triggering events were assigned to a category labelled "Other". These landslides were thought to be associated with lake overtopping (Peart, 1991b) or river erosion, either of which can be linked to periods of high-intensity or prolonged rainfall, and therefore these events will also be included in the further analysis.
Based on the assessment of the potential triggering-event information held in the PNG landslide inventory, the further analysis and results focus on the 86 landslide-triggering events which are associated with rainfall and the various combinations of triggers related to it (Fig. 4), as well as all those entries linked to the "Anthropogenic", "Other" and "Unknown" trigger factor categories.
3.1 Temporal characteristics of landslide occurrence
Analysis of the annual occurrence of the 86 landslide-triggering events indicates that there is large year-to-year variability (Fig. 5). There are distinct periods when the number of landslide-triggering events increases (1975-1976, 1983-1991, 2002-2009) and periods when the number of landslide-triggering events is substantially lower (1972-1973, 1981-1982, 1994-1995, 1999, 2001 and 2010-2011). There also appears to be a slight increasing trend, with more landslide-triggering events being recorded near the end of the time series, particularly over the period between 2006 and 2007. A review of the temporal occurrence of recorded landslides indicates a strong climatic control on the triggering events (Fig. 5), with the highest numbers being observed between December and March, with a second peak in May. Fewer landslide-triggering events are observed between June and October, after which the number of landslide-triggering events gradually increases. This pattern of landslide activity relates closely to the periods dominated by north-westerly monsoon flows and south-easterly trade flows, respectively. Many locations in PNG observe drier conditions and lower monthly rainfall totals during the period between May and October, when south-easterly trade winds are dominant in the region, while wetter conditions tend to prevail between November and April, coinciding with the north-westerly monsoon (Figs. 1 and 5). Despite the strong seasonality illustrated in Fig. 5, landslide-triggering events associated with rainfall continue to be observed during the drier season. This differs from regions such as Nepal, where the number of fatal landslides falls to almost zero between November and April, months which lie outside of the South Asian Summer Monsoon (Petley et al., 2007). By comparing percentiles of the monthly precipitation climatology (based on the 41 year rainfall reference period) with individual monthly rainfall totals observed at times of landslide activity, it is possible to identify why this may be the case. Landslides occurring in the drier season are, in the majority of cases (∼61 %), associated with months which observed exceptional (> 80th percentile) rainfall (Fig. 6). This compares to the wetter season, where only 44 % of entries are linked to monthly rainfall totals greater than the 80th percentile of climatology. In fact, between February and May, 42 % of landslides occurred during months with rainfall totals less than the 50th percentile. This indicates that during the drier season landslide-triggering events tend to be associated with more extreme rainfall accumulations, while during the wetter season larger numbers of landslides can be triggered during months with lower absolute monthly rainfall totals. This may initially seem counterintuitive, as typically higher monthly rainfall totals are observed during the wetter season (Fig. 5). However, a review of the coefficient of variation (CV), calculated by dividing the standard deviation by the mean, shows that there is greater rainfall variability during the drier season (CV ∼52 %) compared with the wetter season (CV ∼30 %). The maximum and minimum 6-monthly rainfall totals for the north-westerly monsoon season are 4782 and 490 mm, respectively, while the maximum and minimum 6-monthly rainfall totals for the drier season are 5741 and 143 mm, respectively. These statistics indicate that over the landslide-affected areas in PNG the wetter season is associated with more persistent, less variable rainfall which results in high average rainfall totals, while during the drier season rainfall is less persistent and more variable, with large positive and negative departures from the mean. Relating the rainfall climatology statistics to the distribution of landslide events observed in Fig. 6 suggests two things: (1) given that the majority of landslides initiated in the drier season are linked to extreme rainfall (∼61 %), they are likely to be associated with convective storms, which are generally small, localised and more isolated rainfall events; (2) slope instability initiated during the wetter season is likely to be associated with the greater and more persistent water availability made possible by more consistent deep convection affecting the region.
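The CV comparison quoted above reduces to one line per season; the synthetic totals below are chosen only to mimic the reported contrast between the drier and wetter seasons.

```python
import numpy as np

rng = np.random.default_rng(3)
dry = rng.gamma(2.0, 900.0, size=41)   # synthetic drier-season totals (mm)
wet = rng.gamma(10.0, 250.0, size=41)  # synthetic wetter-season totals (mm)

cv = lambda x: x.std() / x.mean()  # coefficient of variation
print(f"drier-season CV = {cv(dry):.0%}, wetter-season CV = {cv(wet):.0%}")
```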
Greater water availability and interaction with the surface and subsurface of slopes allows multiple mechanisms of instability (e.g. changes in groundwater level, greater water-slope interactions associated with increased infiltration, and increases in runoff and erosion) to act upon susceptible slopes, altering pore water pressures and shear strength and enhancing potential instability throughout the wetter season. It also suggests that rainfall accumulated over all wetter-season months may be important and influential in triggering landslides during the monsoon period, particularly where landslides are triggered during months with below-average rainfall. Figure 6 indicates that season-to-season rainfall variability could have important impacts on the number of landslide-triggering events, particularly in the drier season. To understand how this variability is related to landslide occurrence, total numbers of landslide-triggering events per 6-month period have been calculated and compared against a 6-monthly rainfall anomaly index (RAI). Figure 7 illustrates that there is considerable interseasonal rainfall variability across the grid squares affected by landslide activity. Within this variability there are groupings of positive rainfall departures (1970-1971, 1974-1978, 1983-1985, 1988-1991, 1998-2001 and 2007-2009), which indicate wetter conditions for consecutive 6-monthly periods. The grouped positive rainfall departures are seen to persist for between 2 and 5 seasons and occur at intervals of between 3 and 7 years. The average recurrence of these groupings over the 1970-2010 reference period is approximately 4.5 years. This recurrence interval is similar to the average time between El Niño-Southern Oscillation (ENSO) events (McGregor and Nieuwolt, 1998). Using the NOAA/ESRL/PSD bimonthly, ranked index of the Multivariate ENSO Index (MEI; Wolter and Timlin, 1993, 1998), years associated with the extreme modes of the Southern Oscillation have been identified. Based on these data, collected in 2012, Table 4 illustrates the years which are associated with El Niño events and La Niña events, respectively. It is widely acknowledged that El Niño introduces "typically" drier than normal conditions to PNG (McVicar and Bierwirth, 2001), as the zone of deep convection associated with the rising limb of the Walker Circulation accompanies the eastward propagation of warmer sea surface temperatures (Qian et al., 2010), and that La Niña introduces "typically" wetter conditions. Interestingly, the groupings of positive rainfall departures tend to follow, rather than coincide with, La Niña episodes in PNG (Fig. 7). Furthermore, landslide-triggering events tend to coincide with La Niña episodes or ENSO-neutral episodes and are less directly coincident with the groupings of positive rainfall departures. El Niño episodes tend to coincide with seasons where no or very few landslide-triggering events occur and where large negative departures from the 6-monthly mean rainfall are observed. These departures are usually greatest in the drier season. However, landslide-triggering events continue to occur, particularly during the wetter seasons of El Niño episodes (i.e. 1987 and 1992; Fig. 7). This can partially be explained by comparing the variability of the wetter-season RAI with that of the drier-season RAI. Figure 8 shows that 6-monthly rainfall exhibits larger variability between consecutive drier seasons than between consecutive wetter seasons. The occurrence of El Niño and La Niña events appears to have a large influence on the drier-season rainfall variability, as the peaks and troughs in the drier-season 3 year running mean illustrate, but limited influence on the wetter-season RAI. Therefore, landslide-triggering events continue to occur during the north-westerly monsoon season during all phases of ENSO, but fewer landslide-triggering events are observed during drier-season months during El Niño phases than during either La Niña or ENSO-neutral periods.
3.2 Spatial characteristics of landslide occurrence
As with the temporal variability, landslide-triggering events are very unevenly distributed spatially across PNG (Fig. 9a). Higher densities of landslide occurrences are observed in provinces which intersect the mountainous central spine of the country. The highest densities are seen in Western Highlands, Chimbu, Western, Central and West Sepik Provinces, as well as in the Huon Peninsula in Morobe Province. The spatial distribution of the landslide entries appears to be determined primarily by a combination of relief, precipitation and population density. The high-density pocket of landslide activity observed in northern Western Province coincides with the area of greatest annual rainfall (Fig. 9b). This zone of high rainfall accumulations extends towards the south-east as a band, following the southern edge of the Papuan Fold Belt. The area directly south of the Fold Belt, where the highest rainfall accumulations tend to be observed, comprises predominantly flat, swampy plains, and therefore records of landslide activity are scarce in these areas. The northern edge of this band of high rainfall accumulations coincides with relief which exceeds 1000 m, and this is where clusters of landslides begin to be observed, extending down the southern edge of the Papuan Fold Belt in parallel with the band of high annual rainfall accumulations. Additional high-density pockets of landslide occurrence are seen in Western Highlands and Chimbu Provinces (Fig. 9a), which lie within the central mainland cordillera, where annual rainfall totals exceed 2700 mm yr⁻¹ and relief can exceed 3000 m in places. The terrain is very rugged, and slope angles can vary significantly, up to 50°. Despite this, these areas are some of the most densely populated of the mountainous rural provinces in PNG, increasing the likelihood for landslides to interact with communities and infrastructure and be recorded (Fig. 9c). In addition, these areas also have high percentages of cultivated land relative to their total land area, with Western Highlands, Chimbu and Eastern Highlands Provinces having 50, 42 and 50 % of their total land areas cultivated, respectively (Saunders, 1993; Bourke and Harwood, 2009). This compares to the southern Provinces (Western, Gulf, National Capital District and Oro), where on average less than 20 % of the total land area is cultivated (Saunders, 1993; Bourke and Harwood, 2009). Therefore, in addition to higher densities of people with the potential to be affected by landslides, the populations within Western Highlands and Chimbu Provinces tend to have increased interaction with the land through agricultural practices, which in turn can alter slope stability and lead to an increased probability of landslide occurrence. Furthermore, these areas are also known to be underlain by the Chim Formation, which comprises dark grey, thinly laminated mudstone with siltstone and some volcaniclastic sandstone. The mudstones are generally weak and break down to form highly plastic silty clay (Peart, 1991c). Rotational landslides and mudslides are more common in areas where this formation crops out or is overlain by limestone or unconsolidated scree deposits, as interactions with water or seismic activity can easily mobilise these weak strata. The strong seasonality observed in the temporal analysis between rainfall climatology and landslide-triggering events can also be observed spatially, particularly during the drier season. Splitting the landslide entries into 3-monthly seasons (December, January and February (DJF); March, April and May (MAM); June, July and August (JJA); September, October and November (SON)) allows the temporal and spatial distributions of medium to large landslides to be observed (Fig. 10). Corresponding 3-monthly mean rainfall composites (based on the 41 year reference period) additionally illustrate how the rainfall distribution varies spatially as the seasonal cycle progresses. It is evident that as the rainfall distribution alters, so does the distribution of landslide-affected areas, particularly as the cycle moves from the wetter season (DJF and MAM) to the drier season (JJA and SON). During DJF and SON, the highest rainfall totals are observed in the western-central region of Western Province, along the border with Indonesia, and along northerly facing coastlines. The well-defined rainfall pattern observed in the wet-season plot in Fig. 9b starts to develop in SON, extending south-eastwards from the western border region, and strengthens through DJF and MAM. Landslides are observed at the northern edge of this band, as rainfall interacts with topography in excess of 1000 m. In both the DJF and SON seasons, landslide-triggering events are broadly confined to the central mainland cordillera and mountainous areas (north-west Toricelli Mountains and north-central Adelbert Range during DJF, and south-east Owen Stanley Range on the Papuan Peninsula during SON) of the country.
Of the four seasons, the greatest spatial spread of landslide occurrences tends to occur during JJA and MAM (Fig. 10). The reasons for this are different in each case.
As identified in Fig. 6, landslide-triggering events during JJA tend to be associated with exceptional rainfall which exceeds the 80th percentile for the month of initiation. Rainfall during this season is driven predominantly by orographic and physiographically induced processes. These mesoscale features, including mesoscale convective complexes, mountain-valley winds and land-sea breezes, lead to localised, smaller-scale rainfall events affecting distinct regions of PNG (e.g. the southern coast of New Britain and the north-eastern mainland region of the Huon Peninsula). The exception to this is the area in northern Western Province, close to the border with Indonesia, which maintains moderate-to-high rainfall totals throughout the year. The dynamical processes driving rainfall appear to broadly coincide with the locations which experience landslide activity at this time of year, and in fact rainfall can be considered the dominant process affecting the spatial variability of landslides during this season. However, while it is possible to identify potential zones where mesoscale features may induce rainfall more regularly and, by extension, trigger landslides during this season, actual locations of landslides cannot be determined due to the large degree of variability inherent to this season. By comparison, the large spread of landslide-triggering events across PNG during MAM is associated with the widespread dominance of deep convection induced by the north-westerly monsoon. Both the central cordillera and areas of lower elevation are affected by landsliding during the wetter season, due to the increase in water availability and the interaction of this water with a larger number of potentially susceptible slopes. During MAM, therefore, rainfall is the less dominant process determining the spatial variability of landsliding, as the vast majority of the region experiences high rainfall accumulations during this time. It is likely, therefore, that underlying landslide susceptibility is the more dominant process determining the spatial distribution of landslides during this season.
4 Discussion
In this study, the methods used to generate a spatial and temporal landslide inventory for the sparse-data region of PNG have been outlined, and the occurrence of landslide-triggering events between 1970 and 2013 has been examined. The development of landslide inventories is frequently challenging due to the nature of landslide events, as outlined at the start of this article. It is fully recognised that the newly developed PNG landslide database underestimates the true number of landslides which occur in PNG and that, although a number of techniques have been used to reduce spatial and temporal uncertainty, error levels remain quite large. For example, the uncertainty around the true numbers of landslides associated with entries in the database is illustrated in Fig. 11. As only those entries where dates and locations could be identified with reasonable accuracy were included in the database, many individual landslide deposits associated with a specific triggering event had insufficient attribute information (i.e. identifiable spatial references) to be entered individually into the inventory. Much of the uncertainty identified in Fig. 11 is linked to the type of landslide-triggering event.
Frequently, earthquake events which resulted in landslides had sufficient information to identify an area where the majority of landslides associated with the earthquake occurred. There was, however, insufficient information to provide entries for individual landslides triggered by the earthquake, unless they were of particular size or had specific, noteworthy socio-economic impacts (e.g. the Bairaman Landslide in New Britain, May 1985; King and Loveday, 1985). In these instances, a single database entry represents all the individual landslides which were triggered by the earthquake. The findings from the database indicate that earthquakes and flooding generate the greatest uncertainty with regard to the "true" numbers of landslides triggered, while individual landslides associated with mining and rainfall events (which did not result in flooding) are generally better documented. This is largely due to landslides being categorized as secondary hazards to earthquakes or flooding, which are the primary hazards; in such cases the spatial and temporal information related to landslides is very poor.
As noted in Sect. 2.1, the timing of many landslides is uncertain, particularly where there are no eyewitness accounts of the event. To capture this uncertainty and constrain the landslide event for comparison with the rainfall climatology data, the time of initiation was approximated using a minimum and a maximum date boundary relating to the earliest and latest dates within which the landslide event occurred. These dates generally relate to the start and end dates of potential landslide-triggering events, such as a flood event. Landslides were then grouped by month (Figs. 5 and 6) or season (Figs. 7 and 10) using the end-date information. This has the potential to introduce errors where landslides are assigned to a month or season which does not correspond to their time of actual initiation. In turn, this could mean that a landslide-triggering event is not compared against the correct rainfall climatology data and that patterns of activity associated with a specific season may be over- or under-represented, based on the bias introduced by using this metric. Although this cannot be ruled out completely, it has been possible to determine that 80 % of all landslide entries in the database are constrained within a 10 day period or less. This means that the vast majority of landslides were initiated over a defined 10 day (or shorter) period, and we can therefore be confident that these events are assigned to the correct 3- or 6-monthly season in the majority of cases. There is slightly more uncertainty for events assigned to monthly timescales because, where 10 day periods cross from one month into the next, the later month is always used. Towards the end of the wetter season, the number of flood-associated trigger mechanisms tends to increase and the number of days between the minimum and maximum date boundaries also tends to increase. We believe that this helps to explain the second peak in rainfall-associated landslide-triggering events observed in May (Fig. 5), a month more traditionally seen as a period of transition as the north-westerly monsoon wanes and the south-easterly trade flows become more dominant. In spite of the limitations described above, we believe that this new, national-scale landslide inventory accurately captures those high-impact landslides which contribute to the majority of landslide fatalities and damage. As these are the types of landslide events which we would ideally like to mitigate against in the future, understanding how, when and where these events occur across space and time is very valuable. The findings illustrate that these landslides are strongly controlled by the annual north-westerly monsoon cycle and that during different phases of the seasonal cycle landslides are potentially triggered by very different magnitudes of rainfall (Fig. 6). Future research will aim to assess the long-term trends in landslide activity at a regional scale and to assess how these changes are linked to changes in the climate, the strength of the monsoon cycle and ENSO. In order to do this effectively, continued development of the database and a more systematic approach to landslide recording are essential, so that this type of analysis can be extended.
5 Conclusion
Regional-scale landslide inventories offer a greater understanding of the temporal and spatial distribution of landslide events, their characteristics and their triggers. In this study we have constructed the first regional-scale landslide inventory for PNG, bringing together a range of existing and new datasets to form a single, commonly formatted database of landslide entries. Whilst the challenges involved in the development of the database have been described in detail, we believe that the database constitutes a significant advance in knowledge and data that can be used by researchers and planners alike. Analyses of the newly collated landslide inventory demonstrate how this information can be used to understand regional-scale spatial and temporal variability and the relationships between landslides and different trigger factors:

- Rainfall and the various combinations of triggers associated with it account for the majority (∼61 %) of all medium to large landslide-triggering events in the PNG inventory.
- There is also a strong climatic control on the landslide-triggering events, with greater numbers being observed between December and March, with a second peak in May, and fewer observed between June and October. This relates closely to the periods dominated by north-westerly monsoon flows and south-easterly trade flows, respectively.
- The majority of landslides initiated in the drier season are linked to extreme rainfall (∼61 %), while landslides initiated during the wetter season can be triggered during months with lower absolute rainfall totals.
- In addition, there is large year-to-year variability in the annual occurrence of landslide events, and this can be linked to different phases of ENSO. Landslide-triggering events continue to occur throughout north-westerly monsoon seasons in all phases of ENSO, but fewer are observed during drier-season months of El Niño phases than during either La Niña or ENSO-neutral phases.
- The spatial distribution of landslide-triggering events is primarily determined by a combination of relief, precipitation and population density.
The information collected and analysed in this study contributes to the first countrywide assessment of landslides which result in fatalities and significant damage. Based on this analysis, landslide hazard hotspots and relationships between landslide occurrence and rainfall climatology can be identified. This information can prove important and valuable in the assessment of trends and future behaviour, which can be useful for policy makers and planners.
Table 1. Critical and relevant information obtained for each landslide entry in the new PNG landslide inventory.
Table 4. La Niña and El Niño years identified from bimonthly MEI ranks (based on data sourced in 2012). | 2018-12-12T05:25:21.252Z | 2015-08-17T00:00:00.000 {
"year": 2015,
"sha1": "227488ba098380309d34bbcf1c89ed0a8beae81a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5194/nhessd-3-4871-2015",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "227488ba098380309d34bbcf1c89ed0a8beae81a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
251295104 | pes2o/s2orc | v3-fos-license | Clinical Manifestation, Management, and Outcomes in Patients with COVID-19 Vaccine-Induced Acute Encephalitis: Two Case Reports and a Literature Review
Introduction: Vaccination is one of the best strategies to control coronavirus disease 2019 (COVID-19), and multiple vaccines have been introduced. A variety of neurological adverse effects have been noted after the implementation of large-scale vaccination programs. Methods: We reported two rare cases of possible mRNA-1273 vaccine-induced acute encephalitis, including clinical manifestations, laboratory characteristics, and management. Results: The clinical manifestations might be related to hyperproduction of systemic and cerebrospinal fluid (CSF) cytokines. mRNA vaccines are comprised of nucleoside-modified severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mRNA, which is translated into SARS-CoV-2 spike protein by the host’s ribosomes, activating the adaptive immune response. Exposed mRNA or vaccine components may also be detected as antigens, further resulting in aberrant proinflammatory cytokine cascades and activation of immune signaling pathways. Both patients exhibited significant clinical improvement after a course of steroid therapy. Conclusions: The use of COVID-19 vaccines to prevent and control SARS-CoV-2 infections and complications is the most practicable policy worldwide. However, inaccurate diagnosis or other diagnostic delays in cases of vaccine-induced acute encephalitis may have devastating and potentially life-threatening consequences for patients. Early diagnosis and timely treatment can result in a favorable prognosis.
Introduction
Vaccinating as many individuals as possible against coronavirus disease 2019 (COVID-19) is one of the most effective strategies for controlling the pandemic. In addition to complications like myocarditis and pericarditis, other neurological adverse effects have been reported following vaccination with messenger RNA (mRNA) vaccines, including the BNT162b2 vaccine (Comirnaty® (New York, NY, USA); Pfizer-BioNTech (Mainz, Germany)) and the mRNA-1273 vaccine (SPIKEVAX™; Moderna; Cambridge, MA, USA). We reported two rare cases of possible mRNA vaccine-induced acute encephalitis, both of whom had a favorable prognosis after steroid therapy. We speculate that the excessive innate immune response is due to cytokine storms triggered by vaccination. In certain individuals, vaccine components may be detected as antigens, triggering aberrant proinflammatory cytokine cascades and activation of immune signaling pathways, resulting in inflammatory symptoms and secondary organ damage [1][2][3][4][5][6][7]. While vaccination provides substantial benefits and a means to eventually control the COVID-19 pandemic, clinicians should also be aware of the potential for vaccine-induced severe neurological complications.
Case 1
A healthy 58-year-old woman was admitted due to acute delirium 7 days after receiving the mRNA-1273 vaccination (SPIKEVAX™). Prior to the recent vaccination, she had also received two doses of the Vaxzevria® (ChAdOx1 nCoV-19; AstraZeneca (Cambridge, UK)) vaccine 11 and 27 weeks before, without experiencing significant adverse effects. She had no history of neurological disorders. Physical examination revealed a low-grade fever (38 °C), cognitive deficits, left deviation of the head and eyeballs, and mild weakness of the right upper limb. Laboratory results, including complete blood cell counts, blood sugar levels, electrolyte levels, liver function tests, kidney function tests, and urinalysis, were normal (Table 1). A real-time reverse-transcription polymerase chain reaction (RT-PCR) for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was negative. Chest X-ray and brain computed tomography showed no obvious abnormalities. A lumbar puncture was performed, and the cerebrospinal fluid (CSF) was analyzed. Laboratory tests revealed lymphocyte-predominant pleocytosis (white blood cell (WBC) count: 40/µL, 59% lymphocytes), an elevated protein level of 82.9 (reference range: 15-45) mg/dL, an elevated CSF/serum albumin ratio of 19.7 (reference range: 5-8) × 10⁻³, a normal glucose level of 61.72 (reference range: 40-70) mg/dL, and a normal immunoglobulin G (IgG) index of 0.32 (reference range: 0.0-0.7). The patient was initially diagnosed with encephalitis based on the following clinical criteria: (1) altered mental status lasting ≥24 h with no alternative cause, (2) documented fever within 72 h, (3) new onset of focal neurological findings, and (4) laboratory results (CSF WBC count ≥ 5/µL) [8]. Intravenous empiric antibiotics and antiviral drugs, including ceftriaxone, vancomycin, and acyclovir, were initiated to treat acute encephalitis of unknown cause. After 2 days of treatment, the patient's symptoms persisted without any improvement. CSF microbiological tests were negative for herpes simplex virus-1 (HSV-1), HSV-2, tuberculosis (TB), and bacterial culture; the venereal disease research laboratory (VDRL) test was also negative. Additionally, the patient's influenza A and B nasal swab PCR tests, CSF cytological examination, and autoimmune encephalitis panel were negative (Table 2). Moreover, the patient's blood tests for common pathogens and auto-antibodies (blood culture, virus serology, rheumatoid factor, antinuclear antibody (ANA), antithyroid peroxidase antibody, antimitochondrial antibody, etc.) were all negative (Table 3). Brain magnetic resonance imaging (MRI) with contrast showed unremarkable findings. Finally, a diagnosis of COVID-19 vaccine-induced acute encephalitis was made. Dexamethasone (40 mg per day) was added on the 3rd day, and the patient exhibited a dramatic improvement the next day. She regained normal cognitive function and displayed no further neurological impairment. We maintained treatment with intravenous steroids and gradually halved the dosage every 3 days. The patient was uneventfully discharged on the 13th day.
Case 2
A 21-year-old male was admitted to the Emergency Department due to coma approximately one week after receiving the mRNA-1273 (SPIKEVAX™) vaccination. The patient had no history of seizures, and the family history was unremarkable. RT-PCR results for SARS-CoV-2 were negative. Complete blood counts and electrolytes were normal (Table 4). Chest X-ray, brain computed tomography, and electrocardiography showed no obvious abnormalities. He experienced an episode of status epilepticus in the Emergency Department and was transferred to the intensive care unit (ICU) for further management. A lumbar puncture was performed. CSF analysis revealed no pleocytosis, an elevated protein level of 65.5 (normal range: 15-45) mg/dL, and an elevated microalbumin level of 37 (normal range: <6.5) pg/dL (Table 5). Although brain MRI with contrast was unremarkable, electroencephalography (Supplementary Figure S1) revealed continuous diffuse slowing in the theta and delta ranges, indicating moderate diffuse cerebral dysfunction (3rd hospital day). A cerebral perfusion scan with single-photon emission computed tomography (SPECT) indicated hypoperfusion in the right temporal region (Supplementary Figure S2), compatible with the probable seizure origin. The patient was also diagnosed with encephalitis based on the following clinical criteria: (1) altered mental status lasting ≥24 h with no alternative cause, (2) generalized seizures not fully attributable to a pre-existing seizure disorder, (3) abnormal electroencephalography results, and (4) abnormal neuroimaging of the brain parenchyma [8]. The test results for HSV, VDRL, TB, and other bacterial and fungal cultures of the CSF were all negative (Table 5). Similar to Case 1, the results of Case 2's blood tests for common pathogens and auto-antibodies were also negative (Table 6). The autoimmune antibody tests for limbic encephalitis (anti-NMDAR, anti-AMPAR1, anti-AMPAR2, anti-GABABR, anti-LGI1, anti-CASPR2) were also negative. A final diagnosis of COVID-19 vaccine-induced encephalitis complicated by seizures was made. Abbreviations: anti-NMDAR: anti-N-methyl-D-aspartate receptor; anti-AMPAR: anti-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor; anti-GABABR: anti-gamma-aminobutyric acid-B receptor; anti-LGI1: anti-leucine-rich glioma inactivated-1; and anti-CASPR2: anti-contactin-associated protein-like 2.
To control status epilepticus and thereby prevent oxidative stress and maintain cellular homeostasis, intravenous levetiracetam and valproate sodium were administered. His seizures persisted in the ICU, and pulse corticosteroid therapy was initiated on the 6th day of hospitalization with 1000 mg of intravenous methylprednisolone. We gradually halved the dosage every 3 days during the total 21-day hospital stay (14-day ICU stay). The patient's clinical condition improved significantly after steroid administration. He was seizure-free during the rest of the hospital stay as well as at a 3-month outpatient department (OPD) follow-up.
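Both cases followed the same tapering rule, halving the corticosteroid dose every 3 days; a minimal sketch of the resulting schedule, starting from the 1000 mg methylprednisolone pulse in Case 2, is shown below. The stopping threshold used here is an assumption added purely for illustration and is not part of the reported regimen.

def taper_schedule(start_mg, halve_every=3, stop_below=40.0):
    # Halve the daily dose every `halve_every` days until it falls
    # below an (assumed) stopping threshold.
    day, dose, schedule = 1, start_mg, []
    while dose >= stop_below:
        schedule.append((day, day + halve_every - 1, dose))
        day += halve_every
        dose /= 2
    return schedule

for first, last, dose in taper_schedule(1000):
    print(f"days {first}-{last}: {dose:g} mg/day")

This prints steps of 1000, 500, 250, 125 and 62.5 mg/day spanning roughly two weeks, consistent with the 21-day hospital stay described above.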
Discussion
Given the absence of evidence of infection and the dramatic improvement after receiving corticosteroid treatments in both cases, we assumed that an immune-mediated mechanism was responsible for the presentation of acute encephalitis in both patients. In addition, both patients failed to meet the clinical diagnostic criteria for paraneoplastic or autoimmune encephalitis [9]. Therefore, we believe that the COVID-19 vaccine is the only possible cause of acute encephalitis in our patients, given the temporal proximity of receiving the COVID-19 vaccine and the lack of other risk factors for encephalitis.
A variety of postvaccination neurological complications have been reported since the introduction of the COVID-19 vaccines, but the underlying pathological mechanism remains unclear [1][2][3][4][10][11][12][13][14][15][16][17][18][19][20][21][22][23]. In addition to vaccines for COVID-19, postvaccination encephalitis has also been reported in association with several other vaccines, including those for measles, yellow fever, and smallpox [24]. mRNA vaccines consist of nucleoside-modified SARS-CoV-2 mRNA, which is translated into the SARS-CoV-2 spike protein by the host's ribosomes, thus activating the adaptive immune response. However, exposed mRNA or vaccine components may be detected as antigens in certain individuals, triggering aberrant proinflammatory cytokine cascades and activation of immune signaling pathways [1][2][3][4][5][6]. These responses may result in elevated levels of circulating cytokines, inflammatory symptoms, and secondary organ damage. The underlying pathophysiology of cytokine-related neurotoxicity may resemble immune effector cell-associated neurotoxicity syndrome [25]. Furthermore, the spike protein alone can disrupt the blood-brain barrier (BBB), increasing BBB permeability and potentially allowing overproduced inflammatory substances to enter the central nervous system. The elevation of the CSF/serum albumin ratio in both patients indicated impairment of the BBB, possibly due to disruption of cerebrovascular endothelial cells by the spike protein [26][27][28]. This brief report does not challenge the benefits of vaccination, but it does suggest caution and may help guide management and inform prognosis in such patients. Larger epidemiological studies or meta-analyses are needed to understand the underlying mechanisms of postvaccination encephalitis. Presently, the benefits of COVID-19 vaccination outweigh any potential risks. An aberrant innate immune response in these two patients may explain this phenomenon, but further studies are needed to clarify the pathophysiology. We also reviewed the literature and compared clinical manifestations, management, and outcomes in patients with COVID-19 mRNA vaccine-induced acute encephalopathies (Table 7). In summary, COVID-19 vaccinations generate antigens that may be recognized as potential pathogens by pattern-recognition receptors on resident stromal cells and circulating immune cells. Induction and transcription of specific genes may ensue, triggering the synthesis and release of pyrogenic cytokines, including interleukin (IL)-1, IL-6, tumor necrosis factor-alpha (TNF-α), and prostaglandin-E2, into the bloodstream, mimicking the response to natural infection. This cytokine-mediated inflammatory process is proposed to be the key pathophysiological mechanism underlying COVID-19 vaccine-related encephalitis [1][2][3][4][5][6][7].
Conclusions
COVID-19 vaccine-induced acute encephalitis is rare but may occur in clinical practice. This condition is characterized by activation of the immune response, triggering cytokine storm-mediated inflammation; misdiagnosis or delayed diagnosis may lead to fatal complications. Appropriate corticosteroid administration may be an effective treatment in these patients [1][2][3][4][13][14][15][16][17].
Institutional Review Board Statement: Due to the nature of the case report, no ethical approval was required.
Informed Consent Statement: Written informed consent was obtained from the patients for the publication of this report.
Data Availability Statement: The data underlying this article will be shared by the corresponding author upon reasonable request. | 2022-08-04T15:11:55.414Z | 2022-07-31T00:00:00.000 | {
"year": 2022,
"sha1": "f88eb68845bbabb52bafa1aa8218e3e333646378",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/10/8/1230/pdf?version=1659681992",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a06eaaf0e1568075a3332e6d633b84ad585c16e0",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253259479 | pes2o/s2orc | v3-fos-license | Tripartite motif-containing protein 46 accelerates influenza A H7N9 virus infection by promoting K48-linked ubiquitination of TBK1
Background Avian influenza A H7N9 emerged in 2013, threatening public health and causing acute respiratory distress syndrome, and even death, in the human population. However, the underlying mechanism by which H7N9 virus causes human infection remains elusive. Methods Herein, we infected A549 cells with H7N9 virus for different durations and assessed tripartite motif-containing protein 46 (TRIM46) expression. To determine the role of TRIM46 in H7N9 infection, we applied lentivirus-based TRIM46 short hairpin RNA sequences and overexpression plasmids to explore virus replication, and changes in type I interferons and interferon regulatory factor 3 (IRF3) phosphorylation levels, in response to silencing and overexpression of TRIM46. Finally, we used co-immunoprecipitation and ubiquitination assays to examine the mechanism by which TRIM46 mediated the activity of TANK-binding kinase 1 (TBK1). Results Type I interferons play an important role in defending against virus infection. Here, we found that TRIM46 levels were significantly increased during H7N9 virus infection. Furthermore, TRIM46 knockdown inhibited H7N9 virus replication compared to that in the control group, while the production of type I interferons increased. Meanwhile, overexpression of TRIM46 promoted H7N9 virus replication and decreased the production of type I interferons. In addition, the level of phosphorylated IRF3, an important interferon regulatory factor, was increased in TRIM46-silenced cells, but decreased in TRIM46-overexpressing cells. Mechanistically, we observed that TRIM46 could interact with TBK1 to induce its K48-linked ubiquitination, which promoted H7N9 virus infection. Conclusion Our results suggest that TRIM46 negatively regulates the human innate immune response against H7N9 virus infection. Supplementary Information The online version contains supplementary material available at 10.1186/s12985-022-01907-x.
Background
The avian influenza H7N9 virus, an influenza A virus, belongs to the Orthomyxoviridae family. H7N9 virus emerged in China in 2013 and posed a threat to public health [1][2][3]. To date, H7N9 influenza viruses have caused over 1500 human infections, with a mortality rate of nearly 40%. A number of previous studies have offered valuable information on the pathogenesis, prevention and control of the H7N9 virus [4][5][6][7]. Most patients infected with H7N9 developed acute respiratory distress syndrome (ARDS) and severe pneumonia, caused by a sharp increase in the expression levels of cytokines and chemokines [2,8]. Wan et al. [9] found that a 'cytokine storm' in the lungs of H7N9-infected patients was associated with activation of gasdermin E (GSDME)-mediated pyroptosis in alveolar epithelial cells. However, the host factors involved in viral replication remain elusive. Thus, a better understanding of the regulatory mechanisms of H7N9 infection would be useful to combat future H7N9 virus outbreaks.
The first line of defense against invading pathogens is the innate immune response. Pattern recognition receptors (PRRs) recognize pathogen-associated molecular patterns, which subsequently activates the downstream innate immune response [10,11]. During influenza virus infection, the RIG-I (retinoic acid inducible gene I) receptor senses influenza genomic RNA and recruits the mitochondrial antiviral signaling protein (MAVS) and TANK-binding kinase 1 (TBK1) to induce the phosphorylation, dimerization, and nuclear translocation of interferon regulatory factor 3 (IRF3), which finally induces the production of type I interferons [12][13][14].
The tripartite motif (TRIM) family of proteins has been intensively studied in virus infection. One member, TRIM46, can regulate cancer cell viability, apoptosis, and the cell cycle [15][16][17]. However, the function of TRIM46 in H7N9 infection and its underlying mechanism remain to be determined. In this study, we aimed to identify the function of TRIM46 in H7N9 virus infection and the underlying mechanism linking TRIM46 and the production of host RLR-dependent type I interferons. The results showed that, during H7N9 virus infection, TRIM46 acts as a negative regulator of the host innate immune response. Upon H7N9 virus infection, TRIM46 expression gradually increased over time. Furthermore, knockdown of TRIM46 resulted in increased production of type I interferons and phosphorylation of IRF3, whereas its overexpression had the opposite effects. Finally, we observed that TRIM46-mediated K48-linked ubiquitination of TBK1 resulted in the inhibition of host innate immunity. Thus, this study reveals novel activities of TRIM46 in innate immunity and advances the study of innate immunity against virus infection.
All cell lines were cultured in a 37 °C incubator with an atmosphere of 5% CO2. Ten-day-old embryonated specific-pathogen-free chicken eggs were used to isolate and propagate Influenza A Virus strain A/Zhejiang/DTID-ZJU01/2013(H7N9). The allantoic fluid from the infected chicken eggs was collected and preserved at −80 °C. The median tissue culture infectious dose (TCID50) method was used to determine the virus titer, which was calculated using the Reed-Muench method. All the live H7N9 virus experiments were performed in a bio-safety level 3 laboratory at the First Affiliated Hospital, Zhejiang University School of Medicine (Registration No. CNAS BL0022).
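Since the titration above relies on the Reed-Muench method, a minimal sketch of the calculation may be useful; the dilution series and infected-well counts below are invented illustration values, not data from this study.

def reed_muench_tcid50(log10_dilutions, infected, total):
    # Cumulative infected are summed from the most dilute row upward,
    # cumulative uninfected from the least dilute row downward.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf, run = [], 0
    for inf, tot in zip(infected, total):
        run += tot - inf
        cum_uninf.append(run)
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    # Interpolate between the two dilutions bracketing 50%.
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
            return log10_dilutions[i] + prop * (log10_dilutions[i + 1] - log10_dilutions[i])
    raise ValueError("50% endpoint not bracketed by the dilution series")

log_ep = reed_muench_tcid50([-4, -5, -6, -7], infected=[8, 6, 2, 0], total=[8, 8, 8, 8])
print(f"50% endpoint dilution = 10^{log_ep:.2f}, i.e., titer = 10^{-log_ep:.2f} TCID50 per inoculum")

With these example counts the endpoint falls at a 10^-5.5 dilution, giving a titer of 10^5.5 TCID50 per inoculation volume.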
Lentivirus-mediated plasmid transfection
For TRIM46 knockdown, two short hairpin RNA (shRNA) sequences were designed against two different regions of TRIM46 (the target sequence of TRIM46#1 was 5'-GCT GCT GAC AGA GCT TAA CTT-3', and the target sequence of TRIM46#2 was 5'-CTG GCA CTA TAC CGT TGA GTT-3') and cloned and packaged into lentiviruses. A TRIM46 overexpression construct was also created and cloned and packaged into lentiviruses. The TRIM46 shRNA and overexpression lentiviruses were transfected separately into A549 cells for 72 h. The transfection efficiency was observed using a fluorescence microscope (Olympus, Tokyo, Japan).
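As a quick, hypothetical sanity check on the two target sequences quoted above, their length and GC content can be verified as follows; the 30-60% GC window is a common design rule of thumb assumed here, not a criterion stated by the authors.

TARGETS = {
    "TRIM46#1": "GCTGCTGACAGAGCTTAACTT",
    "TRIM46#2": "CTGGCACTATACCGTTGAGTT",
}

for name, seq in TARGETS.items():
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: {len(seq)} nt, GC = {gc:.0%}, within 30-60% window: {0.30 <= gc <= 0.60}")

Both targets come out at 21 nt with a GC content near 48%, inside the assumed window.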
Western blotting analysis
Cells were harvested and lysed for 30 min in radioimmunoprecipitation assay (RIPA) buffer with phenylmethylsulfonyl fluoride (PMSF) and phosphatase inhibitors. The lysed cells were then subjected to centrifugation for 10 min at 12,000 rpm and 4 °C. We retained the supernatants and determined their protein contents using a bicinchoninic acid protein assay. Equal amounts of proteins were subjected to 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis separation. The separated proteins were electrotransferred onto a polyvinyl difluoride membrane. Non-specific binding was blocked by incubating the membranes in 5% skim milk in Tris-buffered saline-Tween 20 (TBST) at room temperature for 1 h. The membranes were then incubated with the appropriate primary antibodies overnight at 4 °C. The next day, three washes with TBST were carried out, and the membranes were then incubated with the corresponding horseradish peroxidase (HRP)-conjugated secondary antibodies. An enhanced chemiluminescence (ECL) reagent was used to visualize the immunoreactive proteins. Primary antibodies against TRIM46 (ab169044), Influenza A nucleoprotein (NP) (ab128193), Myc tag (ab9106), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (ab8245) were purchased from Abcam (Cambridge, UK). Primary antibodies against phosphorylated (p)-IRF3 (#4947) and IRF3 (#4302) were provided by Cell Signaling Technology, Inc. (Danvers, MA, USA). Sigma-Aldrich (Darmstadt, Germany) provided the anti-FLAG tag antibody.
Quantitative real-time reverse transcription PCR (qRT-PCR)
The TRIzol reagent (Invitrogen, Waltham, MA, USA) was used to extract total RNA from cells. Reverse transcription was then used to produce cDNA from the total RNA.
For influenza virus replication, NP RNAs were reverse-transcribed with the following primers: NP mRNA using oligo (dT), NP cRNA using 5'-AGT AGA AAC AAG G-3', and NP vRNA using 5'-AGC GAA AGC AGG-3'. The cDNA was then quantified using quantitative real-time PCR with gene-specific primers. GAPDH mRNA was quantified as an internal control, and the 2^-ΔΔCt method was used to analyze the relative quantity of the target genes. The primers used in this study were as follows:
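As an illustration of the 2^-ΔΔCt method applied above, with GAPDH as the internal control, a minimal sketch might look as follows; the Ct values are invented for illustration and are not data from the study.

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref            # normalise to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # relative to the control group
    return 2 ** (-dd_ct)

# e.g., an infected sample versus a mock-treated control:
fold = rel_expression(ct_target=24.0, ct_ref=18.0, ct_target_ctrl=26.5, ct_ref_ctrl=18.2)
print(f"relative expression = {fold:.2f}-fold")  # ~4.9-fold with these numbers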
Co-immunoprecipitation (Co-IP)
The indicated plasmids were transfected into HEK293T cells. The cells were collected and lysed at 4 °C for 30 min in IP lysis buffer (1% NP-40, 0.025 M Tris-HCl, 0.15 M NaCl, 1 mM EDTA, 5% glycerol) supplemented with phosphatase inhibitor/PMSF, followed by centrifugation for 10 min at 12,000 rpm at 4 °C. We retained the supernatants; one third was used for input analysis, and the other two thirds were incubated with anti-Myc magnetic beads (Pierce #88842, Thermo Fisher Scientific, Waltham, MA, USA), or an IgG control, overnight at 4 °C for IP analysis. IP lysis buffer was then used to wash the precipitates three times, followed by boiling the samples in 2× loading buffer. Western blotting was then used to analyze the precipitates using the indicated primary antibodies, followed by incubation with HRP-conjugated anti-rabbit IgG (conformation specific) (#5127, Cell Signaling Technology) or anti-mouse IgG (light chain specific) (#58802, Cell Signaling Technology) secondary antibodies. The immunoreactive proteins were visualized using the ECL reagent.
Ubiquitination assay
We
Statistical analysis
Data analysis and processing were carried out using GraphPad Prism software version 7 (GraphPad Inc., La Jolla, CA, USA). The statistical difference between two groups was analyzed using an unpaired Student's t-test and one-way analysis of variance (ANOVA) was carried out to analyze the differences among multiple groups. Statistical significance was indicated by p < 0.05. In all figures, * indicates p < 0.05, ** indicates p < 0.01, and *** indicates p < 0.001.
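For readers who prefer open tooling, the same two comparisons (an unpaired t-test for two groups and a one-way ANOVA for several) can be reproduced with SciPy; the study itself used GraphPad Prism, and the data arrays and significance-star helper below are illustrative assumptions.

from scipy import stats

group_a = [1.0, 1.2, 0.9, 1.1]
group_b = [2.1, 1.8, 2.4, 2.0]
group_c = [3.2, 2.9, 3.5, 3.1]

t_stat, p_t = stats.ttest_ind(group_a, group_b)            # unpaired t-test, two groups
f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)    # one-way ANOVA, multiple groups

def stars(p):
    # Mirror the paper's significance convention.
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"

print(f"t-test: p = {p_t:.4f} ({stars(p_t)})")
print(f"ANOVA:  p = {p_f:.4f} ({stars(p_f)})")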
TRIM46 expression is upregulated in H7N9 virus-infected A549 cells
First, to determine the relationship between H7N9 virus infection and TRIM46 expression, we infected the human lung adenocarcinoma epithelial cell line A549 with H7N9 virus for different durations and detected the TRIM46 protein level using western blotting and the TRIM46 mRNA expression level using qRT-PCR. The results showed that H7N9 virus induced the TRIM46 protein over time, reaching a peak at 48 h post infection (h.p.i.) (Fig. 1A). The mRNA level increased gradually and also peaked at 48 h.p.i. (Fig. 1B). These results indicated that H7N9 virus infection could induce TRIM46 expression.
TRIM46 knockdown inhibits H7N9 virus infection
To explore whether TRIM46 expression could regulate H7N9 virus infection, we used lentivirus-packaged TRIM46 shRNA#1 and TRIM46 shRNA#2 sequences to generate TRIM46-knockdown A549 cells. We then detected the protein and mRNA levels of TRIM46. The results showed that TRIM46 shRNA#1 and #2 both worked well to reduce the protein and mRNA levels of TRIM46 in A549 cells (Fig. 2A, B). A549 cells transfected with the negative control and TRIM46-knockdown cells were infected with H7N9 virus for 12 h, with mock-treated A549 cells used as a control group. The results showed that TRIM46 knockdown significantly decreased the expression of the H7N9 virus NP protein (Fig. 2C). The qRT-PCR showed decreased NP mRNA, cRNA and vRNA expression (Fig. 2D), and TCID50 detection of the virus titer showed that TRIM46 knockdown decreased the H7N9 virus titer compared with that in the negative control group (Fig. 2E). Taken together, the results showed that knockdown of TRIM46 reduced H7N9 virus replication.
TRIM46 overexpression promoted H7N9 virus replication
We further determined whether overexpression of TRIM46 could promote H7N9 virus replication. We transfected A549 cells with lentivirus-packaged empty vector or TRIM46 overexpression plasmids for 48 h, and then used western blotting and qRT-PCR to detect the protein and mRNA levels of TRIM46. The results showed successful overexpression of TRIM46 in A549 cells (Fig. 3A, B). Subsequently, we infected the A549 empty vector group and the TRIM46 overexpression group with H7N9 virus for 12 h, and then detected the NP protein, mRNA, cRNA and vRNA expression levels and the H7N9 virus titer. The results showed that overexpression of TRIM46 increased the H7N9 virus NP protein and RNA levels and increased the H7N9 virus titer compared with those of the empty vector group (Fig. 3C-E).
TRIM46 negatively regulates type I IFNs and the activation of IRF3
Type I IFNs play an important role in defending against virus infection. To examine the role of TRIM46 in the context of viral infection, we examined the expression levels of IFNA and IFNB1. Knockdown of TRIM46 increased IFNA and IFNB1 mRNA expression levels during influenza H7N9 infection (Fig. 4A, B); meanwhile, ectopic expression of TRIM46 decreased IFNA and IFNB1 mRNA levels (Fig. 4D, E). IRF3 activation is essential for the production of type I IFNs during virus infection. Therefore, we determined the level of phosphorylated IRF3 in influenza H7N9-infected cells. The results showed that TRIM46 knockdown increased the level of phosphorylated IRF3 and overexpression of TRIM46 decreased the level of phosphorylated IRF3 (Fig. 4C, F).
Fig. 3 Overexpression of TRIM46 promotes H7N9/ZJU-1 infection. A. A549 cells were transfected with lentivirus-mediated TRIM46-Myc plasmids or empty vector plasmids; after 72 h of transfection, the cells were harvested and subjected to western blotting for TRIM46 overexpression analysis, with GAPDH used as an internal control. B. A549 cells were transfected with lentivirus-mediated TRIM46-Myc plasmids or empty vector plasmids for 72 h, then harvested, and the relative levels of TRIM46 mRNA were analyzed by RT-qPCR. C. A549 cells were transfected with empty vector plasmids or TRIM46-Myc plasmids; after 72 h, the cells were infected with H7N9/ZJU-1 (MOI = 1) or mock-treated for 12 h, and the cell lysates were collected and subjected to western blotting with the indicated antibodies. D. TRIM46-Myc-overexpressing A549 cells or empty vector-transfected A549 cells were infected with H7N9/ZJU-1 (MOI = 1) for 12 h, and the relative levels of NP mRNA, cRNA and vRNA were analyzed by RT-qPCR. E. A549 cells were transfected with lentivirus-mediated TRIM46-Myc plasmids or empty vector plasmids; after 72 h of transfection, the cells were infected with H7N9/ZJU-1 for 12 h, the supernatant was collected, and the viral titers were determined by the TCID50 method. The results are presented as mean ± SD; in all situations, a p value < 0.05 was considered statistically significant, *p < 0.05, **p < 0.01, ***p < 0.001
TRIM46 interacts with TBK1
To investigate the interaction between TRIM46 and TBK1, TRIM46-Myc overexpression plasmids, together with TBK1-Flag or empty vector, were transfected into HEK293T cells for 24 h, followed by Co-IP. As expected, the results showed that TRIM46-Myc interacted with TBK1-Flag (Fig. 5, Additional file 1: Fig. 1A). These results suggest that TRIM46 interacts with TBK1 to regulate the innate immune response.
Discussion
A number of studies have proposed that influenza virus can use multiple host cellular components to replicate in and infect host cells. Moreover, influenza virus has evolved to utilize host factors to inhibit the host innate immune response, thereby evading immune surveillance and eradication [18][19][20][21][22]. For example, the influenza virus NS1 protein, which plays multiple roles at the interface between influenza virus and host innate immune responses, inhibits MAVS/IKK-mediated interferon production [23,24]. Type I interferons play important roles in defending against virus replication, and virus infection induces a series of cellular antiviral signals to produce type I interferons [25,26]. Screening for and identifying the host protein regulators involved in the innate immune response against viruses would help identify therapeutic targets for manipulating cellular antiviral responses.
In the present study, we found that H7N9 virus-induced TRIM46 negatively regulated the production of type I IFNs by regulating the phosphorylation of IRF3. Furthermore, we discovered that TRIM46 interacts with TBK1, leading to TBK1 degradation via K48-linked ubiquitination. Our results suggest a novel function of TRIM46 in H7N9 virus infection. TRIM proteins, belonging to the ubiquitin E3 ligase family, participate in regulating the host innate immune response against virus infection. A number of TRIM family proteins, such as TRIM22, TRIM25, TRIM35, and TRIM56, have been found to be involved in the replication or pathogenesis of influenza virus [27][28][29][30]. TRIM proteins function as positive or negative regulators in host innate immune signaling pathways by mediating the ubiquitination of signaling proteins [27][28][29][30][31]. For example, TRIM21 interacts with MAVS and catalyzes its K27-linked poly-ubiquitination to promote the innate immune response against RNA viruses. By contrast, another TRIM family protein, TRIM29, inhibits host innate immunity by inducing K11-linked ubiquitination of MAVS [32,33]. Our study demonstrated that TRIM46 promotes H7N9 virus infection by mediating the K48-linked ubiquitination of TBK1, which leads to TBK1 degradation, thus inhibiting innate immunity.
During virus infection, viral RNA is sensed by PRRs, which include RIG-I-like receptors (RLRs), NOD-like receptors (NLRs), and Toll-like receptors (TLRs) [34][35][36][37]. After influenza virus infection, influenza viral RNA is recognized by the RIG-I receptor, which activates and recruits the downstream TBK1/IKKγ/IKKε complex to induce IRF3 signaling, resulting in the production of type I interferons [38,39]. Notably, viruses have evolved multiple strategies to escape host innate immune surveillance and elimination, among which TBK1 is a target for virus-induced degradation. For instance, the SARS-CoV-2 membrane protein inhibits the production of type I interferons through induction of K48-linked ubiquitination of TBK1, which subsequently impairs IRF3 phosphorylation and dimerization [40]. Ubiquitin-conjugating enzyme 2S can interact with TBK1 and recruit USP15 to remove the K63-linked poly-ubiquitin chains of TBK1 [41]. Phosphatase PP4 dephosphorylates and deactivates TBK1 to inhibit the production of type I interferons [42]. The ubiquitin-proteasome pathway plays an important role in protein degradation, and ubiquitination of TBK1 is an important means of modulating the production of type I interferons during virus infection, a process in which both viral and host proteins participate [43][44][45]. In the present study, we found that TRIM46 could promote K48-linked ubiquitination of TBK1, which inhibited the phosphorylation of IRF3 and decreased the production of type I interferons.
Conclusions
Taken together, the results of the present study show that H7N9-induced TRIM46 negatively regulates the production of type I interferons by inhibiting IRF3 phosphorylation through K48-linked ubiquitination of TBK1 (Fig. 7). This study highlights an underlying mechanism by which H7N9 virus escapes the host innate immune response, which might lead to the development of novel antiviral agents to prevent or treat H7N9 virus infection.
Additional file 1. Supplementary Fig 1. TRIM46 reduces TBK1 expression and interacts with TBK1 in HEK293T cells. (A) HEK293T cells were transfected with 1 μg TBK1-Flag plasmids with 0 μg (-), 1 μg (+) or 3 μg (++) TRIM46-Myc plasmids for 24 h; cells were lysed and subjected to western blotting to detect the expression of the Flag tag and Myc tag, with GAPDH used as an internal control. (B) HEK293T cells were transfected with TRIM46-Myc plasmids together with RIG-I-Flag, MAVS-Flag, TRAF3-Flag, TBK1-Flag or IRF3-Flag plasmids for 24 h. After transfection, cells were lysed, immunoprecipitated with anti-Flag antibody, and subjected to western blotting to detect the Flag and Myc tags. Input was detected and shown as Flag tag, Myc tag and GAPDH. | 2022-11-03T18:07:31.327Z | 2022-11-03T00:00:00.000 | {
"year": 2022,
"sha1": "04a4d3633001b4d838563bff31398815707a03ca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "6aae6fdf891ade8903d54af5b67af691965dd32c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242778493 | pes2o/s2orc | v3-fos-license | WHEN THE BABIES LOSE THEIR MOTHERS DUE TO PREMATURE DEATH
Maternal mortality: when the babies lose their mothers due to premature death. ABSTRACT Objective: Identify in the published articles the main factors that cause premature maternal mortality in pregnant women or when they give birth. Method: This is an exploratory descriptive study based on a search for scientific publications on the causes of maternal death. The study took place from the beginning of August 2019 to September 2020. Platforms such as SCIELO, LILACS and BVS were used, and the selection criteria for the research bibliography were the eligibility and ineligibility of articles, theses and monographs. Thus, 20 studies on the subject "Maternal mortality, when the babies lose their mothers due to premature death" were selected for analysis. Results: Maternal death in Brazil is directly linked to the living conditions of the Brazilian female population, mainly to socioeconomic factors and the lack of commitment to basic social assistance and preventive health care. Among the measures to reduce critical cases of maternal mortality are specialized and humanized care, as well as accurate diagnoses, so that care can then be sought and a worsening of the number of deaths avoided. Thus, it is necessary to look more carefully at this critical health scenario, as hundreds of cases could and still can be avoided, provided that more attention is given to women through preventive health care.
INTRODUCTION
The World Health Organization (WHO) defines maternal mortality as the death of a woman during pregnancy or within 42 days after the end of the pregnancy, due to any cause related to or aggravated by the pregnancy. Martins and Silva (2017) report that maternal mortality is one of the indicators of the health discrepancy between developed, developing and underdeveloped countries, and this discrepancy between developed and developing countries has been maintained. In developed countries, the lowest RMM rates were found, with 14 deaths for every 100,000 live births. In turn, in developing countries the RMM was 290 deaths for every 100,000 live births, with the majority of cases concentrated on the continents of Africa and Asia. It should be noted that in Angola, in Africa, one maternal death occurs for every 29 pregnancies, one of the highest maternal mortality rates in the world. Morse et al. (2011) report that in Brazil the main causes of maternal death, regardless of the region, are hypertensive diseases and hemorrhage, which alternate positions in some states, without the determinants of maternal deaths having been analyzed. Martins and Silva (2017) add that the direct obstetric causes are related to complications in pregnancy, childbirth or the puerperium due to inadequate treatment, bad practices and omissions. The indirect ones are those that result from diseases that already existed before pregnancy or from a pathology that developed during pregnancy without a direct relationship with obstetric causes, but that are aggravated by the specific physiological conditions of a pregnancy.
The specific maternal mortality ratio (RMME) showed relevant indices among single, widowed and judicially separated women: among widowed women, rates of up to 62.9 per 100,000 live births were observed; women declared judicially separated presented indices of 51.8; and single women had RMME rates of 70.8 per 100,000 live births.
The study shows that a woman's marital status contributes to a situation more vulnerable to maternal death, and that the lack of a partner or husband can probably lead to insecurity and lack of family support. Thus, the presence of a partner in the pregnancy-puerperal period becomes a relevant protective factor in reducing maternal morbidity and mortality (CARRENO, BONILHA, COSTA, 2011). To measure maternal mortality, its extent and its trends in space and time, the Maternal Mortality Coefficient (CMM) is used as a health indicator. This indicator is expressed as the number of maternal deaths for every 100 thousand live born babies (BNV), and a limit of 20 deaths per 100 thousand live births is considered acceptable by the WHO (BOTELHO et al., 2014). Martins et al. (2017) report that in recent years some measures have been adopted in order to minimize the effects of underreporting on the prevalence of maternal mortality, obtained through data from the Mortality Information System (SIM).
Among them, the investigation process carried out by the maternal mortality prevention committees stands out, with a view to identifying the relationship between the underlying cause of death and a possible pregnancy.
Another strategy adopted is the research method called the Reproductive Age Mortality Study (RAMOS), developed to measure the number of underreported maternal deaths and to calculate an adjustment factor for the correction of official data, based on the death records of women of reproductive age. Vega et al. (2017) report that simple but important measures, such as reproductive planning, monitoring and treatment of cardiopathies in the puerperium, magnesium sulfate in pre-eclampsia and eclampsia, antibiotics in infection, safe abortion, oxytocin or misoprostol in postpartum hemorrhage, and professional training, contribute to reducing the identified causes, thus guaranteeing the right to life for these women.
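To make the indicator concrete, the coefficient described above (maternal deaths per 100,000 live births) and a RAMOS-style underreporting adjustment can be sketched as follows; all numbers are invented for illustration and are not taken from the reviewed studies.

def mmr(maternal_deaths, live_births, adjustment=1.0):
    # Maternal Mortality Coefficient/Ratio per 100,000 live births,
    # optionally corrected by an underreporting adjustment factor.
    return maternal_deaths * adjustment / live_births * 100_000

raw = mmr(maternal_deaths=120, live_births=250_000)
corrected = mmr(maternal_deaths=120, live_births=250_000, adjustment=1.4)
print(f"reported MMR  = {raw:.1f} per 100,000 live births")        # 48.0
print(f"corrected MMR = {corrected:.1f} per 100,000 live births")  # 67.2
print(f"above the WHO acceptable limit of 20: {corrected > 20}")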
In Brazil, the main users of the Unified Health System (SUS) are women and, of these, 65% are between 12 and 49 years old (CARRENO; BONILHA; COSTA, 2011).
Avoiding these deaths means preserving the family structure and the construction of the mother-baby bond, and providing the pregnant woman with examinations and prenatal care throughout the pregnancy.
CAUSES OF MATERNAL MORTALITY
In many countries, pregnancy-related deaths are a major cause of death for women of reproductive age. According to Costa et al. (2011), the maternal mortality ratio in developing countries remains well above that recommended by the WHO (RMM below 20 deaths per 100 thousand live births). According to the WHO, mortality associated with the pregnancy-puerperal cycle and abortion does not appear among the top ten causes of death among women aged 10 to 49 years.
However, the seriousness of the problem becomes evident when these deaths are related to healthy women in the reproductive period, since they would be preventable in 92% of cases if local health conditions were similar to those of developed countries.
It is observed that the main causes are hemorrhage, hypertension, sepsis, abortion and embolism. The challenge for reducing maternal mortality due to abortion is even greater, in view of situations such as clandestinity and illegality.
The World Health Organization (WHO) estimated that in 2008 about 13% of maternal deaths worldwide, equivalent to 47,000 deaths, were due to unsafe abortions. In Brazil, abortion is among the five main causes of maternal mortality and is related to approximately 5% of total maternal deaths; for this reason, in recent years there has been a great discussion in the country about the decriminalization of abortion, involving a complex set of political, legal, moral, religious, social and cultural aspects (MARTINS et al., 2016).
Regarding indigenous women, Alves (2019) estimates that in Brazil the causes of death related to the pregnancy-puerperal period would be avoidable in 92% of cases.
These cases are defined as maternal deaths and are characterized as violations of human rights, and 99% of them occur in regions of greater poverty and higher levels of inequality, which is why maternal mortality indicators are important for assessing the living and health conditions of a population.
In this context, Lima et al. (2017) report that in 2012 in Brazil the main direct causes of maternal death were hypertension (20.2%) and hemorrhage (11.9%), and among the indirect causes the most frequent were diseases of the circulatory system, representing 7.3% of total deaths. Mota et al. (2012) report that, although the implications of dengue for the evolution of pregnancy have been poorly studied, factors described in some studies have contributed to the identification of adverse outcomes in the health of pregnant women and their babies.
Since 1995, the Ministry of Health has included an option on the death certificate for women of childbearing age (10 to 49 years), so the doctor must indicate whether the woman was pregnant at the time of death or had been pregnant in the twelve months before death (PEREIRA, 2015).
According to Martins and Silva (2017), the literature describes that 95% of maternal deaths in the world could be prevented if public and private services expanded women's sexual and reproductive rights, in addition to ensuring safe and respectful obstetric care.
Even when the worst is avoided, severe sequelae may remain; it is estimated that for each maternal death, another 16 women suffer the consequences of poor care, becoming sterile or developing thrombosis that can lead to amputation of the legs (PEREIRA, 2015).
Data about maternal mortality in Brazil
According to the Ministry of Health (2012), Graph 1 shows that obstetric causes occupy first place among the causes of maternal mortality, followed by direct and, finally, indirect causes. These data refer to the period from 2001 to 2010. Pereira (2015) reports that the reduction of maternal mortality is one of the eight Millennium Development Goals signed by several countries, among them Brazil. In our country, the expectation is to reduce the rate to 35 deaths per 100,000 births. Gianini (2010) points out that reference hospitals, by fulfilling a defined role in the care network for more complex and serious cases, end up being sentinel sites for the occurrence of events of this nature, and can offer crucial data that contribute to solving the problem of maternal mortality.
The WHO stresses that, in order to contribute to the reduction of maternal mortality, it is necessary to increase research that provides evidence-based clinical and programmatic guidance, establishing global standards and providing technical support to Member States. Given the urgency of this theme in Brazil, the possibility of using correction factors to obtain more accurate estimates of how maternal mortality rates have behaved over the past few years is essential for understanding regional differences and the main causes, as well as being extremely important for public health policies in the country and for planning health services (SILVA et al., 2016).
METHODOLOGY
This is an exploratory descriptive study in which we chose the Integrative Literature Review (RIL) method, as it provides the synthesis of knowledge and the incorporation of the applicability of results and significant studies into practice (Souza, Silva, Carvalho, 2010), in order to investigate maternal mortality and enable an understanding of the factors that lead to this reality.
The search for scientific publications was carried out from April to August 2020, using virtual libraries: the Scientific Electronic Library Online (SCIELO), LILACS and the Virtual Health Library (BVS), with the following descriptors: "Pregnant Women", "Prenatal", "Incarcerated", "Prison". Regarding the eligibility criteria: articles published in the last 20 years (2002-2020), in Portuguese and Spanish, complete and available for free.
Ineligibility criteria: articles in summary form, monographs, and master's dissertations.
To achieve the objective, the following guiding question for the study was defined: What are the main factors that lead to Maternal Mortality?
Based on the answer to this question, we developed the discussion, as established in the next section of this work, "Results and Discussion".
RESULTS AND DISCUSSION
After searching for articles through the scientific databases of the Virtual Health Library (VHL/BVS), SciELO and LILACS, 20 studies were identified, and after filtering and analysis, 15 were selected because they met the study inclusion criteria.
After a detailed reading of each selected article, the data were cross-referenced according to the objectives of the work.
In an analysis of maternal death estimates in Brazil from 2009 to 2011 by Szwarcwald et al. (2014), the figures were alarming in some states, especially in the State of São Paulo (Southeast Region), with more than 41 thousand deaths of women of childbearing age. The North Region recorded more than 15 thousand deaths, led by the State of Pará with 7,107 deaths, followed by Amazonas with 3,219 deaths.
Selection flowchart: after applying the eligibility and exclusion criteria, the following articles were selected for inclusion in the discussion (total: 20 articles).
In another study, by Martins (2017) in the State of Minas Gerais, the women who died from abortion-related causes were mostly single (117; 68%), aged between 24 and 34 years (133; 72.7%), with schooling from the 4th to the 7th grade (36; 34.6%), and Black (105; 70.5%); the most frequent place of death was the hospital environment (179; 97.8%).
In the study carried out by Gomes et al. (2018) in Bahia, covering the years 2004 to 2015, it was found that the deaths occurred mostly among brown women (59.25%), aged between 20 and 29 years (39.12%). Regarding level of education and marital status, the prevalence of single women (50.87%) and of women with schooling between the 4th and 7th grade of elementary school (20.14%) was maintained. Penha et al. (2018) report that in the years 2008 to 2010 they found 45 deaths due to direct obstetric causes, of which 50% were due to pre-eclampsia, eclampsia and pregnancy-specific hypertensive disease. Continuing this line of thought, the authors noted that hypertension is also a cause of premature births, since early delivery can make it possible to save the lives of the mother and child.
Brito et al. (2015) point out that about 10% of pregnancies progress to high risk; among these, SHEG (Specific Hypertensive Syndrome of Pregnancy) leads the ranking of causes of maternal mortality, with the highest fatality rates in the Northeast and Midwest Regions of Brazil.
The WHO cites some recommendations to prevent these cases from becoming irreversible and lethal, such as calcium supplementation for women with deficient intake, regular monitoring of blood pressure, and antihypertensive drugs for pregnant women with severe hypertension; in addition, it also recommends the use of low-dose acetylsalicylic acid for women who are at high risk of developing the problem. For care during labor or delivery, the recommendations include anticipating the induction of labor and the use of magnesium sulfate, an anticonvulsant that can be administered intramuscularly or intravenously. Herculano et al. (2012) mention that in Brazil, despite advances in care for pregnant women at the outpatient level (increased prenatal coverage and access to laboratory tests) and the hospital level (incentives for normal delivery, adoption of clinical protocols for managing pathologies and complications), the actions developed have proven less effective than desired in reducing maternal mortality. The main causes of maternal death continue to be hemorrhages and high blood pressure, both preventable through quality prenatal and childbirth care.
Regarding the cause of death from hemorrhage, Souza et al. (2013) presented findings covering the same research periods. Thus, it was possible to notice that, in addition to the research periods being the same, there was also a reduction in the numbers presented; however, the numbers are still not considered ideal and demand more attention and care to avoid situations that may endanger mother and child throughout pregnancy.
CONCLUSION
The study on maternal mortality shows that the situation of pregnant women in Brazil is still worrying, and society in general needs to be made aware of the precarious and fragile situation faced by the female population.
In this sense, it is necessary to have health policies that include assistance to pregnant women, as well as their babies throughout pregnancy, especially prevention and care actions during the prenatal period.
This research, through the analysis of recently published studies, shows that Brazilian women, especially those in a vulnerable state, are at constant risk of becoming part of this maternal death statistic, because they often lack, both within the family and on the part of the state, the psychological, maternal and prenatal care support that constitutes essential health services for a pregnant woman.
Thus, this study makes it possible to identify the profile of women who experience at-risk pregnancies, as well as the way prenatal care is performed and its differences during childbirth. In view of what was presented in the study, there is a need for systematic monitoring of women and their babies from basic health care onwards, especially prenatal care accompanied by health professionals. Thus, it is expected that the work presented here will contribute to the reflection of health professionals and institutions in relation to pregnant women in a vulnerable state in Brazil.
Selected studies (title; language/database; main finding):
Prevalence of pregnancy-specific hypertensive syndromes (SHEG). Portuguese/BVS. The provision of qualified care is an essential component for the early detection of complications, health education and, consequently, the reduction of maternal and fetal mortality.
Epidemiological profile of maternal deaths in Rio Grande do Sul, Brazil: 2004-2007. Portuguese/SCIELO. It was observed that the highest RMME was found in women with less education.
HERCULANO, Marta Maria Soares et al (2012). Maternal deaths in a public maternity hospital in Fortaleza: an epidemiological study. Portuguese/SCIELO. Low quality of death investigation records was perceived; consequently, there may be higher numbers that aggravate this cause. | 2022-05-30T17:01:58.152Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "c3affbb7d6c5d9c99f033b7c58d20f825824ffdd",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/engage/api-gateway/coe/assets/orp/resource/item/5fb3b4e62912d10015a25b8e/original/maternal-mortality-when-the-babies-lose-their-mothers-due-to-premature-death.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c3affbb7d6c5d9c99f033b7c58d20f825824ffdd",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": []
} |
225696422 | pes2o/s2orc | v3-fos-license | Economic Analysis of Pea (Pisum sativum) in Himachal Pradesh
The study was conducted in Solan district of Himachal Pradesh to analyze the economics of pea cultivation across different farm size categories. The study reveals that the total cost of cultivation of pea was ₹ 84699.37 per hectare, of which costs A1, A2, B1, B2, C1, C2 and C3 were ₹ 44150.67, ₹ 44150.67, ₹ 45135.52, ₹ 57521.56, ₹ 64613.39, ₹ 76999.43 and ₹ 84699.37, respectively. The cost of cultivation of pea on marginal farms was higher than in the other farm size categories. The total yield of pea was 72.16 quintals per hectare. The total returns and net returns from pea production were ₹ 144324.32 and ₹ 59624.95 per hectare, respectively; the total returns and net returns on large farms were higher than in the other farm size categories.
The potential of vegetables to contribute to the national economy has been well recognized in recent years. India ranks second in both the area and production of pea, next to China. In spite of that, this seemingly high level of production can provide only 208 grams of vegetables per capita (Sharma, 2003), as against the suggested dietary intake of 275 g and 250 g per capita per day for adult males and females, respectively, undertaking moderate work (Swaminathan, 2002). The total area, production and productivity of pea in India in 2017-18 were 540.48 thousand ha, 5422.14 thousand MT and 10.0 MT/ha, respectively (Anonymous, 2018). The major pea-growing states are Uttar Pradesh, Madhya Pradesh, Jharkhand, Punjab, Himachal Pradesh, West Bengal, Haryana, Bihar, Uttarakhand, Orissa and Karnataka. Himachal Pradesh is the 5th leading pea-producing state of India, with a total production of 294.96 thousand metric tonnes during the year 2017-18 (Anonymous, 2018).
The pea is the small spherical seed, or the seed-pod, of the fruit of Pisum sativum and belongs to the Leguminosae family, along with beans and peanuts. It was one of the first plants cultivated by humans and remains an important food crop today. The pea is native to western Asia and North Africa. Wild peas can still be found in Afghanistan, Iran, and Ethiopia (Oelke 1991).
Peas, like many legumes, contain symbiotic bacteria called Rhizobia within root nodules of their root systems. These bacteria have the special ability of fixing atmospheric molecular nitrogen (N2) into ammonia (NH3). Peas contain a high percentage of digestible proteins and vitamins A and C, and are rich in minerals like Ca and P. Pea cultivation is highly labour-intensive, like that of all other vegetable crops, and requires high dosages of manures and fertilizers (Rao and Tripathi 1979; Khunt and Desai 1996). The main constituent of the cost of cultivation of peas is manures and fertilizers, followed by the cost of bullock/human labour/tractor and pesticides/chemicals. Thakur et al. (1994) observed that the income per hectare from vegetable crops has been almost four times that from food crops. Thus, farmers should be motivated to diversify to more remunerative cropping patterns, like vegetable cultivation, instead of the traditional, less profitable ones (Singh 1995). Similar results were reported by Sharma et al. (2000) and Maurya et al. (2001). The objective of this study was to analyze the cost of cultivation of this most important cash crop among vegetables.
MATERIALS AND METHODS
The study was conducted in Solan district of Himachal Pradesh. This area was selected because of its significant contribution to the area and production of vegetable crops in the state, while simultaneously providing gainful employment to the families involved in vegetable cultivation.
Multistage random sampling was adopted to select the ultimate sample of the respondents i.e. the vegetable growers.
(a) At the first stage, 2 blocks i.e. Kandaghat and Solan out of 5 blocks were selected.
(b) At the second stage, a list of villages growing vegetables in the selected blocks were prepared and 5 villages from each block were randomly selected.
(c) At the third stage, list of vegetable growers of the selected villages was prepared and a sample of 10 vegetable growers in each selected village were selected for collection of primary data. Thus the total sample consisted of 100 respondents.
A pre-tested structured interview schedule was prepared, and data were collected by the personal interview method. For the analysis of the data, the vegetable growers were divided into four classes according to the size of their land holdings, viz., marginal (<1 ha), small (1-2 ha), medium (2-4 ha) and large farmers (>4 ha).
Cost of cultivation
The cost of cultivation of vegetable crops was worked out using the various cost concepts defined below:
Income measures
To work out the profitability of vegetable cultivation in the study areas, the following income measures were computed:
(a) Family labour income (FLI)
It is the return to family labour (including management). FLI = Gross income - Cost B2
(b) Net income (NI)
It is the net profit after deducting all cost items, i.e., variable and fixed costs, from gross income. NI = Gross income - Total cost (Cost C3)
(c) Farm business income (FBI)
It is the disposable income from the enterprise and is defined as: FBI = Gross income - Cost A1
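A short numerical sketch of these measures, using the per-hectare figures reported in the abstract of this study, follows; note that the FBI formula applied here (gross income minus Cost A1) follows the standard farm-management cost-concept convention rather than an explicit statement in the text.

# Per-hectare figures reported in this study (Rs):
GROSS_INCOME = 144324.32
COST_A1 = 44150.67
COST_B2 = 57521.56
COST_C3 = 84699.37

family_labour_income = GROSS_INCOME - COST_B2   # FLI
net_income = GROSS_INCOME - COST_C3             # NI
farm_business_income = GROSS_INCOME - COST_A1   # FBI (standard convention, assumed)

print(f"FLI = Rs {family_labour_income:.2f} per ha")
print(f"NI  = Rs {net_income:.2f} per ha")   # 59624.95, matching the abstract
print(f"FBI = Rs {farm_business_income:.2f} per ha")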
Definitions of terms and cost concepts used:
Fixed cost: This includes items such as land rent, land revenue, depreciation of and interest on equipment investment, and interest on owned fixed capital used in vegetable cultivation.
Variable cost: Variable cost includes the expenditure on labour and material inputs and interest on working capital.
Inputs and costs:
The following were the various inputs used in vegetable cultivation.
Hired human labour cost: Hired human labour was estimated in terms of man-days, wherein 8 hours of work in a day was considered one man-day. The man-days were valued at ₹ 300 per man-day.
Planting material cost: The planting material cost was worked out at prevailing market price in the study area.
Fertilizer cost: The fertilizer cost was calculated at the actual price paid by the farmers.
Plant protection cost:
This variable included the expenses incurred on the purchase of insecticides, fungicides, weedicides etc. used for the various vegetable crops.
Depreciation:
The amount of depreciation for implements was calculated by the straight-line method, i.e., by dividing the original cost less the junk value of the implement by its expected life. This was apportioned to individual crops in proportion to the total cultivated area.
Land revenue: The land revenue actually paid by the farmers was used in the study.
Land rent: Land rent was evaluated at the rate of one-fourth of the total produce and converted into monetary terms by multiplying by the prevailing farm harvest price.
Interest on working capital: Interest on working capital was charged at the rate of 9 per cent per annum for half of the year.
Interest on fixed capital: Interest on fixed capital was charged at the rate of 9 per cent per annum on the average investment (half of the initial cost).
Family labour: Family labour cost was calculated on the basis of charges paid to hired labour.
Gross return:
Gross return refers to the total income of the farmers earned from crop and livestock sources.
Net returns: Return obtained by subtracting the total cost from gross return.
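The depreciation and interest rules defined above can be sketched as follows; the 9 per cent rate, the half-year convention and the average-investment convention are as stated in the text, while the implement cost, junk value and expected life are invented illustration values.

RATE = 0.09  # 9 per cent per annum, as stated above

def straight_line_depreciation(original_cost, junk_value, life_years):
    return (original_cost - junk_value) / life_years

def interest_on_working_capital(working_capital):
    return working_capital * RATE * 0.5      # charged for half of the year

def interest_on_fixed_capital(initial_cost):
    return (initial_cost / 2) * RATE         # on the average investment

dep = straight_line_depreciation(original_cost=12000, junk_value=1200, life_years=10)
print(f"annual depreciation       = Rs {dep:.2f}")                                  # 1080.00
print(f"interest, working capital = Rs {interest_on_working_capital(30000):.2f}")   # 1350.00
print(f"interest, fixed capital   = Rs {interest_on_fixed_capital(12000):.2f}")     # 540.00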
Returns from pea under open field conditions
The information regarding the returns from pea per hectare basis is given in the
CONCLUSIONS
The total cost of cultivation of pea was found to be ₹ 84699.37 per hectare. Cost A1 was ₹ 44150.67 per hectare, which constitutes 52.13% of cost C3. The yield of green peas has been higher on marginal farms than on medium and large farms because of better management on small farms. The gross and net returns have been higher on large farms due to the realization of higher prices, through the cultivation of early-maturing varieties and the exploration of other markets owing to higher marketable surpluses. This crop, being highly labour-intensive, will help provide employment to family members on the farm itself, particularly in the case of small and marginal farmers. It will give impetus to the diversification programme of the state government, besides improving soil health, being a leguminous crop. Improved-variety seeds have a higher unit price but provide high productivity and returns, and should therefore be used according to the capacity of the growers. Bank credit and financial assistance should be made available to individual farmers for increasing production. Training farmers in the areas of production technology, grading, standardization of produce, quality control and modern methods of marketing will prove to be a viable move. The government should establish adequate storage at the village level for the purpose of orderly marketing of green pea, to benefit both consumers and producers. | 2020-06-25T09:09:14.179Z | 2020-06-24T00:00:00.000 | {
"year": 2020,
"sha1": "6505c026e17a30909bd623831d007b9e90d515e9",
"oa_license": null,
"oa_url": "https://doi.org/10.46852/0424-2513.2.2020.9",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "40776c99ac20a0eaeb2b669806d6b5490366fd90",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
240504829 | pes2o/s2orc | v3-fos-license | A Systematic Review of HPM Energy Absorbers for Building Applications
A modern weapon, high power microwave (HPM) pulses, can have a profound effect on the quality of functioning of society as the use of this weapon can result in damage to or destruction of electronic equipment and computer and telecommunications systems, both military and civilian. Protection against the energy of HPM pulses can be achieved in two basic ways: by using radiation-absorbent materials (RAM) or artificial electromagnetic (EM) structures. If the object to be protected is a building, protection based on RAM is used. Hence, this literature review focuses on the possibilities of using HPM energy absorbers in building products and structures. Attention is concentrated on four basic types of elements: claddings, concrete and mortar, small-sized elements (bricks, hollow masonry units), and paint coatings. In each of the categories, examples of HPM radiation absorbers having a high potential to be combined with basic construction materials are given on the basis of the literature on the subject.
Introduction
Today, digital technologies constitute the basis for the dynamic development of our civilisation. More and more areas of human activity (communications, banking, data acquisition, navigation, etc.) depend on the ubiquitous Internet. The Information Age in which we currently live brings with it many threats. One of them is a modern weapon in the form of high power microwave (HPM) pulses. Even though it does not pose a direct threat to human life, as it is a non-lethal weapon [1], it can have a profound effect on the quality of functioning of society due to the fact that many spheres of our life depend on electronic devices. Acts of terrorism or military activities can be directed against the technical infrastructure used for data acquisition, information transmission, and so on. HPM pulses are characterised by frequencies in the range of approximately 1 to 100 GHz, high power of the emitted pulses (with peak output power on the order of gigawatts), very short pulse durations (hundreds of nanoseconds), and a speed of propagation equal to the speed of light [2]. Modern weapons that use HPM pulses consist of a high-frequency transmitter and an omnidirectional or directional antenna. The HPM pulse induces high voltage in power and telecommunication networks and in electrical and electronic equipment. The increased voltage causes a sharp increase in current intensity, generating large amounts of heat and as a result damaging electronic components, electrical circuits, and transmission lines. This means that the use of HPM weapons can pose a great threat to the proper functioning of key infrastructure and, consequently, can contribute to a deterioration in the quality of life in a given country. Therefore, many interdisciplinary research teams all over the world have undertaken research on ways of protecting against electromagnetic (EM) pulses.
The solutions being developed need to meet the following criteria:
1. meeting the minimum requirements for physical and mechanical properties as well as durability and sustainability (defined in the commonly used design standards),
2. meeting the minimum, pre-assumed requirements for shielding effectiveness (with regard to the individual needs of a particular project),
3. achievement of a low manufacturing cost.
This article is a sequel to a previous paper by the authors [15], which presented a review of materials that absorb electromagnetic radiation and shield against it. That paper [15] focused mainly on EM radiation-absorbing additions that could potentially be used to modify common building materials such as concrete, resins, or rubber. In particular, the following groups of materials were considered: carbon-based materials [16-20], nickel powder [5-8], iron powder [21,22], ferrites [23,24], magnetites [25-27], polymers and hybrid composite structures [28-30]. By comparison, this paper focuses on the actual use, as documented in the literature, of EM radiation absorbers in the manufacture of building products, especially constructional ones. Thus, the aim of the paper is to systematise the available knowledge on methods of protecting buildings against HPM pulses. For this purpose, a total of 74 works (64 scientific papers and 10 patents) from widely recognised databases such as Web of Science, Scopus, Google Scholar, and others have been reviewed. The paper provides an overview of the available methods of protection against HPM that can be used in civil engineering practice. These are mainly, but not limited to, military applications, e.g., bunkers and command centers. However, the reviewed solutions can also be applied to critical civil infrastructure, such as server rooms, headquarters of banks, or other institutions. This has emerged as a more and more important issue in the context of the development of Industry 4.0 and, in particular, Banking 4.0, which are driven by digital integration and automation [31]. Consequently, as key non-military institutions increasingly rely on technology, ensuring their sustained digital security by providing protection against HPM threats is gaining significance.
The survey conducted shows that RAM methods of protection against HPM can be split into several basic groups. The first of them involves the use of specially designed claddings, which are relatively simple and effective methods of protection against EM radiation. This issue is addressed in Section 2. Another primary means of protection consists of the use of appropriately modified construction materials. This group includes concrete and mortar as well as small-sized elements such as bricks or hollow masonry units filled with EM energy absorbing additives (Sections 3 and 4). Finishing materials, in particular special paint coatings, constitute a separate group of methods described in Section 5. Finally, Section 6 contains a recapitulation and frames the concept of a method of manufacturing EM absorbers for buildings.
Claddings
The ability of metals to conduct electric current and heat is the main basis for their use as shields, providing protection against the action of electromagnetic waves. The simplest and at the same time most effective way of protecting buildings or their components against high-frequency electromagnetic radiation consists of screening them with metal shields in the form of, e.g., claddings [32].
Thick shields, best made of ferromagnetic materials, are used to achieve high shielding effectiveness against magnetic fields. Higher shielding effectiveness is also achieved by using two or more metal layers insulated with a dielectric material. This gives a better field damping effect than the use of a single metal layer of the same thickness. Copper, Mumetal, iron, aluminium, steel, and other materials characterised by a high electric field reflection coefficient and a high magnetic field absorption coefficient can be used together in the form of a hybrid solution [33].
An example of a chamber protected against radiation penetration or escape is shown in Figure 1 (adapted from [33]). According to [30], the condition regarding the dimensions of a shielding structure is as follows: $L \ge 5\sqrt{H^2 + W^2}$ (denotations as in Figure 1).
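As a quick illustration of this dimensioning rule, the following minimal Python sketch computes the minimum chamber length $L$ for a given cross-section; the function name and the example dimensions are ours, not from [30]:

```python
from math import hypot

def min_chamber_length(height_m: float, width_m: float) -> float:
    # Dimensioning condition L >= 5 * sqrt(H^2 + W^2) for the shielded
    # chamber of Figure 1 (function name and example values are ours).
    return 5.0 * hypot(height_m, width_m)

# Example: a hypothetical 3 m high, 4 m wide chamber cross-section.
print(f"L >= {min_chamber_length(3.0, 4.0):.1f} m")  # L >= 25.0 m
```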
Metal nets can be used for shielding purposes, as described in patent [34], which also shows how to join individual wire mesh panels.
Paper [35] proposes a method for making gypsum plasterboards that absorb electromagnetic waves in bands S and C. Honeycomb plasterboard (see Figure 2, adapted from [35]) was coated with carbon black (CB) particles and filled with gypsum. When the plasterboard shielding effectiveness (SE) was tested, the effect of the CB content and the geometric parameters of the cardboard core was examined. The test results indicate that the effectiveness of wave absorption by honeycomb plasterboards could be improved by increasing the CB content and the height of the honeycomb, which resulted in enhanced dielectric capacity and multiple reflections between the honeycomb walls. Furthermore, by shortening the length of the side of a single cell, better absorptive properties could be obtained as the electrical conductivity of the composite increased. For a honeycomb cell side length and height of 6 mm and 9 mm respectively and a carbon content of 0.6% wt., a reflection coefficient of 10 dB (90% electromagnetic wave absorption) in the frequency range of 2.5-8 GHz was obtained. The tests were carried out according to the Chinese standard GJB 2038a-2011. Paper [36] describes a method of manufacturing mineral wool boards that absorb microwave radiation. Similarly to paper [35] cited above, carbon black was used as the absorber. The product was endowed with a layered structure. The electromagnetic characteristic of the mineral wool boards was tested in the frequency range of 2-18 GHz by means of scanning and transmission electron microscopy and a vector analyser. In the case of a single-layer board containing 3% CB, the shielding effectiveness amounted to 10 dB in the frequency ranges of 2-3 GHz and 7-18 GHz. The shielding effectiveness of a double-layer mineral wool board exceeded 10 dB in the whole frequency range (2-18 GHz). The results show that mineral wool boards containing a carbon addition had good absorptive properties and could be used to protect buildings against electromagnetic pulses. Furthermore, the absorptive properties of such boards could be improved by fabricating them as double-layer boards.
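Shielding effectiveness and reflection-loss figures quoted in dB throughout this review translate into power fractions via the standard relation: the transmitted fraction of power is $10^{-\mathrm{SE}/10}$. A minimal Python sketch (the helper name is ours) reproduces the 10 dB ≈ 90% correspondence mentioned above:

```python
def blocked_fraction(se_db: float) -> float:
    # Fraction of incident EM power stopped by a shield with the given
    # shielding effectiveness (or reflection loss) in dB.
    return 1.0 - 10.0 ** (-se_db / 10.0)

for db in (10, 20, 30, 40):
    print(f"{db} dB -> {blocked_fraction(db):.2%} of incident power stopped")
```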
Carbon fiber paper (CFP) is a thin-layer shielding material characterised by low density and good adhesion and permeability. This type of absorber was used to produce plywood characterised by good shielding effectiveness [37]. In the latter study, the optimal plywood production parameters were found to be a pressing pressure of 1.2 MPa, a temperature of 110 ± 5 °C and a glue quantity of 380 g/m². It was also found that the shielding effectiveness of the composite in the form of plywood with a single CFP layer could be further improved by hot pressing, resulting in better bonding between the conductive carbon fibers. Plywood composites laminated with two layers of CFP were found to have much better microwave screening properties than composites with a single CFP layer. In the case of plywood with two CFP layers, the distance between the absorbing linings had a significant effect on shielding effectiveness. SE increased with increasing distance between the CFP layers. In the frequency range from 30 MHz to 1 GHz an absorption level of over 30 dB was reached. Thus, CFP is a promising material for the industrial production of wood composites characterised by high shielding effectiveness [37].
A nanocomposite material is proposed in patent [11]. In the form of powder, the material is suitable for use as a component in composites with resins and plastics and as a filler for rubber, whereby an elastic magnetic wave-absorbing material is obtained. According to the patent, it can also be used in the form of pressed boards glued independently to building walls and as an electromagnetic wave absorbing layer placed inside a stiff structural material. Considering the availability of raw materials and the simple manufacturing method, the material can be readily produced.
The subject of patent [34] is an electromagnetic shield which can be used to protect individual rooms or a whole building from electromagnetic radiation. This invention provides strong damping in wide frequency bands. It is also used for protection against nuclear electromagnetic pulses (NEMP), high-altitude electromagnetic pulses (HEMP), and high power microwaves (HPM). According to the invention, the electromagnetic shield consists of current-conducting rods or wires and panels, forming rectangular grids or grid sections, usually made of stainless steel. The advantage of this solution is that the grids can be installed, for example, between the structural layer of the wall and the insulation. In addition, in combination with the building structure, the grid can be used as a reinforcement.
The subject of the patent [38] is an electromagnetic wave absorber with a cementitious matrix, containing 1-20 µm long carbon nanotubes (2-10% wt.). According to the patent specification, the absorber shows excellent absorptive properties and is noncombustible and resistant to high-power radiation. In the frequency range of 1-110 GHz, the absorber is characterised by a complex relative permittivity (ε) of 2-10 and a dielectric loss angle (tanδ) equal to or greater than 0.35.
The above range of ε is determined by the following facts. If the absorber's ε value were less than two, the absorber would be ineffective, as most electromagnetic radiation would penetrate through it. If the value of ε were higher than 10, the absorber would also be ineffective, as most of the incident electromagnetic waves would be reflected. An absorber tanδ value greater than or equal to 0.35 ensures the effective conversion of the energy of electromagnetic waves into heat. Possible exemplary shapes of the absorber are shown in Figure 3 (adapted from [38]). Patent [39] is for electromagnetic wave absorbing panels for use in building structures, containing an outer layer of protection plates (e.g., silicon matrix plates), an absorber layer, a reflective metal layer, and a bearing layer made of a construction material, e.g., concrete. Generalised and simplified diagrams of the patented absorber are shown in Figure 4 (adapted from [39]), with the following designations: 1: panel absorber; 2: bearing layer; 3: reflective layer; 4: absorber layer; 5: outer protective layer; 6: electromagnetic wave incidence surface; 7: direction of electromagnetic wave incidence. A computer simulation of reflection loss versus frequency for an absorber panel whose 13 mm thick absorber layer consists of 50% polycarbonate resin and 50% of BaTiO₃ + BiFeO₃ (at a weight ratio of 1/3) is shown in Figure 5 (adapted from [39]).
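The patent itself does not state the underlying formula, but the reflection argument above can be illustrated with the standard normal-incidence Fresnel reflection coefficient for a lossless dielectric half-space, $\Gamma = (1 - \sqrt{\varepsilon})/(1 + \sqrt{\varepsilon})$; a minimal sketch under this simplifying assumption:

```python
import math

def reflected_fraction(eps_r: float) -> float:
    # Fraction of incident power reflected at normal incidence from a
    # lossless dielectric half-space (Fresnel formula); a simplification,
    # since a real absorber also has dielectric loss (tan delta > 0).
    gamma = (1 - math.sqrt(eps_r)) / (1 + math.sqrt(eps_r))
    return gamma ** 2

for eps in (2, 10, 50):
    print(f"eps_r = {eps:>2}: {reflected_fraction(eps):.1%} of power reflected")
```

Low ε lets most power through, while large ε reflects an increasing share before it can be absorbed, which is consistent with the 2-10 window argued for in the patent.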
In patent [40] the main microwave-absorbing component is pure silicon carbide or silicon carbide with titanium dioxide and carbon black additions. The components are bonded together in an elastomer matrix. The damping range can be adjusted by changing the amounts and proportions of the different microwave-absorbing components and by changing the thickness of the material layer. According to the patent, when the microwave-absorbing material is in the form of a sheet, the advantageous sheet thickness is 1-5 mm (the most advantageous thickness being in the range of 1.35-2 mm). In the first example provided in [40] the best damping was obtained for a 2.09 mm thick matrix with a density of 2.04 g/cm³ at a frequency of 8.6 GHz. In the second example the best damping was obtained for a 1.18 mm thick matrix with a density of 1.93 g/cm³ at a frequency of 17 GHz.
Patent [41] describes an invention in the form of a coating that absorbs the energy of electromagnetic and mechanical waves. Two versions of the coating are proposed. In the first version, the absorbent coating has a substratum in the form of a metal sheet or a polymeric panel on which there is at least one layer of absorber in the form of loose or pressed powder grains, pellets, beads or a gel, overlaid with a polymeric layer. In the second version, the coating contains a substratum on which there is a polymeric (polyurethane) layer, a layer of absorber with a grain size of 2-3 mm and a 44 wt.% content of a ferromagnetic substance (FF), and a top layer (polyurea elastomer). The advantage of the coatings is the ease of their installation in all kinds of building structures and also the achievable reduction in EM radiation, amounting to 28-58.2 dB in the frequency range of 1-10 GHz.
The materials discussed above are presented with brief descriptions in Table 1.

Table 1. Summary of the products described in Section 2.

Ref. No. | Year | Product | Description
[35] | 2016 | Honeycomb structured plasterboards | 6 mm and 9 mm thick boards containing carbon (0.6% wt.) absorb EM waves in the S and C bands. A reflection coefficient of 10 dB (90% electromagnetic wave absorption) is obtained in the frequency range of 2.5-8 GHz.
[36] | 2016 | Mineral wool boards with carbon black | SE amounts to 10 dB in the frequency ranges of 2-3 GHz and 7-18 GHz for a single-layer board containing 3% CB. The SE of a double-layer mineral wool board exceeds 10 dB in the frequency range of 2-18 GHz.
[37] | 2014 | Plywood composite laminated with carbon fiber paper | The absorption level reaches more than 30 dB in the frequency range of 30 MHz-1 GHz. Optimal production parameters are a pressing pressure of 1.2 MPa, a temperature of 110 ± 5 °C, and a double amount of glue, i.e., 380 g/m².
[38] | 2014 | EM wave absorber with a cementitious matrix | The absorber contains 1-20 µm long carbon nanotubes (2-10% wt.). It is characterised by a complex relative permittivity (ε) of 2-10 and a dielectric loss angle (tanδ) equal to or greater than 0.35 in the frequency range of 1-110 GHz.
[39] | 1998 | EM wave absorbing panels | The 13 mm thick absorber layer consists of 50% polycarbonate resin and 50% BaTiO₃ + BiFeO₃ (at a weight ratio of 1/3). Reflection loss from −18.5 to −24 dB is obtained in the frequency range of 1.
[41] | | Coating absorbing the energy of electromagnetic and mechanical waves | The product is characterised by ease of installation in all types of building structures. A reduction of EM radiation of 28-58.2 dB in the frequency range of 1-10 GHz is achieved.
Concrete and Mortar
Suitably modified concrete can be used to protect buildings and rooms against the action of high-frequency electromagnetic radiation. The main purpose of such modifications is to endow concrete with electrical conductivity [42]. Conductive concrete is synthesised by adding a certain amount of a conductive substance, such as steel, graphite, iron slag, nickel, copper [43], carbon nanotubes or ferrites, usually in the form of powders or fibers, which must be properly dispersed within the concrete volume.
Because of the skin effect, the size of the grains/fibers should be as small as possible. To effectively utilise the conductive filler added to the concrete, the size of a single element (the diameter of a particle or fiber) should be equal to 1 µm or less. However, thin fibers are not widely available, and it is more difficult to disperse such fillers in comparison with larger-diameter fibers. The difficulties involving the proper dispersion of fibers in cement-based materials and selected methods of dispersing them are described in [44].
To avoid the above problems with filler dispersion, larger (polymeric) fibers or particles coated with a metal are also used, but the drawback of this solution is that the interior of each fiber or particle does not contribute to protection against radiation. Consequently, a higher filler content (volume) must be used [45,46].
Because cement is slightly conductive, it constitutes a better base for producing shielding materials than the commonly used polymers. When a cementitious matrix is used in the shielding composite, this results in better electrical connection between the filler fibers/particles, which are not always in contact. Thus, the shielding effectiveness of cementitious matrix composites is higher than that of composites with a polymeric matrix, which, unlike a cementitious matrix, is a good insulator. In addition, cement is cheaper than polymers and is commonly used in construction; therefore, this material with suitable filler additions can be successfully used to protect buildings or their parts (rooms) against electromagnetic radiation [45,46].
In addition to improving the absorption properties of electromagnetic waves [47], the introduction of fibrous additions, e.g., carbon fibers, into cementitious composites can contribute to an improvement in such properties as compressive strength, flexural strength, and fatigue strength [48].
Paper [49] presents a review of cementitious matrix composites with fillers in the form of short carbon fibers. The composites are characterised by good strength properties, small drying shrinkage, good thermal properties (low thermal conductivity), high electrical conductivity, and high resistance to corrosion. Furthermore, they facilitate cathodic protection of reinforcement steel in concrete.
The high shielding effectiveness of 40 dB at the frequency of 1 GHz in the case of a cementitious matrix composite containing 1.5% (by volume) carbon fibers 0.1 µm in diameter is documented in [45]. It was also found that carbon fibers 15 µm in diameter are much more dispersible than carbon fibers 0.1 µm in diameter, but are less effective in reflecting radio waves (electromagnetic interference (EMI) shielding).
The effectiveness of electromagnetic shielding and absorptive properties of a cementitious composite reinforced with steel fibers, carbon fibers, and synthetic vinyl polymers (PVA) were tested in [50]. The test results show that as the volume fraction of the fibers increased, so did the shielding effectiveness. Moreover, when the volume fractions were changed, the frequency ranges in which the electromagnetic waves were absorbed tended to change. At a steel fiber content of 3% by volume, SE exceeded 50 dB for frequencies above 1.8 GHz. In the range of 8-18 GHz, steel fibers, carbon fibers and PVA fibers could improve the absorptive properties of concrete. Concrete containing 0.5% carbon fibers attained the best absorptive properties: the minimum reflection coefficient amounted to approximately 7 dB. The optimal steel fiber content amounted to 2%. The reflectivity of concrete reinforced with PVA fibers changed with frequency and the minimum SE value was less than 10 dB (see Figure 6, adapted from [50]). The results show that fiber-reinforced concrete can be used to protect buildings against EMI due to the enhanced absorption and reflection of electromagnetic waves. In [51] it was shown that the absorption capacity of electromagnetic waves of aerated concrete moulded into truncated pyramids improved after adding carbon fibers. The authors found that the absorptive properties of the concrete also strongly depended on the height of the pyramids. The reflection coefficient increased when the height of the pyramids increased from 45 to 80 mm. In the frequency range of 2-18 GHz an absorber in the shape of an 80 mm high pyramid was characterised by a reflection coefficient of −21 to −30 dB, while the reflection coefficient of an absorber in the shape of a 45 mm high pyramid was in a range of −18 to −27dB.
Being characterised by good conductive properties, carbon black, one of the cheapest carbon sources, can be used to produce electromagnetic wave absorbers [22]. Carbon black can improve the electric permittivity of an absorber and reduce its thickness and weight. CB can be used jointly with glass fibers. Because of their low electric permittivity, glass fibers improve impedance matching and thereby absorptive properties. They also improve the properties of a cementitious composite, i.e., its bending strength and permeability, and reduce the risk that shrinkage cracks will appear [52]. The effect of the glass fiber content (1-9% wt.) on the absorptive properties of cement filled with carbon black (5% wt.) and glass fibers in the frequency range of 2-18 GHz was determined in [53]. It was shown that as the frequency increased, so did the differences between the absorptive properties of the materials. The best minimum reflection loss, i.e., −11.2 dB at 18 GHz, was registered for the composite with the highest glass fiber content, i.e., 9% wt., and a thickness of 10 mm.
The shielding effectiveness of concrete samples containing graphite powder in various proportions was compared in [54]. A 20 cm thick concrete element containing 12% wt. of graphite powder was found to increase shielding effectiveness (SE) by 2.4 dB at 360 MHz (see Figure 7, adapted from [54]). It was noted that reflection was the dominant mechanism acting on the shielding effectiveness of concrete with a graphite powder addition. The possibility of increasing the electromagnetic shielding of cement-based composite materials with additions derived from agricultural production waste was studied in [55]. Peanut and hazelnut shells were subjected to pyrolysis at a temperature of 850 °C under an inert atmosphere, ground to a particle size below a micron and added to cement paste. The dispersion of the carbon powder in water was assessed visually, whereas its dispersion in the cement paste was assessed by means of a scanning electron microscope (SEM). The carbon powder was found to be excellently dispersible in the cement paste. A quantitative assessment of the effectiveness of radiation attenuation showed a significant improvement due to the addition of pyrolysed nutshells to cement-based composites. At the 0.5% content of this addition, the shielding effectiveness increased by up to 353%, 223%, 126%, and 83% at 0.9 GHz, 1.56 GHz, 2.46 GHz, and 10 GHz, respectively. The experimental and simulation results show that the considered addition was more dispersible than carbon nanotubes (CNT) or graphene and highly effectively enhanced the EMI shielding properties of cement-based composites.
As part of the study [56], four types of building materials were tested to ensure adequate shielding effectiveness. The tests were carried out in a specially designed installation in a fully anechoic room. Two by two meter walls, each weighing up to 3.5 t, were tested. As expected, unmodified gypsum boards and concrete slabs were found to be ineffective in protecting against electromagnetic radiation. In the case of the gypsum boards, when the usual polyethylene sheeting was replaced with aluminium sheeting, the shielding effectiveness considerably increased. In the case of concrete slabs, it was shown that a steel fiber reinforcement addition extended the range of frequencies (to higher frequencies) for which protection was more effective.
The high shielding effectiveness (amounting to 70 dB at the frequency of 1.5 GHz) of a cement-based composite was documented in study [57]. The composite tested was a cement paste containing 0.72% (by volume) of stainless steel fibers 8 µm in diameter and 6 mm long, together with admixtures facilitating the dispersion of the fibers in the cement paste. The electrical resistivity amounted to 16 Ωcm. In this case, shielding resulted from the reflection of waves, as indicated by the low absorption of 1.7 dB (at the frequency of 1.5 GHz). It should be noted that, already at a steel fiber content of 0.36% (by volume), shielding effectiveness amounted to 58 dB at a frequency of 1.5 GHz and resistivity was 57 Ωcm. At the radiation frequency of 1.0 GHz the shielding effectiveness was lower, reaching 60 dB at the fiber content of 0.72%.
The shielding effectiveness of a cement-based composite with a steel fibers and graphite powder addition was tested in study [58]. Special attention was paid to SE variation (mainly due to drying) during the curing of the samples. The results are of interest for the manufacture of shields for buildings, as they showed SE decreasing during the drying of the samples. Figure 8 (adapted from [58]) shows that the damping caused by the water contained in the samples was particularly significant in the range of lower frequencies (1-4 GHz). In the case of samples containing 20% graphite and those containing 10% graphite and 10% steel fibers, the effect of drying for 6 months on the shielding capacity for waves in the range of 4-10 GHz was negligible. The study [59] dealt with the electromagnetic characteristic and shielding effectiveness of cementitious matrix composites containing multi-walled nanotubes (MWNT) in the frequency range of 0.1-8 GHz. In order to achieve proper dispersion of multi-walled nanotubes in the composite, MWNTs were first mixed with a surface-active agent and then subjected to sonication. The quality of the MWNT dispersion in water was assessed by analysing the VIS absorption spectrum, the near-UV spectrum, and the near-IR spectrum. The quality of MWNT dispersion in the cementitious matrix was assessed by examining the fracture surface under a scanning electron microscope (SEM). Shielding effectiveness was measured using a network analyser with a coaxial transmission line. The most effective shields in a wide frequency range were obtained by adding 1.5% of MWNTs in proportion to the cement content. The produced materials exhibited better EM wave screening in the X band, but a degradation of shielding effectiveness was observed due to air voids in the materials. SEM examinations showed that most of the MWNTs were separated and deposited in the cement hydration product. Therefore, one can conclude that composite materials with a MWNT addition can be used as shields against electromagnetic pulses.
The shielding effectiveness of two- and three-layer cementitious matrix panels containing MWNT (10-20 nm in diameter and 10-30 µm long), Fe₂O₃ nanoparticles (20 nm) and NiO nanoparticles (60 nm) for use in military and civil installations was evaluated in study [60]. In addition to the components mentioned above, a polycarboxylate superplasticiser and a dispersant were used to produce the panels. In the case of the three-layer panels, ceramic granulate was introduced into the top layer. The composition of the composites is shown in Table 2. The mechanical and absorptive properties of the panels were tested in the frequency range of 2-18 GHz. The two-layer panels showed better mechanical properties. Their compressive strength amounted to 61.2 MPa, i.e., 12 MPa higher than that of the three-layer panels, but the latter panels had better absorptive properties (see Figure 9, adapted from [60]). As the authors explain, this was mainly due to the presence of ceramic granulate, characterised by low values of electromagnetic parameters and high porosity, in the top layer. These properties of the granulate contributed to better impedance matching. Figure 9. Reflectivity versus frequency for two-layer panels (black) and three-layer panels (red) (after [60]).
In study [61], dealing with the making of layered cementitious matrix composite absorbers, rubber powder, manganese-zinc ferrite, and manganese-zinc ferrite together with carbon fibers were used as porosity-increasing additions in, respectively, the top layer, the middle layer and the bottom layer. Each of the layers was 10 mm thick. The composition of the particular absorber layers is shown in Table 3. The presence of rubber powder in the top layer was found to improve impedance matching, owing to the voids it creates. As a result, most of the incident electromagnetic waves entered the absorber, where they were damped in the middle and bottom layers. The ferrite-containing middle layer was characterised by high magnetic and electric permittivity, damping the energy of electromagnetic waves because of magnetic and dielectric losses. Owing to the presence of conductive carbon fibers in the bottom layer, the losses due to the multiple reflection mechanism increased. The authors demonstrated that the absorber they created was characterised by reflection losses below −10 dB in the frequency range of 8-18 GHz. The good properties of a layered resistive coating/absorbing layer/reflecting layer system were demonstrated in study [62]. The absorber layers, each 1.4 cm thick, were joined together using commercial adhesives. Each of the two layers of adhesive was approximately 0.1 mm thick. The resistive coating was a carbon coating produced from nitrocellulose lacquer filled with graphite and carbon black, using ethyl acetate as the solvent. Foamed cement reinforced with polypropylene fibers (0.3% wt.) constituted the absorbing layer. Hydrogen peroxide (5% wt.) was used as the foaming agent. Calcium stearate (2% wt.) was used as the foam stabiliser. Chloride (1.5% wt.) was used as the substance increasing the initial strength. Polyvinyl alcohol (2% wt.) was used as the binder. A metal sheet constituted the reflecting layer. In the frequency range of 2-4 GHz the absorption band of the produced composite was 100% below −10 dB and 95% below −14 dB, with the maximum absorption of −19.6 dB at 2.45 GHz.
Extruded polystyrene (EPS) has good properties, such as low density, high strength, low water absorbability, and high acid and base resistance, and is widely used in the building industry [63]. As a kind of light aggregate, EPS beads can be easily introduced into mortar or concrete to obtain light mortars or concretes, widely varying in their density. Cement-based composites filled with EPS show good thermal performance, and moreover, can be used to limit the interference of EM waves in rooms. In studies [63,64] the absorptive properties of a cement-based composite filled with EPS were tested and the test results were compared with those obtained for corresponding samples without EPS. It was shown that the addition of EPS could improve the effectiveness of microwave damping. In addition, the effects of the EPS filling ratio, the particle diameter, and the composite thickness on absorption effectiveness were studied. The lowest shielding effectiveness in the frequency range of 8-18 GHz amounted to approximately 15 dB for a 20 mm thick sample with a volume fraction of EPS beads 1.0 mm in diameter amounting to 60%. It was also shown that the damping of electromagnetic waves could be ascribed mainly to multiple reflection and dispersion inside the composite material.
In study [65] the effect of carbon fibers on the resistivity of gypsum-based plaster was examined. Owing to the 2.0% wt. addition of 3 mm long carbon fibers to gypsum-based plaster, its resistivity was reduced to 0.02% of the resistivity of ordinary gypsum plaster (without fibers). The effectiveness of shielding against electromagnetic interference increased monotonically with the fiber content, reaching 22 dB at 1.5 GHz for a plaster thickness of 4.35 mm and a fiber content of 2.0% wt. Furthermore, even at the lower fiber content of 0.3%, shielding effectiveness amounted to 10 dB at a wave frequency of 1.5 GHz and a plaster thickness of 3.85 mm. The additional use of chemical agents (sodium citrate, cement, and aluminium sulphate) increased electric resistivity, only slightly affecting shielding effectiveness. At the same time, the added chemical agents improved the mechanical properties of the plaster.
The authors of patent [66] added, among other things, a very soft ferromagnetic material (developed by the authors) to ferrite powder, obtaining a material with novel properties. Depending on the type and amount of particular components used in accordance with the patent instructions, the composite material can be produced in any physical state, e.g., a solid, foamed solid, semi-solid, semi-liquid, gel-like, or liquid state. Material in a semi-solid, semi-liquid, gel-like, or liquid state can be produced with any suitable consistency or viscosity, e.g., in the form of paste or mastic, depending on the desired end use. Another advantage of this material is that its mechanical and electrical properties can be adapted to a large extent individually through the proper preparation of the components of the composite. The proposed composite material exhibits strong electromagnetic energy absorption in the range of radio and microwave frequencies and better magnetic and dielectric loss properties in comparison with its components.
A summary of the individual products discussed in this section, with their main features, is presented in Table 4.

Table 4. Summary of the products described in Section 3.

Ref. No. | Year | Product | Description
[45] | | Cementitious matrix composite with carbon fibers | SE of 40 dB at 1 GHz for a composite containing 1.5% (by volume) carbon fibers 0.1 µm in diameter.
[56] | | Concrete slabs with steel fiber reinforcement | The addition of steel fiber extends the range of frequencies (to higher frequencies) for which protection is more effective.
[61] | 2011 | Layered cementitious matrix composite absorbers with rubber powder, manganese-zinc ferrite and carbon fibers | The three-layer absorber is characterised by reflection loss below −10 dB in the frequency range of 8-18 GHz.
[62] | 2013 | Layered absorber made of a carbon coating (with graphite and carbon black), foamed cement (reinforced with polypropylene fibers) and a metal reflecting layer | In the frequency range of 2-4 GHz the absorption band of the produced composite was 100% below −10 dB and 95% below −14 dB, with a maximum absorption of −19.6 dB at 2.45 GHz.
[63,64] | 2007 | Cement-based composites filled with EPS beads | Minimum SE amounts to about 15 dB in the frequency range of 8-18 GHz for a 20 mm thick sample with 60% vol. of EPS beads 1.0 mm in diameter.
[65] | 1989 | Gypsum plaster reinforced with carbon fibers | SE increases with fiber content, reaching 22 dB at 1.5 GHz for a plaster thickness of 4.35 mm and a fiber content of 2.0% wt. For a fiber content of 0.3% and a plaster thickness of 3.85 mm, SE amounts to 10 dB at 1.5 GHz.
[66] | 2003 | Energy absorbing material containing, among other things, a very soft ferromagnetic material (developed by the authors of patent [66]) and ferrite powder | The composite material exhibits strong electromagnetic energy absorption in the range of radio and microwave frequencies and better magnetic and dielectric loss properties compared to its components.
Bricks and Hollow Masonry Units-Small-Sized Elements
The possibility of using mill scale as an admixture in the manufacture of ceramic bricks with enhanced shielding effectiveness was investigated in study [67]. The admixture, amounting to up to 20% wt., improved the properties of ceramic bricks owing to its melting action and the formation of magnesioferrite in the sintered ceramic body fired at a temperature of over 900 °C. Ceramic bricks, in the manufacture of which a 15-20% wt. addition of mill scale with a particle size of 0.5 mm was used, exhibited an average shielding effectiveness of up to 4 dB in the X band. Sandwich ceramic tiles of the same dimensions, containing a similar amount of scale, exhibited a higher average shielding effectiveness of up to 8 dB in the same frequency range. The reflection effect was almost independent of the addition. The physical and mechanical characteristics (absorption capacity, fire shrinkage, modulus of elasticity and compressive strength) of the sandwich ceramic tiles were within acceptable limits. Furthermore, a leaching test showed that all toxic elements tested were stabilised in the sintered ceramic structure of the ceramic bricks and ceramic tiles. The bricks/tiles tested and the larger elements made up of them are shown in Figure 10 (adapted from [67]). Ceramic elements containing a metallurgical slag admixture were tested in study [68]. Tests carried out in the X band (8-12 GHz) showed a shielding effectiveness of up to 2 dB or 3 dB, depending on the slag used, at a slag content of 10-20% wt. Similarly to the case of study [67], the physical and mechanical characteristics of the material (absorption capacity, fire shrinkage, modulus of elasticity, and compressive strength) were within acceptable limits.
As part of study [69], double-layer cementitious tiles that absorb microwaves were designed based on the theory of impedance matching and the law of electromagnetic wave propagation. Ferrite and carbon fibers were used as admixtures to improve the absorptive properties and bandwidth. A double-layer cementitious microwave absorber was produced and its absorptive properties were evaluated in the frequency range of 8-18 GHz. Compared to the single-layer structure, the reflection coefficient of the double-layer tiles decreased by about 6-8 dB. The maximum shielding effectiveness of the double-layer tile containing ferrite and carbon fibers reached 16.2 dB. Table 5, summarising the materials described in Section 4, is presented below.

Table 5. Summary of the products described in Section 4.

Ref. No. | Product | Description
[67] | Ceramic bricks with a mill scale admixture | Average SE of up to 4 dB for bricks with a 15-20% wt. addition of mill scale; sandwich ceramic tiles with a similar scale content reach up to 8 dB in the X band.
[68] | Ceramic elements with a metallurgical slag admixture | SE of up to 2 dB or 3 dB, depending on the slag used, in the X band (8-12 GHz) at a slag content of 10-20% wt.
[69] | Double-layer cementitious tiles with ferrite and carbon fibers | Maximum SE of 16.2 dB in the frequency range of 8-18 GHz; reflection coefficient about 6-8 dB lower than that of the single-layer structure.
Paint Coatings
As part of study [70], single- and double-layer paints containing ferrites and absorbing microwave radiation were developed. Theoretical and experimental results for the Ku band were compared. The single-layer absorbing paint was found to exhibit a peak absorption of 12.3 dB at 17.4 GHz for a layer thickness of 1.12 mm. The use of two layers of paint (with a different ferrite content) resulted in band widening, but at the expense of absorption.
Paper [71] presents a method of manufacturing paints and sheets designed to protect against EM radiation along with their specifications. This method was based on polymeric matrices, in which magnetic and dielectric materials were dispersed. In particular, polyurethane was used as a matrix in paints, producing different types containing carbonyl iron or polyaniline. Tests were performed to determine the electric permittivity and magnetic permeability of the materials. Paints and silicone sheets absorbed 60-80% and 90% of electromagnetic radiation, respectively. This indicates that these materials can be used for protection against penetration of EM waves through space enclosures.
In patents [72,73], carbon fibers cut such that their length corresponds to half the wavelength to be absorbed are proposed as materials absorbing electromagnetic waves in the high radio and microwave frequency ranges. The fibers are randomly dispersed and embedded in a dielectric binder (Figures 11 and 12, adapted from [72,74]). Figure 11 shows the electromagnetic wave-absorbing material according to the patent, with the following designations: 1: fibers; 2: material with a low loss factor, e.g., epoxy resin or neat resin; 3: resinous matrix; 4: protected surface; 5: composite coating (after [72]).
According to [72], the fibers can be aluminium, but other materials, such as copper, iron, titanium, and graphite, can also be used. The patent [74] proposes the use of at least two different types of fiber, so that electromagnetic waves can be absorbed in a wider frequency range. The materials described in the patents form an absorbing coating which is easy to apply onto conductive surfaces by spraying, laminating, and painting with a paintbrush. The materials are mainly used in the military sector to protect airplanes, rocket missiles, and other equipment against radar, but they are also used in industry to protect airports and seagoing vessels. In patent [73] it is proposed to use aluminosilicates thoroughly mixed with aluminium nitride and then held at a temperature of 800-1000 °C for an hour. This material can be used as a colourless coating for various structures or machines or to create protective panels shielding structural facades from electromagnetic interference. The material has good mechanical properties and good resistance to a wide range of chemicals, and is easily mouldable.
The materials described in this section are listed in Table 6.

Table 6. Summary of the products described in Section 5.

Ref. No. | Product | Description
[70] | Single- and double-layer microwave-absorbing paints containing ferrites | Peak absorption of 12.3 dB at 17.4 GHz for a 1.12 mm thick single layer; a second layer widens the band at the expense of absorption.
[71] | Paints and sheets with magnetic and dielectric materials dispersed in polymeric matrices | Paints absorb 60-80% and silicone sheets about 90% of EM radiation.
[72,74] | Coatings of carbon (or metal) fibers embedded in a dielectric binder | Fibers are cut to half the wavelength to be absorbed; the coating can be applied by spraying, laminating, or painting.
[73] | Colourless coatings of aluminosilicates mixed with aluminium nitride | Good mechanical properties and resistance to a wide range of chemicals; usable as coatings for structures or as protective panels.

Recapitulation

This review of the literature focused on the possible uses of HPM radiation absorbers in building products and structures. Attention was concentrated on four principal types of elements: claddings, concrete and mortar, small-sized elements (bricks and hollow masonry units), and paint coatings. On the basis of the literature on the subject, examples of HPM radiation absorbers with a high potential to be combined with basic construction materials were given in each of the categories. In the case of structural elements (concrete, mortar, bricks, and hollow masonry units), the main criterion for selecting absorbing additions, besides their shielding effectiveness, was no (or minimal) adverse effect on the basic mechanical and non-mechanical properties of the building materials.
This survey shows that the simplest and at the same time most effective way of protecting buildings or their rooms against the harmful action of HPM is shielding with claddings. The shielding effect can be achieved using metal shields [32] in the form of plates or nets [34], but to improve their effectiveness, it is worth considering multilayer shields that contain combinations of various metals or alloys (e.g., copper, Mumetal, iron, aluminium, steel) [33]. In addition, the use of composite boards, such as carbon-containing honeycomb gypsum plasterboards [35], mineral wool boards with carbon black addition [36], sandwich boards [38], or carbon fiber paper as used successfully in plywood production [37], can be an effective solution.
Regarding concretes and mortars, the literature review indicates that various admixtures and additions, which must be properly dispersed in the volume of concrete or mortar, are used to obtain the desired level of shielding effectiveness. Steel, graphite, metallurgical slag, nickel, copper, carbon nanotubes, ferrites, and carbon black are the additions most frequently mentioned in the literature. To achieve the highest possible effectiveness against EM radiation, it is recommended that the size of individual particles be equal to 1 µm or less, but in the case of such small particles, their proper dispersion in concrete is difficult [44]. Larger fibers or particles are also used, but in this case the drawback is that the interior of each fiber or particle does not contribute to radiation protection, therefore requiring the use of higher fiber or particle contents (volumes) [45,46]. An addition of carbon or steel fiber not only improves the absorption of electromagnetic waves [47], but can also improve mechanical characteristics (compressive strength, flexural strength, and fatigue strength) [48], physical characteristics (thermal conductivity and electrical conductivity) and deformability (shrinkage) [49]. Furthermore, a change in the proportions of the cementitious composite components results in a change in the frequency ranges in which waves are absorbed [50]. A carbon black addition to concrete or mortar improves electric permittivity, as a result of which the thickness and weight of the absorber can be reduced.
Another group of building materials are small-sized elements in the form of bricks and hollow masonry units. The authors of the articles dealing with such elements with respect to protection against EM radiation proposed the use of, i.a., a mill scale admixture in the manufacture of ceramic bricks with enhanced shielding effectiveness [67], a metallurgical slag addition to ceramic elements [68] and a ferrite and carbon fiber addition to double-layer cementitious tiles [69].
The last considered group of materials that ensure protection against HPM pulses comprises paint coatings. This review covered the following materials belonging to this group: microwave radiation-absorbing single-and double-layer paints based on ferrite [70], paints and sheets based on magnetic and dielectric materials dispersed in polymeric matrices [71], dielectric binders with an addition of carbon fibers cut such that their length corresponds to half the wavelength to be absorbed [72,74], and colourless coatings produced from aluminosilicates mixed with aluminium nitride [73].
Implications for Practice
Based on this review of the literature and the experience of the authors gained from a vast research project aimed at developing a technology to manufacture anti-HPM building absorbers, the following general conclusion can be drawn: the development of a building material (concrete, mortar) or element (a brick, a masonry unit, a wall) is a complex problem and requires an interdisciplinary approach. Collaboration between scientists from the fields of, i.a., chemistry, electronics, and building engineering is essential. Each of the teams should supervise the respective component of the project. The task of chemistry specialists would be to select proper HPM-absorbing materials as additions to building materials. The task of electronics engineers would be to test the prototypes of manufactured building materials with regard to HPM protection. The team of building engineers would be responsible for the manufacture of prototypes, ensuring their desired mechanical and non-mechanical properties. When undertaking the interdisciplinary research project (as mentioned in the Funding section) the authors formulated the following procedure for developing construction materials for the protection of buildings against HPM pulses: (1) carry out preliminary studies to obtain information about promising absorbers which could be added as fillers to cement-based materials, (2) prepare composites and carry out tests on building absorber prototypes, (3) develop a hybrid solution (e.g., for a multilayer wall) by combining several different protection methods, (4) obtain approvals, certificates, etc., and (5) implement industrial scale production.
As stated in the Introduction, the solutions being developed need to meet the three criteria concerning requirements for building products, suitable shielding effectiveness, and reasonable costs. These criteria set basic limitations on any products designed to protect against HPM pulses. Therefore, development of such products should always be a result of multi-criteria analyses and/or optimization. Furthermore, one should bear in mind that the development of building materials providing a barrier against HPM pulses is not enough to ensure the full protection of buildings. The walls of buildings have door openings and often window openings, which are the weak points in the shielding. Furthermore, the wiring systems (e.g., mains power supply, lighting, data transmission systems) and plumbing systems (e.g., air conditioning, ventilation, water and drainage piping) pose a problem. In order to achieve the desired shielding effectiveness for the whole building and so eliminate discontinuities in the EM barrier, various kinds of EM shields are usually used. Regarding windows and doors, protection can be achieved using metal nets with a suitably matched mesh size, or metallised glass panes. Moreover, it is essential to use special electromagnetic seals (e.g., copper-beryllium finger door seals). Ventilation openings can be protected using EM barriers in the form of damping waveguides or metal nets with variously shaped meshes (e.g., orthogonal, honeycomb) [33].
The patent solutions and scientific papers cited in this review give grounds for formulating the following basic conclusion concerning a hybrid anti-HPM building absorber: the development of a space-dividing element in the form of a multilayer wall, providing effective protection of a building against EM pulses in a wider frequency range, is a complex but achievable task. The task can be achieved by combining the attributes of the particular materials that comprise the layers of a hybrid building absorber. By ensuring the continuity of the EM barrier, matching proper absorber layers, and protecting all openings in the wall, one can achieve comprehensive protection of buildings against HPM. The development of a solution in the form of a hybrid multilayer wall is the subject of the R&D investigations currently conducted by the authors. The results of these investigations will be presented in future articles. Funding: The work was supported by the Polish National Centre for Research and Development (NCBiR, agreement DOB-1-3/1/PS/2014) within the project "Methods and ways of protection and defence against HPM impulses", conducted under the strategic programme "New weaponry and defense systems of directed energy". | 2021-10-19T15:48:55.101Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "8487554c128ce81f6a59674524a8a2b7aa660a32",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/19/6061/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3c9e4697c4ab36e8f895d53984ccad0c8682dafd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
18644087 | pes2o/s2orc | v3-fos-license | Optimal Throughput for Cognitive Radio with Energy Harvesting in Fading Wireless Channel
Energy resource management is a crucial problem of a device with a finite capacity battery. In this paper, cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other actions of cognitive radio. Harvested energy will be stored in a finite capacity battery. At the start of the time slot of cognitive radio, the radio needs to determine if it should remain silent or carry out spectrum sensing based on the idle probability of the primary user and the remaining energy in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme in which current throughput is only considered when making a decision.
Introduction
Cognitive radio (CR) technology can improve spectrum utilization by allowing cognitive radio users (CUs) to share the frequency assigned to a licensed user, called the primary user (PU). In order to avoid interference with the operation of the licensed user, CUs are allowed to be active only when the frequency is free. Otherwise, when the presence of the PU is detected, CUs have to vacate their occupied frequency. Consequently, an essential problem arising in CR implementations is reliable spectrum sensing. In a CR network, the energy consumed by spectrum sensing increases with the sensing time duration, which is one of the main factors affecting sensing performance; hence, the sensing energy can significantly affect throughput. In addition, more throughput can be achieved by adopting an adaptive transmission power control (ATPC) [1,2] in the case of a fading communication channel.
As a normal wireless node, a CU has a finite capacity battery which can be recharged by an energy harvester and is consumed by spectrum sensing, data processing, and data transmission. Therefore, a primary challenge of cognitive radio is how to optimize functionality. The problem of optimal energy management has been considered previously [3,4], where an optimal energy management scheme for a sensor node with an energy harvester is proposed to maximize throughput. For maximizing the throughput of a CR system, the optimal choice about when to keep silent or carry out spectrum sensing is addressed in [5,6], in which the partially observable Markov decision process (POMDP) [7,8] is adopted to obtain an optimal secondary access policy. However, the previous works [5,6] have some limitations: a constant harvested energy is unrealistic, the effect of the energy consumed by spectrum sensing on system throughput is not addressed, and an ATPC is not investigated.
In this paper, we propose an optimal mode decision policy (i.e., keep sleeping mode or change to accessing mode) for CR with a non-RF energy harvester to maximize the CR system throughput. An optimal sensing energy algorithm and an ATPC are also considered in the proposed scheme in order to guarantee effective utilization of the CU's limited energy resource, which extends the lifetime and improves the throughput of the CR system.
System Model
We assume that a CR network and a PU operate in a time slotted model. The status of the PU changes between two states of the Markov chain, that is, presence (P) and absence (A), as shown in Figure 1. The transition probabilities of the PU from state P to state A and from state A to itself are defined as $P_{PA}$ and $P_{AA}$, respectively. The CU is assumed to always have a data packet to transmit. When the CU wants to access the channel of the PU, it needs to perform spectrum sensing.
Only if the sensing result is the state A of the PU will the CU be allowed to use the channel. The energy of the CU is stored in a battery with a finite capacity of $e_{ca}$ packets of energy. In general, the CU needs to decide its operation either in sleeping mode or in accessing mode to maximize throughput and energy utilization. In both sleeping and accessing modes, the CU can harvest energy from the environment by using its non-RF harvester while performing other operations. At the $k$-th time slot, the CU can harvest $e_h(k)$ energy units that can be used in the next time frame. $e_h(k)$ takes its value from a finite number $N_h$ of energy levels:

$$e_h(k) \in \{h_1, h_2, \ldots, h_{N_h}\}, \quad (1)$$

where $0 \le h_1 < h_2 < \cdots < h_{N_h} \le e_{ca}$. The probability mass function (PMF) of the harvested energy is given as follows:

$$p_{h_i} = \Pr\{e_h(k) = h_i\}, \quad i = 1, 2, \ldots, N_h. \quad (2)$$

We assume that the harvested energy follows a stochastic process described by the Poisson process. Subsequently, $e_h(k)$ is a Poisson random variable with mean $h_{\text{mean}}$. The PMF in (2) can be rewritten as follows:

$$p_{h_i} = \frac{(h_{\text{mean}})^{h_i}}{h_i!}\, e^{-h_{\text{mean}}}. \quad (3)$$

At the beginning of the time frame, information on the amount of remaining energy $e_k$, $0 \le e_k \le e_{ca}$, is available at the CU. Furthermore, the CU has a belief $b_k$, which is the probability of the PU being absent (A) at the $k$-th time frame. This information can be calculated from statistics of the history of sensing results from the CR network. Based on the values of $e_k$ and $b_k$, the CU decides to keep sleeping or to carry out spectrum sensing and transmit data if the state A of the PU is detected.
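A minimal Python sketch of the harvested-energy model follows; the function name is ours, and the renormalisation over the finite set of levels is our assumption (the paper only states that $e_h(k)$ is Poisson with mean $h_{\text{mean}}$ and takes values from a finite set):

```python
from math import exp, factorial

def harvest_pmf(h_mean: float, levels: list[int]) -> list[float]:
    # Poisson PMF of the harvested energy e_h(k) over the finite set of
    # levels h_1..h_Nh; renormalisation over the finite support is our
    # assumption, since the battery truncates the number of levels.
    raw = [h_mean ** h / factorial(h) * exp(-h_mean) for h in levels]
    total = sum(raw)
    return [p / total for p in raw]

# Example: six energy levels (0..5 units) and a mean harvest of 2 units/slot.
print(harvest_pmf(2.0, list(range(6))))
```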
We consider fading on the data channel between the CU transmitter and the CU receiver. At the CU receiver, we assume that the channel gain takes its value from a finite set of integers:

$$g(k) \in \{g_1, g_2, \ldots, g_{N_g}\}, \quad (4)$$

where $g_1 > g_2 > \cdots > g_{N_g}$. The CU receiver reports this channel gain to the CU transmitter over a low-rate, error-free, and zero-delay feedback channel, called causal channel state information (CSI) feedback [9].
The PMF of the channel gain can be defined as

$$p_{g_j} = \Pr\{g(k) = g_j\}, \quad j = 1, 2, \ldots, N_g. \quad (5)$$

By applying an ATPC, the required transmission energy of the CU, $e_t(k)$, can be determined corresponding to the channel gain $g(k)$:

$$e_t(k) = t_j \quad \text{when} \quad g(k) = g_j, \quad (6)$$

where the smallest required transmission energy, $t_1$, corresponds to the highest channel gain, $g_1$, and, similarly, the CU consumes the largest energy for transmission, $t_{N_g}$, when the channel gain is the lowest, $g_{N_g}$; that is, $t_1 < t_2 < \cdots < t_{N_g} \le e_{ca}$. The PMF of the transmission energy can be expressed as follows:

$$p_{t_j} = \Pr\{e_t(k) = t_j\} = p_{g_j}. \quad (7)$$

We assume that the level of the channel gain follows the Poisson process. Therefore, $g(k)$ is a Poisson random variable with mean value $g_{\text{mean}}$. As a result, the PMF of the transmission energy in (7) can be given as

$$p_{t_j} = \frac{(g_{\text{mean}})^{j}}{j!}\, e^{-g_{\text{mean}}}. \quad (8)$$

For efficient utilization of energy, we define a transmission energy threshold $e_{th}$ to take the transmission cost into account, so that if the required transmission energy exceeds this threshold, the CU will drop the transmission.
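The ATPC rule and the transmission-energy threshold can be sketched as follows; the function name and the example energy levels are hypothetical:

```python
def atpc_energy(gain_index: int, energies: list[int], e_th: int):
    # ATPC rule: look up the transmission energy t_j for the reported
    # channel-gain level j (index 0 = best gain, so energies ascend),
    # and drop the transmission (return None) when t_j exceeds e_th.
    e_t = energies[gain_index]
    return e_t if e_t <= e_th else None

# Example with five hypothetical energy levels and a threshold of 6 units.
energies = [1, 2, 4, 7, 11]
for j in range(len(energies)):
    print(f"gain level {j}: transmission energy = {atpc_energy(j, energies, 6)}")
```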
Optimal Mode Decision Policy Based on POMDP
In this study, we obtain an optimal mode decision policy by adopting a POMDP with the objective of maximizing the throughput of the CR system. Two operation modes, sleeping mode (S) and accessing mode (AC), are considered for the CU. As a normal device with limited energy resources, if the CU lacks energy for its operations (i.e., spectrum sensing and data transmission), it will keep sleeping and only harvest energy for the next time operation. This operation is called sleeping mode. In the accessing mode, on the other hand, the CU performs spectrum sensing to detect the state of the PU and, further, if the state A of the PU is detected, the CU transmitter will send data to the CU receiver.
In spectrum sensing, the consumed energy can significantly affect the throughput of the system, especially in the case of limited energy devices. Consequently, in the next subsection we propose an algorithm to obtain the optimal sensing energy for the CU.
Optimal Sensing Energy for Maximizing Throughput.
The spectrum sensing of the CU, which is assumed to be performed using an energy detection method, is to distinguish between two hypotheses of the PU, presence (P) or absence (A). Considering Gaussian noise in the sensing channel, when the number of sensing samples $N$ is relatively large (e.g., $N > 200$), the received signal energy $Y$ can be closely approximated as a Gaussian random variable under both hypotheses such that [10]

$$Y \sim \begin{cases} \mathcal{N}\big(N,\; 2N\big), & \text{A}, \\ \mathcal{N}\big(N(\gamma+1),\; 2N(2\gamma+1)\big), & \text{P}, \end{cases} \quad (9)$$

where $\gamma$ is the SNR of the sensing channel between the PU and the CU. The decision about the state of the PU can be made as follows:

$$D = \begin{cases} 1, & Y \ge \lambda, \\ 0, & Y < \lambda, \end{cases} \quad (10)$$

where $\lambda$ is the energy threshold and "1" and "0" correspond to the states P and A of the PU, respectively. The sensing performance of the CU can be evaluated by the probability of false alarm ($P_f$) and the probability of detection ($P_d$), which are given, respectively, as

$$P_f = Q\!\left(\frac{\lambda - N}{\sqrt{2N}}\right) \quad (11)$$

and

$$P_d = Q\!\left(\frac{\lambda - N(\gamma+1)}{\sqrt{2N(2\gamma+1)}}\right). \quad (12)$$
The number of sensing samples is $N = 2\tau W$, where $\tau$ is the sensing time duration and $W$ is the bandwidth. Therefore, for a required probability of detection $P_d^*$, the probability of false alarm as a function of the sensing time can be calculated as $P_f = Q\big(\sqrt{2\gamma+1}\,Q^{-1}(P_d^*) + \sqrt{N}\,\gamma\big)$. Let $e_s$ denote the energy consumed by spectrum sensing. We assume that $e_s$ is proportional to $\tau$ with a constant of proportionality $\kappa$; that is, $e_s = \kappa\tau$. Therefore, the probability of false alarm depends on the sensing energy according to $P_f^*(e_s) = Q\big(\sqrt{2\gamma+1}\,Q^{-1}(P_d^*) + \sqrt{2We_s/\kappa}\,\gamma\big)$.
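The false-alarm expression above is easy to evaluate numerically. A minimal Python sketch, using the standard-library normal distribution for the Gaussian tail function Q and its inverse (the function and parameter names are ours):

import math
from statistics import NormalDist

_N = NormalDist()

def Q(x):
    """Gaussian tail function Q(x) = 1 - Phi(x)."""
    return 1.0 - _N.cdf(x)

def Q_inv(p):
    """Inverse of Q."""
    return _N.inv_cdf(1.0 - p)

def false_alarm(e_s, p_d_star, gamma, W, kappa):
    """P_f*(e_s) for target detection probability p_d_star, sensing-channel
    SNR gamma and bandwidth W, with e_s = kappa * tau, i.e. N = 2*W*e_s/kappa
    sensing samples (our reading of the expression above)."""
    n_samples = 2.0 * W * e_s / kappa
    return Q(math.sqrt(2.0 * gamma + 1.0) * Q_inv(p_d_star)
             + math.sqrt(n_samples) * gamma)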
If the sensing result of the CU is state A of the PU, the CU can transmit its data. However, throughput is achieved only when this transmission is performed and the PU really is in state A (i.e., the sensing result is correct). The average throughput as a function of the sensing energy can then be defined as $R(e_s) = \frac{T-\tau}{T}\,p_b\,\big(1 - P_f^*(e_s)\big)\,C_0$, where $T$ is the total time frame for both spectrum sensing and data transmission and $C_0$ is the standard throughput of the CR link, defined as $C_0 = \log_2(1 + \mathrm{SNR}_{CR})$, with $\mathrm{SNR}_{CR}$ the SNR received at the CU receiver. The optimal value of $e_s$ for each time frame, maximizing the average throughput of the CU while maintaining a low level of interference with the PU (i.e., meeting the requirement $P_d \ge P_d^*$), is found as the solution of the optimization problem $e_{s,\mathrm{opt}} = \arg\max_{e_s} R(e_s)$. The problem can be solved by a numerical method, and the value of the optimal sensing energy $e_{s,\mathrm{opt}}$ is utilized in the proposed POMDP-based optimal mode decision policy of the CU transmitter, as shown in Figure 2.
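A coarse one-dimensional grid search is one such numerical method. The sketch below reuses the hypothetical false_alarm() helper from the previous snippet; all parameter values are illustrative, not taken from the paper:

def avg_throughput(e_s, T, p_b, C0, p_d_star, gamma, W, kappa):
    """R(e_s) = ((T - tau)/T) * p_b * (1 - P_f*(e_s)) * C0, tau = e_s/kappa."""
    tau = e_s / kappa
    if tau >= T:
        return 0.0
    return ((T - tau) / T * p_b
            * (1.0 - false_alarm(e_s, p_d_star, gamma, W, kappa)) * C0)

# Grid search for the sensing energy that maximizes average throughput:
grid = [0.01 * i for i in range(1, 100)]
e_s_opt = max(grid, key=lambda e: avg_throughput(
    e, T=1.0, p_b=0.7, C0=1.0, p_d_star=0.9, gamma=0.5, W=1e4, kappa=1e3))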
Optimal Mode Decision Policy.
The optimal mode decision policy, choosing between sleeping and accessing, is formulated in the POMDP framework. The value function $V(e, p_b)$ is defined as the maximum total discounted throughput from the current time slot onward when the remaining energy is $e$ and the belief regarding state A of the PU is $p_b$. The value function is given by $V(e, p_b) = \max_{a}\,\mathbb{E}\big[\sum_{t=0}^{\infty}\beta^t R(e^t, p_b^t, a^t)\big]$ (17), where $0 \le \beta < 1$ is the discount factor and $e^t$ and $p_b^t$ are the remaining energy and belief at the beginning of the $t$-th time slot, respectively. $R(e, p_b, a)$ is the throughput of the CU achieved at the $t$-th time slot, which depends mainly on $e$, $p_b$ and the action $a$. As described above, the action can be either to remain sleeping or to switch to accessing; that is, $a \in \{S, AC\}$. If the CU decides to switch to accessing mode, it uses $e_{s,\mathrm{opt}}$ as the sensing energy. In addition, the ATPC calculates the transmission energy according to the channel gain information provided by causal CSI feedback from the CU receiver.
Sleeping Mode ($o_1$).
If the CU decides to remain sleeping, no throughput is achieved, so $R(e, p_b, S \mid o_1) = 0$, and the belief for the next time slot is updated from the PU's state-transition statistics. Also, the remaining energy of the battery increases according to $e^{t+1} = \min(e + e_{h_i}, e_{ca})$, with transition probability $p_{e_h}(i)$ for $i = 1, 2, \ldots, L_h$.
Accessing Mode.
When the CU decides to switch to accessing mode, the achieved throughput of the system depends on the observation made by the CU. In this paper, we define four observations for the accessing mode of the CU, as follows.
Observation 1 ($o_2$). The sensing result is state P of the PU; the CU then does not transmit data and no throughput is achieved, $R(e, p_b, AC \mid o_2) = 0$. The probability that $o_2$ happens is $\Pr\{o_2\} = p_b P_f^* + (1 - p_b) P_d^*$. The belief in the current time slot can be updated using Bayes' rule as $p_b' = p_b P_f^* / \big(p_b P_f^* + (1 - p_b) P_d^*\big)$; the updated belief that the PU is in state A at the next time slot then follows by applying the PU's state-transition statistics to $p_b'$. The updated remaining energy is $e^{t+1} = \min(e - e_{s,\mathrm{opt}} + e_{h_i}, e_{ca})$, with transition probability $p_{e_h}(i)$ for $i = 1, 2, \ldots, L_h$.
Observation 2 ($o_3$). There is no PU signal detected (i.e., state A) and the required transmission energy is smaller than the threshold $e_{th}$; the CU then transmits data and receives an ACK message. This means that the sensing result is correct (A is the real state of the PU) and the CU transmits data successfully. The achieved throughput is $R(e, p_b, AC \mid o_3) = \frac{T-\tau}{T}\,C_0$. The probability that $o_3$ happens is $\Pr\{o_3\} = p_b(1 - P_f^*)\sum_{j:\,e_{tr_j}\le e_{th}} p_{e_{tr}}(j)$. The belief for the next time slot is updated from the fact that the PU was certainly absent, and the remaining energy as $e^{t+1} = \min(e - e_{s,\mathrm{opt}} - e_{tr_j} + e_{h_i}, e_{ca})$, with transition probability $p_{e_h}(i)\,p_{e_{tr}}(j)$ for all $i = 1, 2, \ldots, L_h$ and $j = 1, 2, \ldots, L_g$.
Observation 3 ($o_4$). State A of the PU is detected and the required transmission energy is smaller than the threshold $e_{th}$; the CU then transmits data but does not receive the ACK message. This means that the sensing result is incorrect (P is the real state of the PU), the data transmission fails, and $R(e, p_b, AC \mid o_4) = 0$. The probability of $o_4$ is $\Pr\{o_4\} = (1 - p_b)(1 - P_d^*)\sum_{j:\,e_{tr_j}\le e_{th}} p_{e_{tr}}(j)$. The belief that the PU will be in state A at the next time slot follows from the fact that the PU was certainly present. The remaining energy of the CU is updated as in the case of $o_3$.
Observation 4 ($o_5$). The sensing result concludes that the PU is in state A, but the required transmission energy exceeds the threshold $e_{th}$; the CU then does not transmit data and $R(e, p_b, AC \mid o_5) = 0$. The probability of $o_5$ is $\Pr\{o_5\} = p_{s,A}\sum_{j:\,e_{tr_j} > e_{th}} p_{e_{tr}}(j)$, where $p_{s,A}$ is the probability that the sensing result is state A of the PU, given by $p_{s,A} = p_b(1 - P_f^*) + (1 - p_b)(1 - P_d^*)$. Based on Observation 4, the belief of the current time slot is updated using Bayes' rule as $p_b' = p_b(1 - P_f^*)/p_{s,A}$, and the updated belief for the next time slot follows from the PU's state-transition statistics. The updated remaining energy for the next time slot is obtained as in the case of $o_2$.
According to these observations, the value function in (17) can be expressed as $V(e, p_b) = \max_{a\in\{S,AC\}}\big\{\sum_{o}\Pr\{o \mid a\}\,\big(R(e, p_b, a \mid o) + \beta\,V(e^{t+1}, p_b^{t+1})\big)\big\}$ (37). The optimization problem in (37) can be solved to find an optimal mode decision maximizing the throughput of the CR system by using the value iteration method [11].
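The value-iteration step itself is generic. The skeleton below is our own illustration, not the paper's code: the caller supplies the reward and a transition kernel built from the observation probabilities $\Pr\{o_1\}..\Pr\{o_5\}$ and the energy/belief updates above, with states $s = (e, p_b)$ on a discretized grid.

def value_iteration(states, actions, reward, transitions, beta=0.9, tol=1e-6):
    """Solve V(s) = max_a { r(s,a) + beta * sum_s' P(s'|s,a) V(s') }.
    `transitions(s, a)` returns a list of (probability, next_state) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(reward(s, a)
                        + beta * sum(p * V[s2] for p, s2 in transitions(s, a))
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Toy two-state check: 'access' pays 1 in state 1 but may drain to state 0.
S, A = [0, 1], ["sleep", "access"]
r = lambda s, a: 1.0 if (s == 1 and a == "access") else 0.0
tr = lambda s, a: [(1.0, s)] if a == "sleep" else [(0.5, 0), (0.5, 1)]
print(value_iteration(S, A, r, tr))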
Simulation Results
In this section, we present simulation results for the proposed scheme and for the Myopic scheme, which considers only the current time slot in the value function (i.e., $\beta = 0$), under the parameters shown in Table 1. Figure 3 shows the optimal mode decision policy for the sleeping and accessing modes over the values of $e$ and $p_b$, for $e_{ca} = 10$, $e_h^{mean} = 2$ and $e_{tr}^{mean} = 7$ (black area: sleeping mode; white area: accessing mode). It can be seen that when the remaining energy is low, the CU switches to the accessing mode only when the value of $p_b$ is high. In contrast, when the value of $p_b$ is low, more remaining energy is required to enter the accessing mode. Figures 4, 5, and 6 illustrate the average throughput against the required probability of detection $P_d^*$ for several values of $e_{ca}$, $e_h^{mean}$ and $e_{tr}^{mean}$. The required probability of detection $P_d^*$ represents the protection level of the PU: a high value of $P_d^*$ offers a high protection level for the PU, but may reduce the opportunity for communication in the CR system. This relation between throughput and $P_d^*$ is clear in Figures 4, 5, and 6: the average throughput tends to decrease as $P_d^*$ increases. On the other hand, increases in $e_{ca}$ and $e_h^{mean}$ give the CU a higher probability of being active in accessing mode, which increases the average throughput. Conversely, higher transmission energy (i.e., higher $e_{tr}^{mean}$) may reduce the average throughput when the energy resource is constrained. Figure 7 shows the average throughput as both the battery capacity $e_{ca}$ and the required value of $P_d^*$ are varied. It is observed that the average throughput increases with decreasing $P_d^*$, and that a larger battery capacity (i.e., larger $e_{ca}$) results in higher average throughput. However, once $e_{ca}$ reaches a level sufficient to store all harvested energy, the throughput can no longer be improved by enlarging $e_{ca}$. Figure 8 compares the proposed POMDP-based scheme with the Myopic scheme. We define three cases of the Myopic scheme in this simulation: (1) "Myopic-original," the scheme described in [5], with neither optimal sensing energy nor ATPC; (2) "Myopic-ATPC," the Myopic scheme with an ATPC; (3) "Myopic-$e_{s,\mathrm{opt}}$ and ATPC," the Myopic scheme with both the optimal sensing energy $e_{s,\mathrm{opt}}$ and an ATPC. It can be seen that an ATPC and/or the optimal sensing energy algorithm improves the throughput of the system compared with the "Myopic-original" scheme. In addition, the proposed scheme outperforms all Myopic schemes because it accounts for the throughput of future time frames via the POMDP.
Conclusion
In this paper, a POMDP-based scheme is investigated in order to find an optimal mode decision policy that maximizes the throughput of the CR system. The random amount of harvested energy considered in the proposed scheme is more practical than in previous studies. An ATPC scheme and an optimal sensing energy algorithm are proposed for efficient utilization of energy from the limited-capacity battery of the CU. Simulation results demonstrate that the proposed scheme significantly improves the throughput of the CR system. More specifically, the throughput of the CR system depends on the protection level of the PU system, $P_d^*$: with a higher level of $P_d^*$, the opportunity for communication in the CR system decreases and the corresponding throughput also decreases. In addition, increases in the harvested energy and in the battery capacity can improve the throughput of the system, whereas higher transmission energy reduces it. | 2018-04-03T01:17:56.494Z | 2014-01-20T00:00:00.000 | {
"year": 2014,
"sha1": "677582922eb6bcf237732fbcdb9d67f7078160c0",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/370658.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34c71737614880a166b81dbeff9d2848c3b5a5d9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
67090381 | pes2o/s2orc | v3-fos-license | Application of FTTH Access Scheme in Digital Television System
Due to the increase in user demand, the digital TV network must not only deliver digital TV programs but also provide fast upload and download of various data services. FTTH schemes are therefore used in an increasing number of digital distribution network transformation projects. According to the characteristics of different types of buildings, the writer discusses three access network schemes in detail: multi-story residential access, high-rise residential access and villa area access.
Networking Scheme. The number of users in this scheme is 400, fewer than 1000, so it is recommended that the OLT be placed in the sub-front machine room to facilitate later maintenance; the distance from the sub-front machine room to the area is less than 5 km. The OLT has 10 PON ports, each accessing users with an optical splitting ratio of 1:64. The output optical power of the optical amplifier is about 22 dBm, and each of its ports accesses user points with a splitting ratio of 1:256. In this splitting structure, both the television broadcasting platform and the bidirectional data platform use a two-stage optical splitting arrangement. The television broadcast signal undergoes a first-level 1:16 split in the junction box and a second-level 1:16 split in the distribution box in the building, and then enters the user's home via fiber links. The bidirectional data signal undergoes a first-level 1:4 split in the junction box and, after the second-level 1:16 split in the distribution box, reaches the user's home through the fiber access. In the line segment, an optical cable junction box with a capacity of 128 cores is installed in the district. The trunk fiber from the front end is split 1:4 and 1:16 in the transfer box. Then, from the area machine room, a twelve-core optical cable is laid to each unit: four cores are used for the TV broadcast platform and four cores for the bidirectional data platform. In the home phase, the optical cable is introduced through the first floor and split in the distribution box. The television broadcasting platform completes a 1:64 split using four 1:16 splitters, and the signal then reaches the user's home over indoor cable; the bidirectional data platform enters the user's home in the same way. In the user's home, an SC optical fiber connector completes the FTTH access.
Optical Path Attenuation Calculation. The loss of a 1:4 splitter is 7.5 dB, and that of a 1:16 splitter is 13.8 dB. The connector insertion loss is about 1 dB and the splice loss is about 0.5 dB. The terminal loss of the district internal wiring is about 0.2 dB. The average fiber loss is 0.2 dB/km for TV program transmission and 0.36 dB/km for bidirectional data. Because the distance from the front machine room to the village is less than 5 km, a 1 dB margin should be left at the time of construction. The total optical attenuation of the bidirectional data signal is therefore 0.36×5 + 7.5 + 13.8 + 1 + 0.5 + 0.2 + 1 = 25.8 dB, which meets the design requirement of less than 26 dB. The total attenuation of the television broadcasting platform is 0.2×5 + 13.8 + 13.8 + 1 + 0.5 + 0.2 + 1 = 31.3 dB, which also meets the design requirement of less than 32 dB (the budget check is sketched in the code after this subsection). The Advice for Multi-story Residential Access. Optical splitters are placed in the transfer box and the distribution box. The output of the optical transfer box uses skeleton-type optical cable to cover the whole area, laid by the directly-buried or pipe-laying method. Near each building, an optical cable connector box completes the cable branching. The output of the splitter in the distribution box reaches the user's home over indoor cable.
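Since these budgets are simple sums, they can be checked mechanically. A minimal Python sketch (the helper name and argument layout are ours; the loss figures come from the text):

def link_budget(losses_db, fiber_km, db_per_km, margin_db, limit_db):
    """Total attenuation = fixed losses + fiber attenuation + margin;
    returns the total in dB and whether it meets the design limit."""
    total = sum(losses_db) + fiber_km * db_per_km + margin_db
    return round(total, 1), total <= limit_db

# Multi-story, bidirectional data: 1:4 (7.5 dB), 1:16 (13.8 dB),
# connector (1 dB), splice (0.5 dB), terminal (0.2 dB), 5 km fiber, 1 dB margin.
print(link_budget([7.5, 13.8, 1.0, 0.5, 0.2], 5, 0.36, 1.0, 26.0))   # (25.8, True)

# Multi-story, TV broadcast: two 1:16 splits, 5 km at 0.2 dB/km.
print(link_budget([13.8, 13.8, 1.0, 0.5, 0.2], 5, 0.20, 1.0, 32.0))  # (31.3, True)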
High-rise Residential Access
High-rise residences are characterized by more than 10 floors, more users on each floor and a larger total number of users. Such buildings have improved shaft and pipeline facilities. This scheme assumes a model with a total of about 1000 households: 10 units, 24 floors per unit and 4 households per floor.
Networking Scheme. In principle, if there are more than 1000 users, the number of PON ports is greater than 15 and there is no access point within 1 km, the OLT can be set in the district. To reduce the occupancy of core fiber, the OLT position is proposed to move down to the district. The OLT, supplied with AC or DC power, can be a cassette-type, small-volume OLT device. It has 15 PON ports for output, each accessing users with a total optical splitting ratio of 1:64. The optical amplifier outputs 22 dBm, each port accessing users with a total splitting ratio of 1:256. The uplink signal is connected to the layer-3 switching devices through 10GE link ports. The optical amplifier of the broadcasting network also moves down to the district to provide television service access for the 1000 users. In this splitting structure, both the television broadcasting platform and the bidirectional data platform use a two-stage optical splitting arrangement. The total splitting ratio of the broadcasting channel is 1:256: the first-level 1:8 split is completed in the district machine room, and then, in the building, a 1:32 split is completed every 8 floors; finally, the broadcast signal reaches the user's home over indoor cable. The total splitting ratio of the bidirectional data channel is 1:64: the first-level 1:2 split is completed in the district machine room, then a 1:32 split is completed every 8 floors in the building, and the data signal reaches the user's home over indoor cable. In the line segment, an optical cable junction box with a capacity of 96 cores is installed in the district machine room. The signals from the OLT and EDFA equipment are split 1:2 and 1:8 by splitters in the transfer box. Then, from the area machine room, an eight-core optical cable is laid to each unit. Six cores are used for service, three for the TV broadcast platform and three for the bidirectional data platform; the other two cores are for backup and testing. In the last phase, the cable enters the building through the light-current cable channel and is divided into vertical wiring by the eight-core fiber splitting box. A 64-core wall-mounted cable wiring box is used every 8 floors, in which the broadcasting and bidirectional data signals complete their 1:32 splits. The signals then enter each family's comprehensive information box over indoor cable through the corridor underground pipe, where they connect to the ONU (optical network unit) and the optical receiver with SC terminals.
Optical Path Attenuation Calculation. The loss of a 1:8 splitter is 10.5 dB, that of a 1:2 splitter 3.8 dB, and that of a 1:32 splitter 17 dB. Because the distance from the front machine room to the village is less than 5 km, a 1 dB margin should be left at the time of construction. The total optical attenuation of the bidirectional data signal is therefore 3.8 + 17 + 1 + 0.5 + 0.2 + 1 = 23.5 dB, which meets the design requirement of less than 26 dB. The optical attenuation of the television broadcasting platform is 10.5 + 17 + 1 + 0.5 + 0.2 + 1 = 30.2 dB, which also meets the design requirement of less than 32 dB.
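The same hypothetical link_budget() helper from the multi-story subsection verifies the high-rise figures; no per-km fiber term appears here, matching the calculation above:

# High-rise, bidirectional data and TV broadcast:
print(link_budget([3.8, 17.0, 1.0, 0.5, 0.2], 0, 0.0, 1.0, 26.0))   # (23.5, True)
print(link_budget([10.5, 17.0, 1.0, 0.5, 0.2], 0, 0.0, 1.0, 32.0))  # (30.2, True)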
The Advice for High-rise Residential Access
Optical splitters are placed in the transfer box or in the building-shaft cable wiring box. Optical cables enter the building from the underground pipe line and are placed vertically along the shaft using the vertical wiring method. Optical distribution boxes are arranged at a fixed interval of floors, one box per 8 floors; in practice, the interval can be determined by the number of users per floor. The optical signal enters the user's home over indoor cable.
Villa Area Access
Villa area users are scattered: the total number is small but the coverage area is wide. The building model is usually a duplex villa or a single-family villa, each building housing only 1-2 households. This scheme assumes that the total number of users in the villa area is 100.
Networking Scheme. The OLT equipment and the optical amplifier are placed in the sub-front machine room, and the distance from the sub-front machine room to the area is less than 10 km. The OLT has 4 PON ports, each accessing users with an optical splitting ratio of 1:32. The optical amplifier is a general frame or rack type with single-port output and an output optical power of 22 dBm; each port accesses users with a splitting ratio of 1:128. Because villa residential buildings are scattered, it is difficult to find reasonable placements for distributed optical splitting, so a concentrated splitting method is proposed: the optical cable transfer box is placed in the community center (such as a green belt), and splitters are installed in the transfer box to realize concentrated optical splitting. The television broadcasting signal passes through the 1:4 and 1:32 splitters in the transfer box and is then brought to the user's home by the drop optical cable. The bidirectional data signal is handled by a 1:32 splitter in the junction box and then introduced to the user by the drop optical cable. In the line segment, every 10 households form a wiring area with one optical cable splitting box, and 24-core optical cable is laid. In the last phase, 2-core indoor optical cable is laid from the fiber splitting box to each villa to achieve FTTH. In the comprehensive information box, the signals connect to the ONU and the optical receiver with SC terminals.
Optical Path Attenuation Calculation. The loss of a 1:4 splitter is 7.5 dB and that of a 1:32 splitter is 17 dB. The average fiber loss is 0.2 dB/km for TV program transmission and 0.36 dB/km for bidirectional data. Because the distance from the front machine room to the village is about 10 km, a 2 dB margin should be left at the time of construction. The total optical attenuation of the bidirectional data signal is therefore 0.36×10 + 17 + 1 + 0.5 + 0.2 + 2 = 24.3 dB, which meets the design requirement of less than 26 dB. The optical attenuation of the television broadcasting platform is 0.2×10 + 7.5 + 17 + 1 + 0.5 + 0.2 + 2 = 30.2 dB, which also meets the design requirement of less than 32 dB. The Advice for Villa Area Access. Optical splitters are placed centrally in the cable transfer box. The output uses skeleton-type optical cable to cover the whole villa area by the directly-buried or pipe-laying method. Fiber splitting boxes can be placed on underground pipes or poles in the villa area, each providing access for 4 to 10 users. After splitting, the optical cables use indoor/outdoor self-supporting or pipe-shaped cable to enter the home. All fiber must be brought into the comprehensive information box for better protection. In practical engineering, pigtail connection can be used to realize the fiber terminal.
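Reusing the same hypothetical link_budget() helper, the villa-area figures also check out:

# Villa area: 10 km of fiber and a 2 dB construction margin.
print(link_budget([17.0, 1.0, 0.5, 0.2], 10, 0.36, 2.0, 26.0))       # data: (24.3, True)
print(link_budget([7.5, 17.0, 1.0, 0.5, 0.2], 10, 0.20, 2.0, 32.0))  # TV:   (30.2, True)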
Conclusion
FTTH has many advantages. First, it is a passive network. Second, its bandwidth is wide and its transmission distance long, in line with operators' requirements. Third, it supports many protocols and is suitable for introducing a variety of new services. It is therefore the most suitable scheme for digital television system construction.
[4] We should take the road of developing digital television with Chinese characteristics. The digitization of television is not only an upgrade of technical equipment; more importantly, it will change services and management. It is an important social system engineering project, involving the interests of millions of households. [5] We must start from the national conditions and find new ways to promote the development of the digital television industry. | 2019-02-17T14:03:23.971Z | 2017-05-10T00:00:00.000 | {
"year": 2017,
"sha1": "a385534c423c95dd6122b2d0661da813fdb70eaa",
"oa_license": null,
"oa_url": "http://dpi-proceedings.com/index.php/dtetr/article/download/9240/8806",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ba155773053f11dceab5ee4db07f008ddc834b7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118880164 | pes2o/s2orc | v3-fos-license | P-P' Strings in M(atrix) Theory
We study the off-diagonal blocks in the M(atrix) model that are supposed to correspond to open strings stretched between a Dp-brane and a Dp'-brane. It is shown that the spectrum, including the quantum numbers, of the zero modes in the off-diagonal blocks can be determined from the index theorem and unbroken supersymmetry, and indeed reproduces string theory predictions for p-p' strings. Previously the matrix description of a longitudinal fivebrane needed to introduce extra degrees of freedom corresponding to 0-4 strings by hand. We show that they are naturally associated with the off-diagonal zero modes, and the supersymmetry transformation laws and low energy effective action postulated for them are now derivable from the M(atrix) theory.
Introduction
D(irichlet)-branes have many faces. In string theory, they arise as nonperturbative dynamic objects, allowing strings to end on them and carrying R(amond)-R(amond) charge [1]. In the conformal field theory formulation, a Dp-brane is a p-dimensional hyperplane in target space on which strings satisfy the Dirichlet boundary conditions [2]. In the low energy field theory limit (supergravity), it appears as a soliton-like background with nontrivial R-R antisymmetric tensor field, solving classical equations of motion (see a recent review [3] and references therein). The low energy dynamics of parallel D-branes, due to strings stretched between them, can be described by a dimensionally reduced supersymmetric Yang-Mills theory on their world volume [4], which happens to describe a quantum space in the sense of non-commutative geometry [5].
In the M(atrix) model [6] for M theory, which is conjectured to unify all known perturbative string theories, the D0-branes are treated as fundamental microscopic degrees of freedom. The SYM quantum mechanics, which was originally thought to be the low-energy theory of N D0-branes, is promoted in the large N limit to the status of the fundamental light-cone dynamics of M theory. As dimensionally reduced U(N) SYM theory, its field content matches the lowest modes of open strings ending on D0-branes. Thus, in M(atrix) theory, everything else appears as a collective (bound) state of D0-branes. In particular, a multiple parallel D-brane background is realized as a block-diagonal matrix [6], each block represented by a topologically nontrivial gauge field configuration [7,8] on a D-brane volume. In this paper we study the dynamics of D-branes by introducing and examining off-diagonal blocks, which are supposed to correspond to strings stretched between D-branes. One of the advantages of the M(atrix) theory is that it provides a unifying framework for explicitly dealing with both D-brane backgrounds and strings stretched between them.
Previously Berkooz and Douglas [9] have considered the background of a longitudinal M5-brane, which wraps around the (invisible) 11-th direction that defines the light-cone to give rise to a D4-brane in IIA language. They bypassed the question of explicitly representing the D4-brane in matrix form, but rather proposed a modified M(atrix) theory by introducing by hand additional dynamical variables that are supposed to correspond to the massless modes of open strings stretched between the D4-brane and the D0-branes (called 0-4 strings). It was shown that integrating out the extra variables leads to the correct gravitational field of an M5-brane. Later Dijkgraaf, Verlinde and Verlinde [10] showed that if one integrates out the off-diagonal blocks in the U(N) matrix fields with two diagonal blocks for a D4-brane and a D0-brane respectively, one can also recover the gravitational field of a longitudinal M5-brane. Based on this result, one may be tempted to identify the extra fields introduced in Ref. [9] with the above-mentioned off-diagonal blocks. However, there is a mismatch for the quantum numbers: the extra bosonic field in Ref. [9] is a spinor of the SO(4) in the 4-brane directions, in accordance with string theory [11], while the bosonic off-diagonal block is an SO(4) vector. Resolving this puzzle was part of the motivation for this paper.
Another related, unsettled issue is how to obtain the 32 additional fermions in the heterotic matrix theory, which is the M(atrix) theory compactified on S 1 /Z 2 . First it was suggested to add these fermions by hand to cancel anomalies in the 1+1 dimensional field theory [12,13,14]. Later Horava [15] proposed that they are zero modes of the off-diagonal blocks that correspond to 0-8 strings. However, there is a puzzle of why these fermions are invariant under surviving supersymmetries. A better understanding of the 0-8 strings in the M(atrix) theory should help resolve this problem.
In this paper we study the spectrum of the off-diagonal blocks in M(atrix) theory that are supposed to correspond to p-p′ strings in the background of a Dp-brane and a Dp′-brane. In particular we show that the spectrum of zero modes for the off-diagonal blocks matches the massless spectrum of p-p′ strings. Since the string theory results about the p-p′ string spectrum are most directly seen in the Neveu-Schwarz-Ramond formalism, while the M(atrix) description of type IIA theory [17,18,19] is in the Green-Schwarz formalism, it is nontrivial to check whether their predictions agree. Moreover, note that D-brane charges and supersymmetry do not give a complete characterization of parallel D-brane configurations in M(atrix) theory. The study of the zero modes of off-diagonal blocks will provide more information on the proper identification of D-brane backgrounds, and on their dynamical behavior as well, such as R-R charge and stability.
In this paper we will refer to configurations in M theory by their names in the IIA theory that is related to the M theory through compactification of the (invisible) eleventh dimension. Hence a D0-brane is a Kaluza-Klein mode of a graviton, a D2brane an M-membrane, a D4-brane a longitudinal M5-brane [20]. It is unclear what D6 and D8-branes in IIA really correspond to in M theory, but they are needed to give various D-branes under compactifications.
We will review related results in string theory in Sec.2 and M(atrix) description of D-brane configurations in Sec.3. In Sec.4 we derive the equations of motion for the bosonic and fermionic zero modes of the off-diagonal blocks, which we will use to find the zero modes, and explain how to derive their supersymmetry transformations and low-energy effective action, for 0-2, 0-4, 0-6 and 0-8 strings respectively in Sec.5-7. Sec. 6 also includes a discussion on the application of the off-diagonal zero modes to the matrix description [9] of longitudinal fivebranes. More discussions on the physical implications of our results can be found in Sec. 7 and in Sec. 8 .
Review of p-p ′ Strings
In this section we briefly review the results in string theory on p-p′ strings [11]. First we consider an open string connecting a Dp-brane and a Dp′-brane parallel to each other. Since we are using IIA language, both p and p′ are even integers. Assume that p′ ≥ p. In directions 0, 1, · · · , p, where the two D-branes overlap, the bosonic fields X have Neumann boundary conditions on both ends. In directions p + 1, · · · , p′, they have a Dirichlet boundary condition on the p-brane and a Neumann condition on the p′-brane. In the remaining directions p′ + 1, · · · , 9, the open string has Dirichlet conditions on both ends.
There will be unbroken supersymmetries for a system of parallel Dp-branes and Dp′-branes if and only if the number, ν, of directions in which the bosonic sector has DN or ND boundary conditions is 0, 4 or 8. (Note that ν = p′ − p for a parallel Dp- and Dp′-brane.) The Ramond sector of the p-p′ string has the same kind of boundary conditions as the bosonic part. It always offers a massless fermionic SO(1, 9 − (p′ − p)) Weyl spinor (after GSO projection) for the directions with NN or DD boundary conditions. The NS sector has the opposite kind of boundary conditions, and only when (p′ − p) = 4 is there a massless SO(p′ − p) bosonic Weyl spinor for the directions with ND or DN boundary conditions.
Since we can always use T-duality to switch a Dp-brane to a D0-brane, we only need to consider four types of open strings: the 0-2, 0-4, 0-6 and 0-8 strings. In summary, the massless spectrum for a 0-p string consists of only a fermionic SO(1, 9 − p) Weyl spinor, except that when p = 4 there is in addition a bosonic SO(4) Weyl spinor. Below we are going to verify this spectrum of massless fermionic and bosonic modes in M(atrix) theory. (Though it is amusing to note that in M(atrix) theory, the bosonic off-diagonal blocks that are supposed to correspond to 0-4 strings are SO(4) vectors!)
M(atrix) Description of D-Brane Configurations
The action of the M(atrix) model is [6]
$S = \int dt\, \mathrm{Tr}\Big(\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{i}{2}\bar\Psi\Gamma^\mu[X_\mu,\Psi]\Big), \quad (1)$
where $\mu, \nu = 0, 1, \cdots, 9$, $F_{\mu\nu} = [X_\mu, X_\nu]$ and $X_0 = -iD_0 = -i(\frac{\partial}{\partial t} + A_0)$. The $X^\mu$ and $\Psi_\alpha$ are Hermitian $N \times N$ matrices. The dynamical and kinematical SUSY transformations are respectively [7]
$\delta X^\mu = i\bar\epsilon\,\Gamma^\mu\Psi, \quad \mu = 0, 1, \cdots, 9, \qquad \delta\Psi = \tfrac{1}{2}F_{\mu\nu}\Gamma^{\mu\nu}\epsilon \quad (2, 3)$
and
$\delta X^\mu = 0, \qquad \delta\Psi = \tilde\epsilon, \quad (4)$
each with 16 generators. The configuration of a Dp-brane in M(atrix) theory is given by big (infinite-dimensional) matrices carrying the appropriate p-brane charge [7]. We can choose the X's to satisfy
$[X_{2n-1}, X_{2n}] = F_{2n-1\,2n} \quad (6)$
for $n = 1, 2, \cdots, p/2$, with the $F_{2n-1\,2n}$ constant $K \times K$ matrices; the fermionic partner is taken to be zero. There are two ways to realize this physical setting in the M(atrix) theory. Take the D2-brane as an example. One way is to set $X_1 = P$, $X_2 = Q$ with $[P, Q] = i(2\pi/N)$ [6]; P and Q can in turn be realized as $P = -i(2\pi/N)\frac{\partial}{\partial\sigma}$ and $Q = -\sigma$ through an angle parameter $\sigma \in [0, 2\pi)$. Another way is to first compactify the M(atrix) model on a torus with radii $R_i$, $i = 1, 2$, and then take the limit $R_i \to \infty$ if one wishes. A Dp-brane configuration corresponds to a gauge field configuration with a certain topological charge [7] (the k-th Chern character $Q_k = \mathrm{tr}\,F^k$ for $k = p/2$) on the dual torus, which becomes infinitesimal in the large-radii limit. The X matrices in the p-brane directions become ($-i$ times) the covariant derivatives; for a D2-brane one can take, say, $X_i = -iD_i$, $i = 1, 2$. For our purposes the difference between the two descriptions is only a scaling in the derivatives, and for simplicity of notation we use the latter description in this paper.
A static Dp-brane configuration (6) preserves half of the total SUSY if and only if the F's are proportional to the unit matrix, in which case 16 linear combinations of the dynamical and kinematical SUSY are preserved [7]. Such states contain D0-, D2-, ..., D(p − 2)-branes in addition to the Dp-brane. The kinematical SUSY (4) is never preserved by itself. The condition for part of the dynamical SUSY to be preserved is
$\sum_{n} \varepsilon_n F_{2n-1\,2n} \propto \mathbf{1} \quad (7)$
for some $\varepsilon_n = \pm 1$. It preserves $1/2^{(p/2-1)}$ of the dynamical SUSY, parametrized by ǫ satisfying
$i\Gamma^{2n-1}\Gamma^{2n}\epsilon = \varepsilon_n\epsilon. \quad (8)$
Because $\mathrm{tr}(F_{12}^2) \neq 0$, it follows from (7) that any D6- or D8-brane configuration with unbroken dynamical SUSY must always include D4-branes. A discussion of general bound states from the low-energy D-brane point of view can be found in [21].
If all F µν 's in (6) are proportional to the unit matrix, they define a natural complex structure on the dual torus. It can be used to view the dual torus T p as composed of p/2 complex tori T 2 . A Dp-brane with unit p-brane charge can be realized by a U(K) gauge field with twisted boundary conditions. This is analogous to how one defines a long string [19,18] in the conjugacy class of length K. The unit Dp-brane charge means a twisted bundle with the minimal topological charge on each T 2 .
An explicit construction of the minimal twisted bundle in the fundamental representation of U(K) is given in Ref. [8]. There the gauge fields can be chosen as $A_1 = 0$ and $A_2 = -i(\sigma_1/2\pi K)\mathbf{1}$, where the $\sigma_i$ are coordinates on $T^2$ normalized to range between 0 and $2\pi$ and $\mathbf{1}$ is the unit matrix. The field strength $F_{12}$ is then $1/2\pi K$. The quasi-periodic boundary conditions on A are [8]
$A_\mu(\sigma_1 + 2\pi, \sigma_2) = \Omega_1(A_\mu + \partial_\mu)\Omega_1^{-1}, \qquad A_\mu(\sigma_1, \sigma_2 + 2\pi) = \Omega_2(A_\mu + \partial_\mu)\Omega_2^{-1},$
where $\Omega_1$ and $\Omega_2$ can be chosen as
$\Omega_1 = e^{i\sigma_2/K}\,U, \qquad \Omega_2 = V,$
where $q = e^{i2\pi/K}$, $U_{ij} = q^i\delta_{ij}$ and $V_{ij} = \delta_{i+1,j}$ with $i, j = 0, \cdots, K - 1$ (mod K). U and V satisfy $UV = q^{-1}VU$. It can be checked that $\Omega_1(2\pi)\Omega_2(0) = \Omega_2(2\pi)\Omega_1(0)$. This is in contrast with the twisted bundle of $SU(K)/Z_K$, where one has $\Omega_1(2\pi)\Omega_2(0) = \Omega_2(2\pi)\Omega_1(0)Z$ for some element Z in the center $Z_K$ of SU(K) [22]. The bundle in the fundamental representation has the corresponding boundary conditions
$\phi(\sigma_1 + 2\pi, \sigma_2) = \Omega_1\,\phi(\sigma_1, \sigma_2), \qquad \phi(\sigma_1, \sigma_2 + 2\pi) = \Omega_2\,\phi(\sigma_1, \sigma_2),$
and consistency of the boundary conditions requires $\Omega_1(2\pi)\Omega_2(0) = \Omega_2(2\pi)\Omega_1(0)$. A section of the bundle can be written as a series generated from an arbitrary function $\tilde\phi$ for which the series converges [8].
Since this is the D-brane analogue of a long string in the conjugacy class of length K, this gauge field configuration is identified with a single D2-brane instead of K D2-branes. Here K gets interpreted as the longitudinal momentum carried by the single D2-brane, as can be seen by examining its light-cone energy.
It is essential that the gauge group is U(K) instead of SU(K). Although there are twisted SU(K)/Z K bundles in the adjoint representation [22] with the same topological charge, there is no corresponding vector bundle in the fundamental representation, because the element Z acts nontrivially on the fundamental representation while it acts trivially on the adjoint. Note that the presence of anything other than the D4-branes introduces off-diagonal blocks in the fundamental representation. Hence, for instance, although one can use two copies of the twisted SU(2) bundle on T 2 with (anti-)selfduality to construct pure D4-brane states preserving half of the dynamical SUSY, at this moment it is unclear how to describe their interaction with other D-branes.
Equations of Motion
Consider a D0-brane very close to a Dp-brane. We decompose the matrix fields into the block form
$X^\mu = \begin{pmatrix} Z^\mu & y^\mu \\ y^{\mu\dagger} & x^\mu \end{pmatrix}, \qquad \Psi = \begin{pmatrix} \Theta & \theta \\ \theta^\dagger & \psi \end{pmatrix}, \quad (14)$
where $Z^\mu$ represents the Dp-brane and $x^\mu$ the D0-brane. The generalization to many Dp-branes and D0-branes is straightforward. While the Z's are realized as covariant derivatives, the x's can in general have nontrivial coordinate dependence on the dual torus; but when we take the limit of infinite radii, only coordinate-independent states have finite energy and remain coupled to the theory. (We allow infinite energy for Z because it is just the energy of the Dp-brane.) For simplicity we choose spacetime coordinates such that $x^\mu = 0$ and $Z^a = 0$, $a = p + 1, \cdots, 9$. The Dp-brane is parallel to directions 1, 2, · · · , p and the D0-brane sits right on top of it. The diagonal part is taken as the background configuration.
When putting the two D-branes together as in (14) and setting the off-diagonal parts to zero, one can easily check that part of the supersymmetry is preserved only if p is 0, 4 or 8.
To count the number of zero modes, or equivalently to count the dimension of the moduli space for this background, it is easier to consider the perturbation of this background and keep only the lowest order terms to obtain linear differential equations for the perturbative fields y and θ. In this way we count the dimension of the tangent space on the moduli space. One may also introduce perturbations in the diagonal blocks for fluctuations on the Dp-brane and deviations of the D0-brane from the origin, but here we are for the time being only interested in the off-diagonal blocks y and θ since they represent the p-p ′ strings. The perturbations of the diagonal blocks can be studied in the same way we study the off-diagonal part. To the lowest order in perturbation, the perturbative diagonal and off-diagonal parts are not correlated, hence we can treat the off-diagonal ones alone.
Plugging the expression (14) of the matrix fields into the action (1) of the M(atrix) model, we find $L = L_Z + L_x + L_y$, where $L_Z$ and $L_x$ are of the same form as (1) with (X, Ψ) replaced by (Z, Θ) and (x, ψ), respectively, and $L_y$ collects the terms quadratic in the off-diagonal fields y and θ. For more than one D0-brane the x's are matrices and we need to take traces in these formulas. From the action one can derive the equations of motion for y and θ. Since the Hamiltonian for a time-independent background in the temporal gauge ($A_0 = 0$) is minimized by time-independent y, we look for time-independent solutions for y and θ. Ignoring the time derivatives and higher-order terms, we find
$D^2 y_\mu - D_\mu D_\nu y_\nu - 2F_{\mu\nu}y_\nu = 0, \quad \mu = 1, \ldots, p, \quad (18)$
$D^2 y_a = 0, \quad a = p+1, \ldots, 9, \quad (19)$
where $D_\mu = iZ_\mu$ are covariant derivatives on the dual torus $T^p$, $D_\mu = 2\pi R_\mu(\frac{\partial}{\partial\sigma_\mu} + A_\mu(\sigma))$, $\mu = 1, \cdots, p$. Eq. (18) has to be supplemented by the gauge-fixing condition
$D_\mu y_\mu = 0. \quad (20)$
Using (20), eq. (18) can be written as
$D^2 y_\mu - 2F_{\mu\nu}y_\nu = 0. \quad (21)$
The equation of motion for θ is
$\Gamma^\mu D_\mu\theta = 0. \quad (22)$
In terms of the covariant exterior derivative $d_A$, its dual $d_A^*$, the Hodge dual ∗ and the projection $P = \frac{1}{2}(1 - *)$ (so $P^2 = P$), eqs. (18) and (20) now read
$d_A^*\,P\,d_A y = 0, \qquad d_A^* y = 0,$
where $y = y_\mu d\sigma^\mu$. These equations are formally the same as those for the instanton zero modes, which correspond to perturbations of the Z's above. The only difference is that the perturbation of Z is in the adjoint representation of U(K), while y is in the fundamental representation. Because we are considering the Euclidean torus, the inner product $\langle\cdot|\cdot\rangle$ defined by integration on the torus and the trace of matrices is positive definite. Hence $\langle y|d_A^* P d_A y\rangle = 0$ implies that $P d_A y = 0$.
In addition, eq. (19) implies that $\langle D_\mu y_a|D_\mu y_a\rangle = 0$ and so $D_\mu y_a = 0$, which means that the topological charge vanishes unless $y_a = 0$. Thus we conclude that $y_a = 0$ for $a = p + 1, \cdots, 9$.
0-2 Strings
Let $Z_1$ and $Z_2$ be realized as U(K) covariant derivatives on the dual torus, $Z_i = -iD_i$ with $D_i = \frac{\partial}{\partial\sigma_i} + A_i$ as given in Sec. 3, so that $[D_1, D_2] = if\,\mathbf{1}$, where $f = 2\pi R_1 R_2/K$. For simplicity we consider an unslanted torus with radii $R_1 = R_2 = 1/2\pi$; the generalization to slanted tori with arbitrary radii is straightforward. Let $z = (\sigma_1 + i\sigma_2)/2\pi$, $\bar z = (\sigma_1 - i\sigma_2)/2\pi$ be the complex coordinates on $T^2$, and let $D = D_1 + iD_2$, $\bar D = D_1 - iD_2$, so that $D\bar D = D^2 + f$ and $\bar D D = D^2 - f$, where $D^2 = D_1^2 + D_2^2$. Note that the algebra of $\bar D$ and $-D$ is the canonical commutation relation for annihilation and creation operators scaled by 2f. Therefore the spectrum of $D\bar D$ is {0, −2f, −4f, · · ·} and the spectrum of $D^2$ is
$\{-(2n+1)f : n = 0, 1, 2, \cdots\}. \quad (28)$
The fermionic zero modes satisfy (22), which gives $(D_1 + \Gamma_1\Gamma_2 D_2)\theta = 0$, so that $D\theta_+ = 0$ and $\bar D\theta_- = 0$, where $\theta_\pm$ are the two Weyl components of θ satisfying $i\Gamma_1\Gamma_2\theta_\pm = \pm\theta_\pm$. Because $\langle\theta_+|\bar D D\theta_+\rangle = \langle\theta_+|(D^2 - f)\theta_+\rangle < 0$ for any $\theta_+ \ne 0$, we must have $\theta_+ = 0$. The solution for $\theta_-$ is obviously the vacuum state annihilated by $\bar D$. One can easily get the explicit expression of the vacuum as a section of the twisted bundle using the explicit construction in Sec. 3. Another way is to note that the equation $\bar D\phi = 0$ has the general solution $\phi = \exp(-\frac{\pi}{4K}(z^2 + 2z\bar z))\,f(z)$, where f(z) is an arbitrary holomorphic function. For φ to be a section of the twisted bundle, we need to impose the quasi-periodic boundary conditions on φ; f(z) is then related to the third elliptic theta function $\vartheta_3$ with nome $q = \exp(-\pi K)$, giving sections $\phi_k$ ($k = 0, 1, \cdots, K-1$) of the vector bundle in the fundamental representation (30). Applying the creation operator $-D$ to the vacuum, one obtains the other eigenstates of the operator $D^2$.
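The oscillator spectrum (28) can be sanity-checked numerically in a truncated Fock basis. This sketch is ours, not part of the original paper, and assumes the conventions reconstructed above ($D^2 = -2f\,a^\dagger a - f$ with $a = \bar D/\sqrt{2f}$):

import numpy as np

def d_squared_spectrum(f=1.0, dim=8):
    """Eigenvalues of D^2 = -2f a^dag a - f in a dim-dimensional
    truncation of the Fock space; should come out as -(2n+1) f."""
    n = np.arange(1, dim)
    a = np.diag(np.sqrt(n), k=1)          # truncated annihilation operator
    d2 = -2.0 * f * (a.conj().T @ a) - f * np.eye(dim)
    return np.linalg.eigvalsh(d2)

print(d_squared_spectrum())   # -> [-15, -13, ..., -3, -1] (times f)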
Obviously the zero mode of θ − is just given by the solution (30). The fermionic zero mode is an SO(2) Weyl spinor with negative chirality.
The equations of motion (21) for $y_\mu$ are $(D^2 - 2f)\,y = 0$ and $(D^2 + 2f)\,\bar y = 0$, where $y = y_1 + iy_2$, $\bar y = y_1 - iy_2$. The constraint (20) is $\bar D y + D\bar y = 0$. Since the spectrum of $D^2$ is given by (28), we see that there is no solution for y, ȳ; hence there is no bosonic zero mode.
0-4 Strings
We decompose the 10-dimensional Γ-matrices into products of the Pauli matrices $\sigma_i$ (satisfying $\sigma_1\sigma_2 = i\sigma_3$), the SO(4) γ-matrices $\gamma^\mu$ ($\mu = 1, \cdots, 4$) and the SO(5) γ-matrices $\gamma^a$ ($a = 5, \cdots, 9$). Corresponding to this decomposition, a 10-dimensional spinor $\theta_{i\alpha\beta}$ has three indices, where $i = \pm$ and $\alpha, \beta = 1, \cdots, 4$; Weyl spinors with positive (negative) chirality have i = + (i = −). Since all spinors in this theory are 10-dimensional Weyl spinors with positive chirality, we omit the index i in the following. We consider the case where the gauge field for the D4-brane background is self-dual, so that half of the dynamical SUSY is preserved, with parameter ǫ satisfying
$\Gamma_1\Gamma_2\Gamma_3\Gamma_4\,\epsilon = \epsilon. \quad (37)$
The number of zero modes on $T^4$ for (anti-)self-dual gauge field configurations can be determined using the index theorem [23,24]. The number of spinorial zero modes is found to be $\alpha_V k$, where $\alpha_V$ is the Dynkin index of the representation V of the fermions and k is the instanton number; the number of vectorial zero modes is $2\alpha_V k$.
While according to the index theorem the number of zero modes is independent of the details of the (anti-)self-dual gauge field configuration, here we give, as an example, an explicit construction of a twisted bundle for the cases with $R_1 R_2 = R_3 R_4$. On each $T^2$ factor of $T^4$, one can construct a twisted U(K) bundle as in Sec. 3. Putting them together, we obtain a $U(K^2)$ bundle with unit instanton number: $\frac{1}{8\pi^2}\int \mathrm{tr}(F^2) = 1$. Unlike twisted $SU(K)/Z_K$ bundles, which can have fractional instanton numbers, for U(K) the instanton numbers are always integral [21]. A section of the twisted $U(K^2)$ bundle on $T^4$ has the general form of a linear combination of products of sections on each $T^2$: $\phi_j(\sigma_1, \sigma_2)\phi_k(\sigma_3, \sigma_4)$, where φ is defined by (13), for $j, k = 0, 1, \cdots, K-1$. The indices j and k together compose an index for the fundamental representation of $U(K^2)$. In general one can also consider a $U(K_1)$ and a $U(K_2)$ bundle on the two $T^2$ factors, respectively, and obtain a $U(K_1 K_2)$ bundle on $T^4$.
Because the supersymmetry is not completely broken, the solution of fermionic zero modes can be used to obtain the solution of bosonic zero modes. The solution of the fermionic and bosonic zero modes can be obtained explicitly by considering T 4 as T 2 ×T 2 and using the methods in Sec.5. Let the SO(4) spinor satisfying (22) be denoted by θ 0 . It is easy to see that the fermionic zero mode satisfies iΓ 1 Γ 2 θ 0 = iΓ 3 Γ 4 θ 0 = −θ 0 , which implies that θ 0 is of negative chirality as an SO(4) Weyl spinor: Γ 1 Γ 2 Γ 3 Γ 4 θ 0 = −θ 0 . (If the gauge field is anti-self-dual, the zero mode will be a Weyl spinor with positive chirality.) For a single D4-brane there is only one fermionic zero mode, which is given by the product of the solutions (30) on each T 2 factor in T 4 .
Since the equations of motion for y and θ are supersymmetric, the bosonic zero mode can be obtained by a SUSY transformation [25] as $y_\mu = i\bar v\,\Gamma_\mu\,\theta^0$, where v is an SO(4) Weyl spinor with positive chirality. This comes from the SUSY transformation of y: $\delta y_\mu = i\bar\epsilon\Gamma_\mu\theta$. When one replaces $\theta_{\alpha\beta}$ in this transformation by the zero mode $\theta^0_\rho$, δy satisfies the equations of motion of y for any $\epsilon_\rho$ in the SUSY preserved by the background (37). It follows that the y given by the above expression is a zero mode of y. Since $\theta^0$ is a function (bosonic), v is a bosonic variable. It matches the massless bosonic field from the NS sector of the 0-4 string. Here it is amusing to see how supersymmetry dictates that the zero modes of a field y in the vector representation be described by a variable v in the spinor representation. The index theorem [23] assures us that these are all the zero modes in the theory, giving precisely the massless spectrum of 0-4 strings. The supersymmetry transformation between χ (the fermionic zero-mode variable) and v is induced from the SUSY transformation between θ and y by factoring out the common factor $\theta^0$. Up to first order in perturbation, the SUSY transformation of θ involves $D_\mu y_\nu\,\Gamma^{\mu\nu}\epsilon$ together with terms in $\dot y_\mu$ and $x_a$, $a = 5, \cdots, 9$; using (25), (19) and (37), one finds the induced transformations of v and χ. The instanton connection lies in $SU(2)_R \subset SO(4)$, which is supposed to be the global R-symmetry of the action of 0-4 strings. The field v carries the fundamental index of $SU(2)_R$. Let $\tau^i$ denote the generators of the R-symmetry group. There are two possible $SU(2)_R$-invariant D-terms, $\sum_i |v^\dagger\tau^i v|^2$ and $|v^\dagger v|^2$. The two terms differ when there is more than one D0-brane, in which case only the first is actually present in the action [11]. These D-terms are expected to arise from the $F^2$ term in the super Yang-Mills theory: expanding this term in y one finds $\mathrm{tr}|y_\mu y_\nu^\dagger - y_\nu y_\mu^\dagger|^2$ and $|y_\mu^\dagger y_\nu - y_\nu^\dagger y_\mu|^2$. For a given instanton background, since $SU(2)_R$ is broken explicitly, these terms do not directly give the $SU(2)_R$-invariant D-terms; only after averaging over the moduli space does one expect the symmetry $SU(2)_R$ to be restored. However, we do not know how to rule out the U(1) D-term $|v^\dagger v|^2$.
The above discussion easily generalizes to the case of instanton number k. There are 2k zero modes for $y_\mu$, and they can be interpreted as the fundamental of $U(k) \times SU(2)_R$, where U(k) is the gauge group associated with k coincident D4-branes.
In ref. [9], an action describing M(atrix) theory of a longitudinal 5-brane is proposed. Since a longitudinal 5-brane in M-theory corresponds to a D4-brane in type IIA string theory, some extra dynamical variables corresponding to 0-4 strings were needed and were introduced by hand. Their quantum numbers are exactly the same as those of the variables v and χ that we have discussed above. Thus, it is natural to identify the additional variables introduced by Berkooz and Douglas [9] with the degrees of freedom associated with the off-diagonal zero modes. We have verified that the action of the latter indeed derives naturally from the fundamental M(atrix) model action, and it agrees with the action postulated in ref. [9], with a possible U(1) D-term as we mentioned above. (In the derivation, the coefficient of each term in the action is determined by an integral of a product of the zero mode solutions $\theta^0$. We have not been able to calculate all coefficients; presumably they are uniquely determined by the surviving supersymmetry.) In addition to the variables v and χ, the action in ref. [9] also includes fields describing fluctuations of the longitudinal fivebrane background, which in our approach correspond to fluctuations residing in the diagonal blocks. In principle one can consider fluctuations of all blocks in the matrix fields for a given background, and then solve the exact (nonlinear) equations of motion. The parameters analogous to v and χ above for the general solutions correspond to the massless modes of the whole system of (p′, p)-branes. In the above we have only solved the linearized equations of motion for the off-diagonal blocks, so the supersymmetry derived from our solutions holds only to lowest order in perturbation. If one solves the exact nonlinear equations of motion, one should be able to derive the exact SUSY transformation among the zero mode parameters.
In the above we have only considered the case with vanishing distance between the D0-brane and the D4-brane. When we pull the D0-brane away from the D4-brane, the zero modes will gain masses proportional to the distance. But we expect that the number and representation of the lowest energy modes will remain the same as the zero modes. The proposal of Ref. [9] contains only the lowest energy modes and therefore should be viewed as a low energy effective theory.
0-6 Strings and 0-8 Strings
The cases of 0-6 and 0-8 strings can be studied in the same fashion as the 0-2 and 0-4 strings. To generalize the considerations for 0-2 and 0-4 strings to 0-p strings for p = 2, 4, 6, 8, we choose the gauge field configuration for the Dp-brane to be p/2 copies of the $T^2$ configuration described in Sec. 5; that is,
$F_{2i-1\,2i} = if\,\mathbf{1}, \quad i = 1, \cdots, p/2,$
where $f = 1/2\pi K$. This defines a twisted $U(K^{p/2})$ bundle with unit p-brane charge. We focus our attention on the first copy of $T^2$. Let $y = y_1 + iy_2$ and $\bar y = y_1 - iy_2$. The equations of motion for them are $(D^2 - 2f)y = 0$ and $(D^2 + 2f)\bar y = 0$, where $D^2 = \sum_{\mu=1}^{p} D_\mu^2$ for a Dp-brane. Since the spectrum of $D_1^2 + D_2^2$ was shown in Sec. 5 to be {−f, −3f, −5f, · · ·}, the spectrum of $(D^2 + 2f)$ is $\{-(p/2-2)f, -(p/2)f, \cdots\}$ and the spectrum of $(D^2 - 2f)$ is purely negative for any p. It then follows that y has a zero mode only if p = 4.
The equation of motion for the fermionic mode decomposes into p/2 equations for a Dp-brane: $(D_{2i-1} + \Gamma_{2i-1}\Gamma_{2i} D_{2i})\theta = 0$, $i = 1, \cdots, p/2$. Obviously the solution for θ is simply the product of the solution (30) on each copy of $T^2$, and it has negative chirality on each $T^2$, so that $\Gamma_1\cdots\Gamma_p\,\theta = i^{p/2}\theta$. The index theorem [26] can be used to show that there is only one fermionic zero mode, provided one can show that there is no zero mode of the opposite chirality, $\Gamma_1\cdots\Gamma_p\,\theta = -i^{p/2}\theta$. Indeed, one can consider the spectrum of the Dirac operator squared,
$(\Gamma^\mu D_\mu)^2 = D^2 + \sum_{i=1}^{p/2} f\; i\Gamma_{2i-1}\Gamma_{2i}.$
The spectrum of $D^2$ is given above, and the spectrum of the second term is {−(p/2)f, −(p/2 − 2)f, · · · , (p/2)f}. It follows that any zero mode must have negative chirality on each $T^2$. The result is therefore that for a 0-p string there is always a single fermionic zero mode and there is no bosonic zero mode except for the 0-4 string. This is in agreement with the results of string theory.
In Sec. 6 we showed that the SUSY property of the zero modes of a 0-4 string follows from that of the off-diagonal blocks. The SUSY transformation of the zero mode for a 0-8 string can also be derived from the SUSY of SYM. Now let us show that the bosonic zero modes derived from the fermionic zero modes using the SUSY transformation as in Sec. 6 simply vanish. Note that the SO(1,9) symmetry is decomposed into SO(1,1) × SO(4) × SO(4) for the 0-8 string, where the D8-brane has two D4-branes with it. The Γ-matrices can be taken as in Sec. 6. A 10-dimensional spinor $\theta_{\pm\alpha\beta}$ has three indices corresponding to the three orthogonal-group factors. The SUSY preserved by the D8-brane background is parametrized by ǫ with positive or negative chirality on both factors of SO(4), while the zero mode of θ has negative chirality on both SO(4) factors. Since a given Γ-matrix can change the chirality of only one of the two copies of SO(4), the SUSY transformation $\delta y_\mu = i\bar\epsilon\Gamma_\mu\theta$ vanishes for θ being the zero mode and does not give nontrivial solutions for y.
It is easy to see that the fermionic zero mode is given by $\theta_{+\sigma\rho} = \chi_+\lambda^0_{\sigma\rho}$, where $\lambda^0$ is the zero mode solution on $T^8$. The SUSY transformation of the fermionic zero mode is trivial (δχ = 0) because all y's vanish. This agrees with the proposal of Horava [15] to interpret the zero modes as the extra fermions needed in the heterotic matrix theory [27,12,14].
If the gauge field configuration for a D4-brane is not (anti-)self-dual, it is found [24] that the configuration is not stable, because of the existence of negative energy states in the perturbation of the gauge fields; all such states tend to decay into an (anti-)self-dual state with the same topological charge. In our consideration of the off-diagonal blocks y, the spectrum of the operator $(-D^2 \pm 2f)/2$ corresponds to the energies of states of the 0-p strings. For the 0-2 string the lowest energy of y is −f < 0, signaling the instability of the system; this is consistent with the fact that the D0-brane tends to distribute uniformly over the D2-brane [28] to form a bound state. For D4-branes corresponding to (anti-)self-dual configurations the lowest energy of y is 0, but otherwise there would be states with negative energy equal to $-|f_1 - f_2|$, where $F_{12} = if_1$ and $F_{34} = if_2$. In general, for a 0-p string the lowest energy is the minimum of $\{\sum_{j=1}^{p/2} f_j - 2f_i\}$ over $i = 1, \cdots, p/2$. While there are D2-branes inside the D4-, D6- and D8-brane configurations we considered, the interaction between the Dp-brane and the D0-brane includes attraction from the D2-branes and repulsion from the D6-brane (the D0-brane is marginally bound to a pure D4-brane) [29]. If the lowest energy is positive, zero or negative, the configuration is stable, marginally stable or unstable, respectively. In the cases of D6- and D8-branes, the negative modes are due to the D2-branes inside the higher branes. Take the D6-brane as an example: let $f_i > 0$ and $f_1 = f_2$; then there is a D4-brane wrapping the first two tori. If $f_3 > 2f_1$, there is a negative mode of energy $2f_1 - f_3$. Apparently, the attractive force due to the D2-brane on the third torus overcomes the repulsive force of the D6-brane.
Generically for Dp-branes there is a Fock space $\mathcal{H}_i$ for each $T^2$, where $-D_i/\sqrt{2f_i}$ and $\bar D_i/\sqrt{2f_i}$ act as the creation and annihilation operators. After imposing the constraint (20), the spectrum of $y_\mu$ is found to be
$\Big\{\sum_{j=1}^{p/2}(2n_j + 1)f_j - 2f_i \;:\; i = 1, \cdots, p/2;\; n_j = 0, 1, 2, \cdots\Big\}. \quad (43)$
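The lowest mode of the spectrum (43), whose sign controls the stability discussion above, can be enumerated directly. A small illustrative Python sketch (the function name is ours):

from itertools import product

def lowest_y_energy(f, nmax=3):
    """Minimum of sum_j (2 n_j + 1) f_j - 2 f_i over i = 1..p/2 and
    occupation numbers n_j, i.e. the lowest mode of the spectrum (43)."""
    return min(sum((2*nj + 1) * fj for nj, fj in zip(n, f)) - 2 * f[i]
               for i in range(len(f))
               for n in product(range(nmax), repeat=len(f)))

print(lowest_y_energy([1.0]))            # D2: -f < 0, unstable
print(lowest_y_energy([1.0, 1.0]))       # self-dual D4: 0, marginal
print(lowest_y_energy([1.0, 1.0, 3.0]))  # D6 with f3 > 2 f1: 2 f1 - f3 < 0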
Discussions
In this paper, we have presented a general framework and a systematic analysis for the zero modes in the off-diagonal blocks in M(atrix) theory. More concretely, we have shown how to determine the number of zero modes from the index theorem and surviving supersymmetry, and we have determined the quantum numbers of the zero modes, including the chirality of the fermion zero modes. These quantum numbers are nontrivial, and crucial for showing the agreement with string theory predictions on open p-p′ strings stretching between D-branes, providing one more check of M(atrix) theory. Previously in Refs. [34,35,36], in the course of computing the effective potential between a D0- and a Dp-brane, the energy levels of the off-diagonal block were determined using a slightly different representation for the Dp-brane; but the zero modes were not identified, and their quantum numbers were not studied. Now let us discuss the significance in M(atrix) theory of the zero modes residing in the off-diagonal blocks. First we showed in Sec. 6 that, for the case of a longitudinal fivebrane, the degrees of freedom associated with the off-diagonal zero modes naturally provide the extra degrees of freedom put in by hand by Berkooz and Douglas, ref. [9], and we checked that the action they postulated is derivable from the M(atrix) theory action, with a possible D-term. Indeed, in this case, besides the right topological number (or brane charge), the correct counting of zero modes found in Sec. 6 is crucial for justifying our identification of a longitudinal 5-brane with a proper instanton configuration on $T^4$ rather than on $S^4$. The correct number of zero modes is also crucial for a check of the correct tension and R-R charge of the longitudinal 5-brane. It is argued in [9] that upon integrating out 0-4 strings the long-range force between a longitudinal 5-brane and a probe supergraviton is generated; had we had a different number of zero modes, we would obtain a gravitational field of a different magnitude for the 5-brane. Also, as shown in Ref. [9], the R-R charge of a longitudinal 5-brane manifests itself in the Dirac quantization of a membrane moving in its background. Realizing the membrane as a collection of D0-branes, the zero modes on the 0-4 strings induce fields on the membrane, and it is the fermion zero mode χ induced on the membrane that is responsible for generating the Berry phase. In fact, by T-duality the induced zero mode is related to the zero mode on a 0-6 string. Our results in Sec. 7 provide the proof of the existence of the single chiral zero mode necessary for the correct Berry phase: had we had two zero modes, we would have generated twice the correct Berry phase, and therefore twice the R-R charge. As pointed out in Sec. 6, in the background of k instantons there are k fermionic zero modes; the Berry phase is then k times as large, signaling k units of R-R charge.
Upon compactifying on a 5-torus T^5, instanton strings appear in the spectrum. These are among the constituents of certain 5D black holes [30,10]. A 5D black hole is described by a long instanton string carrying momentum. Probing the black hole with a supergraviton, one expects that the corresponding static potential as well as the velocity-dependent force are generated by integrating out the off-diagonal blocks. This is shown to leading order in Ref. [31], where the full set of 5+1 dimensional massive modes is integrated out. It is an interesting question whether the relevant terms can be generated by integrating out only the zero modes discussed in Sec. 6.
It should be interesting to compare our result with the work of Ref. [32]. There a D5-brane is interpreted as an instanton inside 9-branes. The probe is a D1-brane. The 1-5 string sector is constructed with D-brane technology. A (0,4) sigma model in an instanton background [33] results from integrating out the massive 1-5 strings. There are two differences between the case under discussion and Ref. [32]. First, it is crucial for us to work on T^4; only then do we have the correct number of zero modes. Second, the SU(2)_R symmetry in our problem comes from the SO(4) of T^4, while the SU(2)_R of [32] does not act on the gauge field, since the gauge field carries an index transverse to the D5-brane.
Finally, the origin of p-p strings is also easy to see. When p = 2, the world-volume action has been written down [7]. For p = 4, one can consider zero modes of the fundamental of SU(2)×SU(2) ⊂ SU(4) in a background solution of instanton number 2 with gauge group SU(4). It is important to embed the instanton into SU(4) rather than into a single SU(2), in order to be able to higgs the off-diagonal strings. By an index theorem, there are 16 real bosonic zero modes; 8 of them are W-bosons, and the other 8 are massive Higgs modes. The 8-8 strings are discussed in [15].
We have identified the stretched strings between a p-brane and a p′-brane as just the zero modes of the off-diagonal blocks; one may then ask about the massive modes of p-p′ strings in M(atrix) theory. On one hand, for short open strings these modes, like the massive modes of short open 0-0 strings, are simply absent in the M(atrix) model by postulate. (It would be interesting to examine long strings in M(atrix) theory ending on p- and p′-branes.) On the other hand, it might be wise to leave open the possibility that these massive modes on short strings, and other massive modes such as KK modes in a higher-dimensional super Yang-Mills theory, could be physically relevant, so that their inclusion is necessary to make the higher-dimensional theory well-defined in the UV regime. We leave the investigation of this issue to the future.
How about the higher modes of the off-diagonal blocks? Could their effects approximate those of the massive modes of p-p′ strings? We do not think so, since the latter are graded by α′, while the former are determined by the scale of the background field and the scale of the torus. The modified M(atrix) model in the presence of a longitudinal 5-brane proposed in Ref. [9] should be viewed as a low-energy effective theory of the fundamental M(atrix) model, in which the higher modes of the off-diagonal block are ignored. Indeed, in this case the zero modes of the off-diagonal block dominate the low-energy physics, since the surviving supersymmetry makes the contributions of the higher modes cancel at leading order at large distances.
Although in this paper we have used the IIA language for brane names, the above discussions are of an M-theory nature. It may be amusing to consider an alternative IIA theory obtained by compactifying the ninth direction and interchanging the roles of the ninth and eleventh directions. What we called D0-branes above become short strings, which are also understood as D0-branes by introducing a unit electric flux into the corresponding matrix element [19]. We leave the complete analysis and related topics for the future. | 2019-04-14T02:59:53.254Z | 1997-06-10T00:00:00.000 | {
"year": 1997,
"sha1": "30997c493d4d44e75e63ffc437ad70bfab767548",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9706073",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "30997c493d4d44e75e63ffc437ad70bfab767548",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260927288 | pes2o/s2orc | v3-fos-license | Unleashing the Power of NR4A1 Degradation as a Novel Strategy for Cancer Immunotherapy
An effective cancer therapy requires both killing cancer cells and targeting tumor-promoting pathways or cell populations within the tumor microenvironment (TME). We purposely searched for molecules that are critical for multiple tumor-promoting cell types and identified nuclear receptor subfamily 4 group A member 1 (NR4A1) as one such molecule. NR4A1 has been shown to promote the aggressiveness of cancer cells and maintain the immune-suppressive TME. Using genetic and pharmacological approaches, we establish NR4A1 as a valid therapeutic target for cancer therapy. Importantly, we have developed the first-of-its-kind proteolysis-targeting chimera (PROTAC, named NR-V04) against NR4A1. NR-V04 effectively degrades NR4A1 within hours of treatment in vitro, and the degradation is sustained for at least 4 days in vivo, exhibiting long-lasting NR4A1 degradation in tumors and an excellent safety profile. NR-V04 leads to robust tumor inhibition and sometimes eradication of established melanoma tumors. At the mechanistic level, we have identified an unexpected novel mechanism involving significant induction of tumor-infiltrating (TI) B cells as well as inhibition of monocytic myeloid-derived suppressor cells (m-MDSC), two clinically relevant immune cell populations in human melanomas. Overall, NR-V04-mediated NR4A1 degradation holds promise for enhancing anti-cancer immune responses and offers a new avenue for treating various types of cancer.
Introduction
The tumor microenvironment (TME) consists of many cell types that cooperatively promote tumor development and progression. Most cancer therapeutics are designed to target one molecule in one defined cell type. For example, vemurafenib (a BRAF inhibitor) inhibits melanoma by targeting mutated BRAF, whereas pembrolizumab (an anti-PD-1 antibody) blocks the immune checkpoint PD-1 on T cells to increase anti-tumor immunity. Several FDA-approved combination therapies pair cancer cell-killing chemotherapies with immune checkpoint inhibitors (ICI) that activate anti-cancer immunity within the TME, suggesting that co-targeting cancer cells and other cell types within the TME can be an effective therapeutic regimen for cancer.
The current research focuses on NR4A1, an intracellular transcription factor known for its crucial role not only in immune regulation but also in many other functions 1-4. Specifically, within the TME, NR4A1 is known to act on several cell types: 1) NR4A1 is involved in angiogenesis in the B16 melanoma model 5. The impact of NR4A1 on neoangiogenesis has also been confirmed in several other tumor models [6][7][8][9]. As neoangiogenesis in tumors induces the formation of new blood vessels with increased permeability, NR4A1 plays a critical role in regulating basal vascular permeability by increasing endothelial nitric-oxide synthase and downregulating several junction proteins involved in adherens junctions and tight junctions 10; 2) the NR4A family has been shown to be elevated in exhausted CD8+ T cells, and its deletion rescues the cytotoxic function of CD8+ T cells during tumorigenesis 1,11; 3) tumor-infiltrating regulatory T cells (TI-Tregs) rely on NR4A1 and its other family members for their immune-suppressive function 4,12; 4) NR4A1 can be induced in tumor-infiltrating natural killer (TI-NK) cells through the IFN-γ/p-STAT1/IRF1 signaling pathway, which leads to diminished NK cell-mediated cytotoxicity against hepatocellular carcinoma 13. Under physiological conditions, NR4A1 exerts crucial regulatory functions in B cells, limiting their expansion in response to antigen stimulation in the absence of secondary signals and constraining the survival of self-reactive B cells in peripheral tissues 3. B cells have been increasingly recognized for their important role in cancer, with studies indicating a correlation between B cell presence in the TME and improved prognosis and sensitivity to ICIs 14. While the specific impact of NR4A1 on TI-B cells remains unexplored, the involvement of NR4A1 in modulating B cell responses underscores its potential as a therapeutic target for harnessing the anti-tumor functions of B cells in the TME.
Among immune cell populations, NR4A1 was detected mainly in macrophages (M), dendritic cells (DCs), Treg cells, and exhausted CD8+ T (Texh) cells (Fig. 1C) 25. NR4A2 displayed a ubiquitous presence across all immune cell populations, while NR4A3 predominantly appeared in monocytes or macrophages, DCs, and a specific subset of CD8 T cells (Fig. 1C). Leveraging the TCGA datasets, we investigated NR4A1 expression in melanoma patients and identified a negative correlation between NR4A1 and anti-tumor immune responses, including the gene expression of IFNγ (interferon-γ), GZMB (granzyme B), and PRF1 (perforin) (Supplementary Fig. 1A-1C). Gene set enrichment analysis (GSEA) revealed that NR4A1 expression was inversely associated with T cell receptor (TCR) and B cell receptor (BCR) signaling pathways (Supplementary Fig. 1D-1G). Collectively, these findings highlight the potential involvement of NR4A1 in the immune modulation of human cancers.
To determine whether NR4A1 plays important roles in the TME, we implanted three syngeneic tumors into wild-type (WT) or NR4A1−/− (KO) mice, including the MC38 colon cancer model and the Yummer1.7 and B16F10 melanoma models. We used the minimal number of tumor cells that can produce tumors in WT mice and found that NR4A1−/− (KO) mice exhibited a much slower tumor growth rate in all of the tumor models (Fig. 2A-2C). The MC38 model showed minimal tumor growth that peaked at the 3rd week in the KO mice and regressed thereafter (Fig. 2A), suggesting that tumors were eradicated by the induction of anti-tumor immunity in KO mice. These data strongly support the role of NR4A1 in the TME and immune modulation, and suggest that targeting NR4A1 is a potentially promising immunotherapy for cancer.
The design and screening of PROTACs against NR4A1. A number of NR4A1 ligands have been reported 26, and celastrol is one of the few that have been well characterized. Celastrol covalently binds to NR4A1 by engaging cysteine C551, and the binding affinity is within the subnanomolar range based on several biophysical assays [16][17][18]. The structure-activity relationship (SAR) of celastrol on NR4A1 has been explored, indicating that the carboxylic acid group of celastrol is amenable to chemical modification 17. Therefore, we reasoned that celastrol might be a suitable warhead for PROTAC construction. We performed a molecular docking study between celastrol and the ligand-binding domain (LBD) of NR4A1 and found that the carboxylic acid group is solvent-exposed, representing an ideal tethering site for linker attachment (Fig. 3A). We utilized polyethylene glycol (PEG) linkers to conjugate celastrol to a VHL E3 ligase ligand via two amide bonds. As a proof-of-concept study, three PROTACs with different PEG linker lengths were synthesized (Fig. 3B) and tested in CHL-1 human melanoma cells (Fig. 3C-3D). Celastrol treatment did not alter the protein level of NR4A1 (Fig. 3C-3D). NR-V04, which bears a 4-PEG linker, exhibited the highest potency in reducing the protein level of NR4A1 (Fig. 3C-3D) and was thus chosen for further investigation.
NR-V04 efficiently reduces NR4A1 protein levels. Following a 16-hr treatment, NR-V04 induced a dose-dependent decrease of NR4A1 protein in CHL-1 cells, with a 50% degradation concentration (DC50) of 228.5 nM, and in A375 melanoma cells, with a DC50 of 518.8 nM (Fig. 4A).
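For orientation, DC50 values like those above are typically obtained by fitting normalized protein levels to a dose-response model. The sketch below is a minimal illustration under that assumption: the concentration grid and "remaining protein" values are hypothetical placeholders, not the study's data, and a four-parameter logistic (Hill) model is assumed rather than taken from the paper.

```python
# Minimal sketch of a DC50 estimate from normalized dose-response data.
# All data points are hypothetical placeholders; a four-parameter logistic
# (Hill) model is assumed.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, dc50, hill):
    """Fraction of NR4A1 remaining as a function of degrader concentration (nM)."""
    return bottom + (top - bottom) / (1.0 + (conc / dc50) ** hill)

conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)   # nM, hypothetical
remaining = np.array([0.98, 0.92, 0.70, 0.42, 0.20, 0.10])     # normalized to DMSO

params, _ = curve_fit(four_pl, conc, remaining,
                      p0=[0.05, 1.0, 200.0, 1.0],
                      bounds=([0, 0.5, 1, 0.1], [0.5, 1.5, 5000, 5]))
bottom, top, dc50, hill = params
print(f"Estimated DC50 ~ {dc50:.0f} nM (Hill slope = {hill:.2f})")
```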
Time course studies indicated that efficient reduction of NR4A1 occurred between 8 and 48 hrs after NR-V04 treatment (Fig. 4C). Interestingly, we observed an initial induction of NR4A1 protein after 4 hrs of NR-V04 treatment (Fig. 4C), in line with the increased NR4A1 mRNA seen after 4 hrs of NR-V04 treatment (Supplementary Fig. 3). We reasoned that this induction is mainly caused by celastrol, because we observed an increase in NR4A1 mRNA levels after 2 hrs of celastrol treatment (Supplementary Fig. 3). Among the three NR4A family members, NR-V04 selectively reduced the NR4A1 protein level while sparing NR4A2 and NR4A3 (Fig. 4D, Supplementary Fig. 2C). Our data support that NR-V04 is an effective NR4A1 degrader in vitro.
NR-V04 induces ternary complex formation and proteasome-mediated NR4A1 degradation.
Ternary complex formation is a prerequisite for a PROTAC to mediate protein degradation 22. We employed proximity ligation assays (PLA), which produce localized signals only when NR4A1 and VHL are in close proximity. NR-V04 treatment induced very strong PLA signals, whereas cells treated with celastrol or DMSO did not show them (Fig. 5A, Supplementary Fig. 4A). Additionally, we conducted a co-immunoprecipitation (co-IP) experiment using Flag-NR4A1 expressed in HEK293T cells with and without NR-V04 treatment. Notably, NR-V04 treatment led to the formation of a complex between NR4A1 and VHL, which was not observed in DMSO-treated cells (Fig. 5B).
We investigated the immune profile in the Yumm1.7 tumor model, which is known for its human-relevant melanoma mutations, Braf V600E/wt/Pten−/−/Cdkn2a−/− 27. NR-V04 treatment also resulted in a significant increase in TI-B cells (Fig. 8J-K), supporting a general mechanism of action for NR-V04 in TI-B cell induction and proliferation.
NR-V04 exhibits an excellent safety profile in mice. In tumor-bearing mice (Fig. 9), NR-V04 did not significantly alter body weight, nor did it induce significant changes in peripheral blood or spleen (Fig. 9). We further assessed the toxicity of NR-V04 in both male and female C57BL/6J mice with doses increased up to 5 mg/kg (Fig. 9A); there was no significant change in body weight (Fig. 9B). Complete blood counts (CBC) determined at different time points after NR-V04 treatment did not reveal any significant changes in hematology parameters, including whole blood cell, lymphocyte, neutrophil, red blood cell and platelet counts (Fig. 9C-9G). Furthermore, we conducted a histological examination using hematoxylin and eosin (H&E) staining to evaluate the kidney, liver, and small intestine tissues of both male and female mice at day 42. Notably, NR-V04 treatment did not induce tissue damage in any of these organs (Fig. 9H). These findings provide important insights into the potential clinical application of NR-V04 as a safe and effective immunotherapeutic agent.
Discussion
In this study, we successfully developed NR-V04, a PROTAC that efficiently degrades NR4A1 in the TME. Upon NR-V04 treatment, we observed an increase in the B cell population that primarily consists of plasmablasts, a subset of B cells known for their rapid production and early antibody responses to tumor antigens 45. This increase in plasmablasts is associated with a favorable prognosis and has been observed within tumors 31. Additionally, NR-V04 treatment enhances the expression of both BCR isotypes, IgD+IgM− and IgD+IgM+, suggesting an enhanced B cell response to tumor antigens. An additional advantage of NR-V04's B cell regulation is its selective impact on the TME, sparing the B cells and other immune cells in the spleen and blood. This minimizes potential side effects, such as autoimmune responses triggered by B cell antibodies in peripheral tissues.
It is crucial to acknowledge that the frequency of B cell infiltration varies among tumor types. For example, B16F10 melanoma exhibits high B cell infiltration, accounting for more than 50% of total tumor-infiltrating lymphocytes, whereas Yumm 1.7 shows much lower B cell infiltration, at only 2-3% of total tumor-infiltrating lymphocytes. This variation directly correlates with NR-V04's therapeutic outcomes, implying that NR-V04 may achieve more favorable responses in cancers with high B cell infiltration.
NR-V04's impact on the TME extends beyond regulating B cells to influence anti-tumor immunity more broadly. NR-V04 decreases mMDSCs (CD11B+Ly6C+) in tumors and blood. mMDSCs are known to suppress the immune response, including B cell function 46,47. Furthermore, B cell activation results in immune complex formation that attracts pro-inflammatory cytokines produced by mMDSCs 31,48. Thus, NR-V04's action on mMDSCs can potentially alleviate immune suppression, thereby enhancing B cell-mediated anti-tumor effects. Moreover, NR-V04 significantly increases CD8+ Tem cells in the spleen of B16F10 melanoma-bearing mice, supporting potential systemic protection against tumor recurrence mediated by these CD8+ Tem cells.
The PK-PD-decoupled, long-lasting degradation effect of NR-V04 in vivo (Fig. 6B) is expected for a PROTAC molecule, but it could also suggest that NR-V04 specifically accumulates in tumors, a favorable feature for drug development that allows lower effective doses and longer treatment intervals. The warhead used in NR-V04 is celastrol, which has been associated with several adverse effects, such as hepatotoxicity 49,50, hematopoietic system toxicity 51, nephrotoxicity 52, weight loss, and negative impacts on metabolic and cardiovascular functions 53-55. However, with the same treatment regimen, NR-V04 exhibits an excellent safety profile in vivo. This safety advantage could be attributed to NR-V04's superior specificity as a PROTAC that targets NR4A1 rather than the collection of other known celastrol targets, which reduces off-target effects that could lead to toxicity in patients. One caveat related to NR-V04 is its use of VHL, a well-established tumor suppressor gene 56 that is commonly mutated in human cancers such as RCCs. We have been actively developing NR4A1 PROTACs that recruit other E3 ligases, but at the current stage none of them exhibits a better tumor suppression and safety profile than NR-V04.
Materials and Methods
Chemistry
DMF and DCM were obtained via a solvent purification system by filtering through two columns packed with activated alumina and 4 Å molecular sieves, respectively. Water was purified with a Milli-Q Simplicity 185 Water Purification System (Merck Millipore). All other chemicals and solvents obtained from commercial suppliers were used without further purification. Flash chromatography was performed using silica gel (230-400 mesh) as the stationary phase.
Reaction progress was monitored by thin-layer chromatography (silica-coated glass plates), visualized with 254 nm and 365 nm UV light, and/or by LC-MS. 1H NMR spectra were recorded in CDCl3 or CD3OD at 600 MHz, and 13C NMR spectra were recorded at 151 MHz, on a Bruker spectrometer. Intermediates 1a to 3a were synthesized according to a previously reported procedure 57: briefly, to a solution of the VHL-amine HCl salt (1.0 equiv) and the corresponding acid-terminated linker (1.0 equiv) in 5 mL DCM were added HATU (1.1 equiv) and DIPEA (5.0 equiv), and the reaction was stirred at room temperature overnight. After extraction with EA and washing with brine, the organic layer was concentrated and the residue purified by silica gel column chromatography to yield 1a to 3a.
Quantitative PCR (qPCR)

Pharmacokinetic study of NR-V04
The pharmacokinetic study of NR-V04 was performed by Bioduro Inc. on healthy male C57BL/6 mice weighing between 20 g and 25 g, with three animals in each group.
Test compounds (i.p.) were dissolved in 5% DMSO/3% Tween 80 in PBS with 2 mEq 1N HCl. For syngeneic tumor models, 5 × 10^5 cells (MC38, Yumm 1.7 and Yummer 1.7) or 1 × 10^5 cells (B16F10) were resuspended in phosphate-buffered saline (PBS) and implanted subcutaneously into 7- to 9-week-old mice. For tumor inhibition experiments, mice were treated with 1.8 mg/kg NR-V04 or 0.75 mg/kg celastrol by IP injection twice weekly. For immune profiling experiments, once tumors reached 0.5 cm in diameter, mice were treated with 1.8 mg/kg NR-V04 by IP injection twice weekly. For toxicity experiments, male and female non-tumor-bearing mice were treated with two doses of 2 mg/kg NR-V04 in the first week and two further doses of 5 mg/kg in the second week. Blood samples were taken from the submandibular (facial) vein before each treatment and 42 days later for hematological analysis, and body weight was measured twice per week up to day 42. Mice were euthanized in accordance with the IACUC protocol once the largest tumors reached 2 cm in diameter, and tissues were collected for further analysis.
Flow cytometry
Cells were incubated with FcR blocker (anti-mouse CD16/CD32, clone 2.4G2, BD Biosciences). After surface staining, cells were fixed and permeabilized using the FOXP3/Transcription Factor Staining Buffer Set (eBioscience). Cells were stained with a combination of the following antibodies: anti-mouse FOXP3-eFluor 450 (clone FJK-16S, 1:50, eBioscience), anti-mouse Granzyme B-BV421 (clone GB11), anti-mouse NR4A1-PE (clone 12.14, eBioscience). Human cells were stained with a combination of the following antibodies: anti-human CD45-BV510 (clone H130), anti-human CD3-AF700 (clone HIT5a), anti-human CD4-BV421 (clone OKT4), anti-human CD8-BV711 (clone RPA-T8), anti-human CD127-BV605 (clone A019DS), anti-human CD25-PE-Cy7 (clone MA251), plus FVD-eFluor-780 (eBioscience) and human FcR Blocking Reagent (StemCell Technologies). Cells were washed, then fixed and permeabilized using the eBioscience FOXP3/Transcription Factor Staining Buffer Set. Cells were further stained with a combination of the following antibodies: anti-human FOXP3-FITC (clone 206D), anti-human NR4A1-PE (clone D63C5, Cell Signaling). Flow cytometry was performed on a 3-laser Cytek Aurora cytometer (Cytek Biosciences, Fremont, CA) and analyzed using FlowJo software (BD Biosciences). All antibodies were from BioLegend unless otherwise specified. Most antibodies were used at a 1:100 dilution for flow cytometry, unless otherwise specified.
Hematology analysis
Mouse blood was prepared in micro-centrifuge tubes containing PBS with 10 mM EDTA. Blood indices were analyzed using an automated hematology analyzer (Element HT5, Heska).
H&E staining
Mouse tissues were fixed in 10% formalin (SF98-4, Thermo Fisher) and processed in the UF pathology core. H&E staining was conducted using standard procedures: sections were deparaffinized in xylene, rehydrated through a graded ethanol series, and stained with hematoxylin and eosin on slides.
Statistics
Graphs and statistical analyses were performed using Prism software (GraphPad Software) unless otherwise specified. Tumor growth curves were compared using a two-way analysis of variance (ANOVA). For comparisons involving three or more groups, a one-way ANOVA was conducted, followed by Dunnett's multiple comparison test for specific group comparisons.
Unpaired t tests were used to compare means between two groups.
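To illustrate the group-comparison workflow described in this section, here is a minimal sketch in Python. The paper's analyses were performed in Prism, so this is only an illustrative stand-in: the endpoint measurements are hypothetical, and scipy.stats.dunnett requires SciPy 1.11 or later.

```python
# Sketch of the one-way ANOVA + Dunnett's multiple-comparison workflow
# described above. All numbers are hypothetical; the paper used Prism.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vehicle   = rng.normal(1.0, 0.15, size=7)   # e.g. normalized tumor weight, n = 7
celastrol = rng.normal(0.8, 0.15, size=7)
nr_v04    = rng.normal(0.5, 0.15, size=7)

# Omnibus test across the three groups.
f_stat, p_anova = stats.f_oneway(vehicle, celastrol, nr_v04)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment vs. the vehicle control (SciPy >= 1.11).
dunnett = stats.dunnett(celastrol, nr_v04, control=vehicle)
print("Dunnett p-values (celastrol, NR-V04 vs vehicle):", dunnett.pvalue)
```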
References
40. Biswas S, Mandal G, Payne KK, Anadon CM, Gatenbee CD, Chaurio RA, Costich TL, Moran C, Harro CM, Rigolizzo KE. IgA transcytosis and antigen recognition govern ovarian cancer immunity. Nature. 2021;591(7850):464-70.
41. Bruno TC, Ebner PJ, Moore BL, Squalls OG, Waugh KA, Eruslanov EB, Singhal S, Mitchell JD.
Figure Legends
Figure 5. A. Cells were treated with DMSO, 500 nM celastrol, or 500 nM NR-V04 for 16 hrs; representative images are shown for the PLA assay (20x magnification). B. Co-immunoprecipitation (co-IP) experiment showing complex formation between NR4A1 and VHL upon NR-V04 treatment. Co-IP was performed in NR4A1-Flag-overexpressing HEK293T cells that were pretreated with 0.5 μM MG132 for 10 minutes, followed by 16-hour treatment with DMSO or 500 nM NR-V04; NR4A1 was pulled down using an anti-Flag antibody conjugated to magnetic beads. C. NR-V04 induces NR4A1 degradation in a VHL E3 ligase- and proteasome-dependent manner. CHL-1 cells were pretreated with 0.5 μM MG132 or 10 µM VHL 032 for 10 minutes, followed by 16-hour treatment with DMSO or 500 nM NR-V04. A-C: n=2.
Figure 8. (...) melanoma, n=7. J-K. NR-V04 treatment increased the B cell percentage in the TME, but not in spleen or blood, in Yumm 1.7 melanoma (n=7). E. Two-way ANOVA was performed for the tumor growth curve, with P values indicated. Other data are shown as mean ± SD; a two-sided unpaired t test was performed, with P values indicated.
Figure 9. NR-V04 has minimal toxicity. A. Schematic of the toxicity testing. Male and female mice were treated with two doses of 2 mg/kg NR-V04 and two doses of 5 mg/kg NR-V04 over two weeks. Blood samples were collected on days 0, 7, 14, and 42 for hematology analysis, and body weight was measured twice per week. After day 42, all mice were euthanized, and tissues (kidney, liver, and small intestine) were harvested for H&E staining (n=3). B. Mice did not experience significant weight loss with NR-V04 treatment during the 42-day period. C-G.
Hematology analysis of different blood cell components after NR-V04 or vehicle treatment, including (C) whole blood cells, (D) lymphocytes, (E) neutrophils, (F) red blood cells, and (G) platelets. H. Impact of NR-V04 on tissue histology, with representative images of kidney, liver, and small intestine.
Figure 1. Figure 2. Figure 3. Figure 4. Figure 5. Figure 6. Figure 7. Figure 8. Figure 9.
Supplementary Information. Supplementary to Figure 8: NR4A1 expression in B16F10 tumors. B16F10 tumor-bearing mice were treated with vehicle and NR-V04 as in Fig. 8. n = 7.
Figure S1. Figure S2. Figure S3. Figure S4. Figure S5. Figure S6. Figure S7. Figure S8. Figure S9. | 2023-08-17T13:12:28.726Z | 2023-08-13T00:00:00.000 | {
"year": 2023,
"sha1": "4a47bdf2a87c988ac6fb3091ecc6209c2996523e",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/08/13/2023.08.09.552650.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "697c13262989bdc4d40985528923509d7f285a5b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
252448521 | pes2o/s2orc | v3-fos-license | Cytological Grading of Breast Carcinomas and Its Prognostic Implications
Introduction: Determining the histological grade of breast carcinomas before mastectomy is necessary to decide about neoadjuvant chemotherapy. Core needle biopsies used for this purpose often under-grade the tumour. The grade obtained from fine needle aspiration cytology samples will help in such situations and whenever biopsy is not done, as in a resource-poor setup. Many studies are being done to find the cytological grading system that correlates best with histological grading.
Methods: This study was done between 2016 and 2019, including cases in which both modified radical mastectomy and fine needle aspiration of the tumour had been done. Robinson's cytological grading was done on Papanicolaou and haematoxylin & eosin (H&E) stained cytology smears and correlated with modified Bloom-Richardson histological grading done on modified radical mastectomy specimens. We also studied the prognostic significance of Robinson's method by studying the association between cytological grade and lymph node metastasis.
Results: Sixty cases were studied. The two methods had the same grade in 49 (81.7%) cases. They showed a significant positive correlation (Spearman correlation coefficient 0.848, p-0.0001), significant association (Chi-square test, p-0.0001), and substantial agreement (kappa value 0.72). Multiple regression analysis showed the chromatin score and nucleoli score as the most influential parameters. Lymph node metastasis showed significant association with cytological grade (p-0.0003), cell dissociation score (p-0.0001), nucleoli score (p-0.01), and chromatin score (p-0.04).
Conclusion: Robinson's cytological grading is a simple, reliable adjunct/alternative to core needle biopsies for grading breast carcinomas before mastectomy. Hence, it can be made a part of routine cytology reporting of breast carcinomas. Further long-term studies will help in confirming its prognostic significance.
Introduction
According to the GLOBOCAN 2020 statistics, breast cancer is the most common cancer among females as well as their leading cause of cancer-related mortality [1]. The prognostic significance of histological grading in invasive breast carcinomas is well established, and grading is routinely performed in almost all centres [2,3]. In breast carcinomas, histological grading done on excised tumour specimens is usually taken as the gold standard [4,5]. However, tumour grade in excision specimens may change or become difficult to assess following neoadjuvant chemotherapy. In such circumstances, it becomes essential to know the actual pre-chemotherapy grade of the breast carcinoma by other means to determine the prognosis of the patient [6,7]. Similarly, it may be necessary to know the grade of a breast carcinoma before tumour excision to decide about neoadjuvant therapy [4,5,8]. In these circumstances, the grade of the breast carcinoma can be obtained only from fine needle aspiration cytology (FNAC) smears or core needle biopsy (CNB) specimens taken before mastectomy. Grading obtained from CNB specimens, although widely used, has shown a concordance of only 59-75% with the final histological grade in previous studies [4,9,10]. Hence, there is a need for a cytological grading system that can complement CNB grading. Many methods have been suggested for the cytological grading of breast carcinomas [11,12]. Recent studies have shown that the grading method suggested by Robinson et al strongly correlates with the histological grade [13][14][15][16].

The aims of this study were to grade invasive breast carcinomas in FNAC smears by Robinson's method and
The aims of this study were to grade invasive breast carcinomas in FNAC smears by Robinson's method and 1 1 1 correlate it with the final histological grade obtained from tumour excision specimens. We also studied the possible prognostic significance of Robinson's method by studying its association with lymph node metastasis.
Materials And Methods
This study was conducted between 2016 and 2019 in SRM Medical College Hospital and Research Centre, a tertiary care health institution in Chengalpattu, Tamil Nadu, India. Institutional Review Board approval was obtained from the Institutional Scientific and Ethical Committee of SRM Medical College Hospital and Research Centre (approval number: 1024/IEC/2016). Cases of invasive breast carcinoma in which both modified radical mastectomy (MRM) and FNAC of the breast lump had been done in our institution were included in our study. All MRM specimens were received in the histopathology lab within six hours after excision. Informed consent was obtained from the patients before their inclusion. Samples received after chemotherapy were excluded from our study. Cases in which the FNAC had inadequate material, with fewer than six clusters, were also excluded. In all patients, fine needle aspiration (FNA) was performed with a 20 ml disposable syringe and a 22-gauge needle, using the aspiration technique with multidirectional passes. FNA smears were stained with Papanicolaou and haematoxylin & eosin (H&E) stains. In Robinson's cytological grading system, six cytological parameters, namely cell dissociation, cell size, cell uniformity, nucleoli, nuclear margin and nuclear chromatin, are used to grade the tumour (Table 1). Statistical analysis was done using IBM SPSS Statistics for Windows, Version 22.0 (Released 2013; Armonk, New York, United States). Associations between the two grading systems, between cytological grade and lymph node metastasis, and between individual cytological parameters and lymph node metastasis were assessed using the Chi-square test. The kappa value was used to measure the strength of agreement between the two grading systems. The correlation between the two grading systems was assessed by Spearman's rank correlation coefficient. Multiple linear regression analysis was done to find the most influential parameters for cytological grading. A p-value of less than 0.05 was considered statistically significant.
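To make the analysis plan above concrete, here is a minimal sketch of the agreement statistics in Python. The study itself used SPSS v22, so this is only an illustrative stand-in, and the paired grades below are hypothetical placeholders rather than the study's data.

```python
# Sketch of the agreement statistics described above, on hypothetical paired
# cytology/histology grades (1-3); the study used SPSS v22, not this code.
import numpy as np
from scipy.stats import spearmanr, chi2_contingency
from sklearn.metrics import cohen_kappa_score

cyto  = np.array([1, 1, 2, 2, 2, 3, 3, 1, 2, 3, 3, 2])   # hypothetical grades
histo = np.array([1, 1, 2, 2, 3, 3, 3, 1, 2, 3, 2, 2])

rho, p_rho = spearmanr(cyto, histo)          # Spearman's rank correlation
kappa = cohen_kappa_score(cyto, histo)       # strength of agreement

# Chi-square test of association on the 3x3 contingency table.
table = np.zeros((3, 3), dtype=int)
for c, h in zip(cyto, histo):
    table[c - 1, h - 1] += 1
chi2, p_chi2, dof, _ = chi2_contingency(table)

concordance = np.mean(cyto == histo)
print(f"concordance {concordance:.1%}, rho {rho:.2f} (p={p_rho:.3f}), "
      f"kappa {kappa:.2f}, chi-square p={p_chi2:.3f}")
```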
Results
Sixty-four cases of invasive breast carcinoma in which both MRM and FNAC had been done were identified during the study period. Among them, four had undergone chemotherapy before MRM and were excluded from our study. All 60 remaining patients were females. Their ages ranged from 35 to 82 years, with a mean age of 54.3 years. Grade II was the most common histological grade in our study (27 cases, 45%), followed by grade III (19 cases, 31.7%) and grade I (14 cases, 23.3%). Robinson's cytological grading was done in these cases (Figures 1-6). Grade II was the most common grade in Robinson's cytological grading as well (24 cases, 40%), followed by grade III (18 cases, 30%) and grade I (18 cases, 30%). A comparison between histological and cytological grading in these 60 cases is shown in Table 3.
FIGURE 1: Grade I tumour - cell arrangement
Robinson's cytological grade was in absolute concordance with the histological grade in 49 cases (81.7%). Even in all 11 discordant cases, the cytological grade was only one grade higher or lower than the histological grade. Robinson's cytological grading and NGS (Nottingham grading system) histological grading showed a significant association on the Chi-square test, with a p-value of 0.0001. The two grading systems showed a strong, positive correlation, with a Spearman's rank correlation coefficient of 0.848 (p-value 0.0001). The two methods also showed substantial agreement, with a kappa value of 0.72. In multiple linear regression analysis, all cytological parameters except the nuclear margin score were significantly influential in predicting the final cytological grade. The most influential parameters were the nucleoli score and the chromatin score, with a p-value of 0.0001 (Table 4). The proportion of cases with lymph node metastasis increased with increasing cytological grade (Table 5). This difference was statistically significant on the Chi-square test, with a p-value of 0.0003. When we studied the association between individual cytological parameters and lymph node metastasis by the Chi-square test, we found that the cell dissociation score (p-0.0001), nucleoli score (p-0.001) and chromatin score (p-0.04) showed a statistically significant association.
Discussion
The role of cytology is no longer limited to diagnosis. Significant prognostic information, including even hormonal receptor expression in breast carcinomas, can be assessed from FNAC smears [17]. If the grade of the tumour, too, can be obtained from FNAC smears, it will be immensely useful to the treating clinician. It will act as a complement to the CNB grade whenever it is necessary to know the grade of the breast carcinoma before excision of the tumour. Recent studies have shown a high concordance between histological grading and Robinson's cytological grading, ranging from 78% to 88% [13][14][15][16]. Our study has also shown a high concordance of 81.7%, reiterating the validity of this method. This in turn indirectly indicates that Robinson's cytological grade could be equal to or better than the CNB grade in predicting the actual histological grade of the tumour. [14,15]. This ability to distinguish low-grade tumours from high-grade tumours is of clinical significance, since low-grade tumours are often resistant to chemotherapy whereas high-grade tumours respond well [4,5]. [14,18]. Thus cytological grading can identify grade III tumours that were missed by CNB and can be used as an adjunct to CNB. This will help such patients to undergo chemotherapy who otherwise might not have received it.
Concordance between the two grading systems in different grades of tumours
Among the histological grade I cases, most (92.9%) were identified correctly as grade I in cytology in our study. Earlier studies have shown similar findings [15,18]. Thus, Robinson's cytological grading can help in excluding patients with low-grade tumours from neoadjuvant chemotherapy and prevent unnecessary adverse effects. This will be particularly useful when CNB has not been done in the patient, as may be the case in developing and resource-poor countries where usually only either FNAC or CNB is done.
In our study, the lowest concordance between cytological and histological grading was noted for the grade II tumours (74.1%), which was still in the acceptable range. Phukan
Possible reasons for the discordance noted in a few cases
Among the 11 cases that showed discordance between the two methods, eight were under-graded and three were over-graded by Robinson's cytological method. Five grade II cases and three grade III cases had been under-graded. When these slides were re-examined, it was noted that in most of the under-graded cases, a high pleomorphism score (score 3) was seen in histology but the cells showed less atypical features in the cytology smears. Probably the less atypical areas were inadvertently sampled during the FNAC procedure in these cases. Ensuring multidirectional needle passes during the FNAC procedure might help in further improving the accuracy. Such sampling issues can be more common in CNB due to the inherent nature of the procedure. This could also be one of the reasons for the lower concordance noted in CNB grading. Such under-scoring can also happen for the cell dissociation score if only the more cohesive areas are sampled in FNAC. We could not determine the reasons for the over-grading noted in three cases. Sampling only the least cohesive areas, or applying more pressure during smear preparation, can result in a high cell dissociation score and thus over-grading of some tumours.
Robinson's cytological grading and NGS histological grading are two very different methods for breast carcinoma grading where the parameters used for grade assessment are different. Hence, achieving 100% concordance between the two entirely different grading systems might not be feasible. But still, a high concordance rate can be achieved with Robinson's method as found in our study. Extensive multidirectional sampling during the FNAC procedure, applying optimal pressure during cytological smear preparation, immediate alcohol fixation of the cytological smears, careful attention to the microscopic findings, and strict adherence to the grading criteria will help in improving the accuracy of Robinson's cytological grading.
Parameters influencing the cytological grade of the tumour
In multiple regression analysis, all parameters except the nuclear margin score were significantly influential in determining the final cytological grade in our study. The nucleoli score and chromatin score were the most influential among them. These two were found to be among the most influential factors determining the cytological grade in the study by Khan et al. as well along with cell dissociation score [19].
Association between cytological grade and lymph node metastasis
Lymph node metastasis is one of the major prognostic factors in breast carcinomas [20,21]. Hence, we indirectly studied the prognostic significance of Robinson's cytological grading by studying its association with nodal metastasis. An increase in cytological grade was strongly associated (p-value of 0.0003) with the increased percentage of lymph node metastasis in our study. A similar strong association of cytological grade with lymph node metastasis has been noted in the previous studies by Robles-Frias et al. and Khan et al. [19,22]. Only a few studies are available studying the relationship between individual cytological parameters and lymph node metastasis. Robles-Frias et al. and Lingegowda et al. found a significant association between cell dissociation, cell uniformity, and nuclear margin with lymph node metastasis in their studies [22,23]. The strongest association with lymph node metastasis was for the cell dissociation score in our study (p-value of 0.0001). E-cadherin expression in the tumour cells has been shown to have a significant relationship with the cell dissociation score in cytology smears [24]. Alteration in E-cadherin expression is believed to affect the cohesiveness of the tumour cells resulting in cell dissociation. Loss of cell cohesion facilitates the metastatic cascade as the tumour cells can easily dissociate and enter the lymphatic vessels.
Our study had a few limitations. Our results were based on the analysis of 60 cases of invasive breast carcinoma. The number of cases in our study was comparable to most of the previous studies on Robinson's cytological grading; still, we would have liked to study more cases to add further weight to our results. There are many large-scale studies on histological grading, since it is used routinely; such large-scale studies on cytological grading will become possible in the future if cytological grading is made part of routine reporting. Another limitation of our study was that we were not able to compare the cytological grade with the CNB grade. To our knowledge, there is no study in the literature comparing the cytological grade with the CNB grade, and ours would have been the first of its kind, but we were not able to do it because all three types of samples (FNAC, CNB and excision specimen) were not available together in many of our cases.
Conclusions
Histological grade and lymph node metastasis are two very important prognostic factors in breast carcinoma. Our results showed that Robinson's cytological grading has a strong relationship with both of these prognostic factors, indicating the prognostic significance of this grading system. Performing this simple cytological grading will add prognostic value to FNAC in breast carcinomas. In places where immunohistochemistry has been standardized, hormonal receptor status can also be studied from FNAC smears along with cytological grading, enabling FNAC to serve as a major prognostic tool. Larger studies in the future, with long-term follow-up, will help confirm the impact of cytological grade on overall survival and disease-free survival in breast carcinoma patients.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Institutional Ethics Committee, SRM Medical College Hospital and Research Centre, Chengalpattu, Tamil Nadu, India issued approval 1024/IEC/2016. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-09-23T15:01:25.499Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "0a4648390ddf33037e2b82ddb76f01ec6b94dd42",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/113953-cytological-grading-of-breast-carcinomas-and-its-prognostic-implications.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3f7dd718dc6bd3ff21ff2457152805edfc2f6e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
53226950 | pes2o/s2orc | v3-fos-license | Contact allergy to fragrances: current clinical and regulatory trends
Abstract. Several fragrances are important contact allergens. Compared to the immense multitude of more than 2,500 fragrances used in cosmetics, the spectrum of single substances and natural extracts used for patch testing appears limited, albeit comprising the supposedly most important contact allergens. The present review summarizes the most important results of the opinion of the Scientific Committee on Consumer Safety on fragrance allergens in cosmetic products from July 2012. Clinical results beyond the abovementioned screening allergens, animal results in terms of the LLNA, and structure-activity considerations point to 100 single substances and extracts, respectively, which, in addition to those 26 already identified, must be considered contact allergens, and the presence of which should be declared in cosmetics. In the case of the most commonly used fragrance terpenes, limonene and linalool, hydroperoxides resulting from autoxidation constitute the major allergens. These have become available as patch test material recently. Altogether, 12 single substances have caused a (very) high number of published cases of sensitization. Thus their use concentration should be (further) reduced or, in the case of hydroxyisohexyl 3-cyclohexene carboxaldehyde (HICC, e.g., Lyral®), use should be abandoned altogether. This is also recommended in the case of oak moss and tree moss, due to their content of the strong sensitizers atranol and chloroatranol. As a generic maximum dose for the remaining 11 single substances, 0.8 µg/cm² is suggested, which corresponds, under conservative assumptions, to a maximum concentration of 100 ppm in the finished product.
This review paper summarizes the current knowledge of contact allergies to fragrances. It is mainly based on the opinion of the Scientific Committee on Consumer Safety (SCCS) published in July 2012 (http://ec.europa.eu/health/scientific_committees/consumer_safety/docs/sccs_o_102.pdf; last accessed May 13, 2013; [1]). While the clinical and allergological basics are assumed to be known to the reader, the clinical epidemiology of the most common fragrance contact allergens is presented in a more detailed way than in [2]. Furthermore, experimental data (LLNA) and knowledge on the (bio-)activation of substances and haptens, as well as chemical considerations of structure-effect relationships, are used to identify fragrances that pose a particular problem and make step-by-step preventive measures necessary. To keep the list of references concise, only selected, exemplary references were included; for further information and a complete list of references, please refer to [1] and to the above-mentioned opinion of the SCCS, which is available as an open-access publication on the above-mentioned website. The review paper presented here does not cover substances or extracts that are banned from use in cosmetic products (Annex II of the Cosmetics Regulation) [3].
Allergens for screening
A mixture of fragrances, as used in a perfume or as the perfume component of a cosmetic product, contains several to several hundred single fragrances. Two mixtures, consisting of what have been defined to be the most common fragrance allergens, plus (for about 10 years) one single synthetic fragrance, are currently used as the patch test standard series for clinical diagnosis. For several decades, fragrance mix I has been used, a mixture of 1% each of the 7 synthetic substances (INCI nomenclature) amyl cinnamal, cinnamyl alcohol, cinnamal, eugenol, geraniol, hydroxycitronellal, and isoeugenol, plus oak moss (Evernia prunastri), in petrolatum, together with 5% sorbitan sesquioleate as an emulsifier. In Europe, the prevalence of sensitization in consecutively-tested patients lies between 4.5% and 14.8%; worldwide, the differences are even larger. In central Europe, the frequency was 7.3% for the years 2005-2008, according to data collected by the Information Network of Departments of Dermatology (IVDK) [4]. The prevalence of sensitization in the general population lies between 1% and 3%, according to most studies.
The most important ingredient in fragrance mix II is hydroxyisohexyl 3-cyclohexene carboxaldehyde (HICC, also known as Lyral®), which, due to its significance, is additionally tested at 5% (pet.) in the standard series. Around the year 2000, high concentrations of HICC were used in cosmetic products, e.g., in deodorants. This led to a downright epidemic of HICC sensitizations, which still has not been sufficiently controlled by the self-regulatory measures applied by the industry ("IFRA standards"). In central Europe (according to IVDK data), the prevalence of HICC sensitization was almost 2.0% in 2011 [7]; in Denmark, it was 2.5% [8]. Interestingly, there are important differences among European countries, with a lower prevalence in the south [9]; in the USA, sensitization to HICC is also significantly less frequent [10], which suggests marked differences regarding exposure (use in products, consumer habits).
Another mixture that has been used as a screening allergen for years is Balsam of Peru (Myroxylon pereirae, INCI). While the balsam as such is not used in cosmetic products in Europe, extracts and distillates are [11]. Furthermore, exposure through topical drugs has to be considered in some regions. With a prevalence of sensitization between 3.9% and 8.0% in consecutively-tested patients in Europe and strong associations with other fragrance allergens, Balsam of Peru is a "traditional" but still common allergen, although the composition and the role of individual ingredients as sensitizing agents have not yet been fully explained. Turpentine, as an allergen, is significantly less common; currently, the prevalence of sensitization in consecutively-tested patients is usually no higher than 2%. The content of relevant substances varies widely according to origin; nevertheless, turpentine is a common raw material in the perfume industry and contains substances (terpenes) that also occur in fragrance materials from other sources.
Activation of substances to sensitizers: pre- and prohaptens
To our current knowledge, most fragrances are haptens, which become allergens after binding to proteins and are then able to induce an immune reaction (sensitization and subsequent elicitation). Some fragrances need to be activated before they can bind to proteins. If this activation takes place outside the body, for example by autoxidation or photoactivation, the substance is a prehapten. Prohaptens, on the other hand, are transformed into immunogenic haptens within the skin, usually by enzyme catalysis. It is not always clear whether a substance is a prehapten, a prohapten, or both, as both activation pathways can result in the same products, as in the case of geraniol (geranial, epoxy-geraniol, and epoxy-geranial) [12,13].
From an allergological point of view, the most common reaction products of prehaptens are hydroperoxides, but secondary reaction products like aldehydes and epoxides can also contribute to the sensitizing potential [14]. In animal experiments, the oxidation products of terpenes like limonene, linalool, geraniol, and linalyl acetate, which are frequently used as fragrances, have been identified as markedly more potent allergens than the non-oxidized raw substances. These results concur with clinical trials in which patch tests using oxidized terpenes resulted in a significantly higher prevalence of sensitization than patch tests using non-oxidized material. Interestingly, the oxidation of different substances results in identical, or at least similar, reaction products, which could explain cross-reactivity. As oxidation can be avoided or at least delayed by the addition of antioxidant agents, these are used more and more frequently. However, it has to be closely monitored whether the antioxidant agents, like the frequently used butylated hydroxytoluene, can themselves cause allergies.
Various enzyme systems in the skin are able to metabolize foreign substances (xenobiotics), including prohaptens. The aim is "detoxification"; what can happen, however, is a transient increase in the harmfulness of a substance in terms of a sensitizing effect. The influence on allergenicity has so far been investigated in relatively few substances, e.g., α-terpenes, geraniol, cinnamyl alcohol, eugenol, and isoeugenol. Predictive in-vitro tests, which will gain importance once animal experiments on ingredients of cosmetic products are phased out, have so far not included this aspect. In clinical practice, i.e., for patients, the process of bioactivation is of high importance, as it leads to the necessity of taking into account exposure to mother substances that produce the reaction product against which sensitization is present (e.g., isoeugenol acetate yields isoeugenol after scission of the ester bond, and cinnamyl alcohol is metabolized to cinnamal) [15,16].
Clinical results
The SCCS opinion followed a structured approach in its evaluation of whether and to what extent a fragrance substance or mixture has to be regarded as allergically relevant [1]. The first step was to sift through the publications on clinical cases of sensitization. When at least two independent centers reported either well-documented case reports or several positive patch test results in a series of patients, the substance or extract was categorized as an "established allergen in humans". The results are presented in Tables 1 and 2. Only if no clear classification could be obtained based on human data, which, if sufficiently validated, is always preferred to other data, were results from animal experiments and structural chemistry additionally taken into account (see below).
Structure activity relationship (SAR)
The ability of a substance to act as a hapten, be it after (bio-)activation, significantly depends on its capacity to bind to skin proteins. This characteristic can frequently be deduced from the chemical structure of the molecule when "structural alerts" are observed [21]. A further option is to study quantitative structure activity relationships (QSAR); this investigation is based on experimental findings on reactivity and other substance-specific data. However, for many fragrances, no quantitative data are available. Furthermore, the sometimes decisive (bio-)activation [14] makes valid modeling difficult. Therefore, fragrances that are important in terms of exposure, but for which insufficient human or experimental data were present, were categorized for the SCCS opinion based on the concurring expert opinion of the involved chemists. Table 4 lists those fragrances for which a sensitizing potency is predicted ("++") or possible ("+") and for which additionally (i) human data are present that alone were not sufficient to categorize a substance as an "established allergen in humans" or (ii) findings from animal experiments suggest an important sensitizing potency. The latter was demonstrated not by the above-mentioned, separately-considered experimental studies, but rather on the basis of an "R43" label according to REACH.
Exposure
Skin contact with fragrances can occur through the personal use of cosmetic or household products, but it can also take place when using pharmaceutical products or occupational substances, through close contact with other people, and even via the air. In addition to a substance's intrinsic allergenic potency, the following exposure factors are important for the risk of sensitization or elicitation: area dose (usually presented as µg/cm2), vehicle effects, the simultaneous presence of irritants or further potential allergens, the time and frequency of exposure, localization, skin status, and occlusion (e.g., in skin folds, under clothing or personal protective equipment). In a series of studies, either the qualitative formulations (INCI declaration, e.g., [22,23]) or, by chemical analysis, the quantitative compositions [24,25] were investigated with regard to relevant fragrances. The most frequently identified substances were, with certain differences between the types of products, limonene and linalool. The relatively limited quantitative data show that the content of the most common allergens in perfumes and deodorants has markedly decreased [24]. However, it was also found that the mean concentration of atranol, one of the most common allergens in oak moss and tree moss, actually increased from 2004 to 2007, while the chloroatranol concentration decreased [25].
Some fragrances can, for example, be used as repellants, insecticides, or bactericides (see, e.g., biocide directive 98/8/EC). The use of benzyl benzoate as a scabicide, farnesol as a bacteria-inhibiting additive in deodorants, or benzyl alcohol as an antioxidant in external agents are only three of the better-known examples. This leads to additional routes of exposure to these fragrances beyond their use in cosmetic products and also beyond their usual function as a fragrance. The same holds true for the use of certain fragrances or natural extracts in aromatherapy, massage oils, or the like. With regard to exposure from various sources, it has to be taken into account that particularly the hands are exposed not only to fragrances but also to other allergens when applying body lotions, facial creams, or other products. This is called "aggregate exposure"; through cumulative effects, critical area doses can be exceeded, thus facilitating sensitization or elicitation.

Table 3. Results of the local lymph node assay (LLNA) for fragrances that have not been categorized as "established allergen in humans" (for a presentation of all substances see [1,17]).
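The notion of "aggregate exposure" lends itself to a simple worked calculation: summing the area dose of a single allergen over all products applied to the same skin site. The sketch below illustrates the arithmetic only; the product list, per-application amounts and allergen contents are illustrative assumptions, not values from the studies cited above.

```python
# Aggregate exposure: sum the area dose of one allergen over several
# leave-on products applied to the same skin site (e.g., the hands).
# All product names, applied amounts and allergen contents below are
# illustrative assumptions, not measured values.

products = [
    # (product, applied amount in mg/cm2 per day, allergen content in ppm)
    ("body lotion", 8.0, 50),
    ("facial cream", 2.0, 30),
    ("deodorant", 1.5, 80),
]

def allergen_area_dose(applied_mg_per_cm2: float, content_ppm: float) -> float:
    """Allergen area dose in ug/cm2: mg/cm2 * (ppm / 1e6) * 1000 ug/mg."""
    return applied_mg_per_cm2 * content_ppm / 1e6 * 1000.0

total = sum(allergen_area_dose(amount, ppm) for _, amount, ppm in products)
print(f"aggregate area dose: {total:.2f} ug/cm2")  # compare against an ED10
```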
Dose-effect relationships and thresholds
In general, risk estimation is based on data on hazard (i.e., sensitization potency), exposure, and the dose-effect relationship at induction. For ethical reasons, human induction studies are objectionable today, and the industry only uses them to verify an elsewhere-deduced "no effect level" (NOEL); therefore, usually no cases of sensitization are observed, and it also has to be taken into account that the sample sizes are always very small. Thus, only data on elicitation (i.e., studies in sensitized patients) are available to evaluate dose-effect relationships. Ideally, these kinds of studies would be (i) available for all relevant (i.e., problematic) fragrances, (ii) performed as a repetitive open application test (ROAT) according to the standardized guidelines for cosmetic application [26], and (iii) carried out for various types of products. An area dose that does not lead to an allergic reaction in most sensitized patients (e.g., an "eliciting dose (ED)10", which is tolerated by 90% of patients) can usually be regarded as safe with regard to the primary prevention of induction.
However, a ROAT study design is highly complex, so that eliciting thresholds are available for only a few fragrances:
- Isoeugenol at a concentration of 63 ppm in deodorants leads to an allergic reaction in 3/13 sensitized patients. In a ROAT study that used ethanol as a vehicle (representing "hydroalcoholic" perfume bases), 2.2 µg/cm2 triggered a reaction in 42% of isoeugenol-sensitized patients in one investigation, and 5.6 µg/cm2 triggered a reaction in 63% in another investigation.
- Cinnamal at 320 ppm in a deodorant triggered allergic contact eczema in 2/8 sensitized patients, 100 ppm triggered the same reaction in 1/9 sensitized patients, and a ROAT using 0.1% in ethanol triggered allergic contact eczema in 44% of sensitized patients.
- An ED10 of 4.9 µg/cm2 for a cream base was detected in a larger ROAT study carried out by the IVDK [23]. This corresponds to concentrations of 270 ppm (alcohol base) and 88 ppm (cream). In a further ROAT study, 15.3 µg/cm2 in ethanol led to a positive reaction in 61% of patients; using an ethanol/water mixture, the ED10 was found to be 0.064 µg/cm2 in another investigation.

Table 4. Fragrances for which only single-center clinical data are available or for which "R43" labeling ("none*") plus a sensitization potency according to SAR analysis is possible ("+") or probable ("++").
Chloroatranol, the allergologically relevant component of Evernia prunastri (oak moss), led to an allergic reaction in 92% of oak moss-sensitized patients at the particularly low area dose of 0.025 µg/cm2. In ROAT, even extracts in which the atranol and chloroatranol contents had been reduced to 75 ppm (3.4%) and 25 ppm (1.8%), respectively, triggered allergic reactions in most patients with a sensitization against oak moss [27], so that a sufficient reduction of allergens does not seem to be achievable by this means.
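Threshold values such as the ED10 are typically read off a fitted dose-response curve. The sketch below shows one way this could be done: a two-parameter log-logistic curve is fitted to elicitation data and inverted at a 10% response. The dose-response pairs, function names and starting values are hypothetical, not the study data quoted above.

```python
# Estimating an eliciting dose ED10 by fitting a log-logistic
# dose-response curve to (area dose, fraction reacting) data.
# The data points below are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.1, 1.0, 5.6, 15.3])         # ug/cm2, assumed
fractions = np.array([0.05, 0.20, 0.63, 0.85])  # fraction reacting, assumed

def log_logistic(d, ed50, slope):
    """Fraction of sensitized patients reacting at area dose d."""
    return 1.0 / (1.0 + (ed50 / d) ** slope)

(ed50, slope), _ = curve_fit(log_logistic, doses, fractions, p0=(3.0, 1.0))

# Invert the fitted curve at a 10 % response level to obtain the ED10.
ed10 = ed50 * (0.10 / 0.90) ** (1.0 / slope)
print(f"ED50 = {ed50:.2f} ug/cm2, ED10 = {ed10:.2f} ug/cm2")
```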
As the data are incomplete and cannot be applied to each fragrance, the SCCS opinion suggests using a generic threshold of 0.8 µg/cm 2 . This value is based on the observation that this area dose can be regarded as the mean ED10 in several other allergens, including metals and biocides. Because a certain area dose corresponds to different concentrations in different products (depending on the base, frequency of use, etc. [26]), the suggested threshold value of 0.8 µg/cm 2 was translated to a maximum concentration of 100 ppm (0.01%) based on the most critical base, i.e., deodorants.
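The translation from a generic area dose to a maximum product concentration amounts to a division by the product load per unit skin area. The sketch below only illustrates the arithmetic; the cumulative product load of 8 mg/cm2 is an assumption chosen because it reproduces the cited 0.8 µg/cm2 to 100 ppm mapping, and is not a figure taken from the SCCS opinion.

```python
# Translating a "safe" allergen area dose into a maximum concentration in
# the finished product. The product load per skin area is the key input;
# the 8 mg/cm2 used here is an assumption for illustration only.
area_dose_ug_per_cm2 = 0.8     # generic threshold discussed above
product_load_mg_per_cm2 = 8.0  # assumed cumulative product load

# ppm = ug allergen per g product; 1 mg/cm2 of product = 1e-3 g/cm2.
max_conc_ppm = area_dose_ug_per_cm2 / (product_load_mg_per_cm2 * 1e-3)
print(f"maximum allowed concentration: {max_conc_ppm:.0f} ppm")  # -> 100
```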
Prevention
In fragrance contact allergy, as in general, a distinction between primary and secondary prevention is possible. While primary prevention aims to avoid sensitization from the beginning, secondary prevention tries to avoid relapses, i.e., episodes of allergic contact eczema, in sensitized patients.
For primary prevention, there are various measures, some of which are carried out even before the market launch of a fragrance: substances that turn out to be (too) sensitizing are excluded from use in cosmetic products (see CosIng, entries in Annex II of the Cosmetics Directive). Unfortunately, these screening mechanisms are not perfect, so that many fragrances with a known sensitizing effect are present in cosmetics and other consumer products (see above). Thus, it is necessary to monitor contact allergies in post-marketing surveillance programs in order to detect problematic substances and carry out the necessary interventions. The latter are primarily the limitation of the maximum allowed concentration and, if this measure is not sufficiently effective, a ban of the substance in question. In an effort towards self-regulation, the industry, through its international association IFRA (www.ifraorg.org), has developed numerous standards for problematic substances. However, these standards are nonbinding, cover most but not all companies, and do not always adhere to clinicoepidemiological findings with sufficient consistency and timeliness. Therefore, the SCCS opinion found it necessary to limit the concentration of 12 individual substances, which were considered to be particularly problematic (Table 1, bold print), to the above-mentioned generic maximum concentration. For natural extracts, a limitation of the concentration did not seem feasible because of the lack of data and their varying composition; exceptions are the 12 above-mentioned problematic ingredients, even when they are used in extracts, if their concentration in the final product exceeds the proposed threshold value. It has been recommended not to use HICC and atranol-/chloroatranol-containing extracts from Evernia spp. in cosmetic products, because previous efforts to limit their concentration were not sufficiently effective.
Successful secondary prevention is based on an adequate diagnostic work-up. Only if the substances suspected to have caused the allergic contact eczema are (i) identified and (ii) tested on the skin can exposure to these agents be avoided in the future. Thus, secondary prevention relies on information on ingredients, in this case mainly of cosmetic products, but in general these can also be, e.g., occupational substances. With regard to cosmetic products, the introduction of the INCI declaration has led to significant progress, as long as allergists use exactly the INCI terminology to inform the patient, e.g., in an "allergy pass". When the INCI declaration was introduced, the individual fragrances (if used as a perfume and not, like benzyl alcohol, as an antioxidant agent or, like farnesol, as an antimicrobial additive) were not itemized but globally denoted "perfume". The first step to limit this privilege of "non-information" was the introduction of the requirement to label 24 fragrances and 2 extracts [29]. The current SCCS opinion has identified 71 further individual substances and 29 further extracts that (i) are "established allergens in humans" (Tables 1 and 2), (ii) are shown to be sensitizing in the LLNA (Table 3), or (iii) have a high probability of being sensitizing agents (Table 4). For this reason, the requirement for labeling should be extended to 127 substances or extracts. How exactly this could be done, apart from or in addition to the current labeling policy on product packages, remains to be discussed. Furthermore, allergists and manufacturers of patch tests face the challenge of having to develop a relatively high number of new formulations to further optimize the diagnostic work-up. Some extracts have already been made available, and three of them have been used by the German contact allergy group or the IVDK within the standard series; it was found that the tested allergens were frequent allergens [30]. An optimized diagnostic work-up is possible, at least theoretically, if both the requirement for labeling and the range of available patch test substances include further important fragrance allergens. Whether this level of diagnostic work-up can be made available in every dermatology practice or only in more specialized institutions, even at the currently reached stage (26 "Annex fragrances"), remains to be discussed elsewhere. | 2018-11-15T17:36:51.720Z | 2017-08-04T00:00:00.000 | {
"year": 2017,
"sha1": "43a4539fd748ae0b3a6bd02aa1d45a23017a7826",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc6040011?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "43a4539fd748ae0b3a6bd02aa1d45a23017a7826",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
56039458 | pes2o/s2orc | v3-fos-license | Optimization of Draft Tube Position in a Spouted Bed Reactor using Response Surface Methodology
Optimization of the draft tube position in a spouted bed reactor used for the treatment of wastewater containing low concentrations of heavy metals is investigated in this paper. Response surface methodology is used to optimize the draft tube height, the draft tube width and the gap between the bottom of the draft tube and the inlet nozzle. It is observed that a draft tube with a height of 60 millimeters, a width of 12 millimeters and a gap of 13 millimeters between its bottom and the inlet nozzle results in the optimum value of the minimum spouting velocity, measured at 45 cubic centimeters per second (2.7 liters per minute).
Introduction
Low concentrations of heavy metals in contaminated wastewater result in low reaction rates over the electrode surface area, and thus special considerations are necessary for reactor selection and design. Some of the most important requirements of these reactors are [1]:
• Large active surface area per unit reactor volume
• High mass transfer rate
• High current efficiency
• High current density
• Low cell voltage
• Uniform distribution of electrode potential
• Low maintenance cost
The spouted bed electrode studied at Berkeley in a collaborative effort with PASMINCO, the Australian zinc company, may significantly improve the electrodeposition of heavy metals. The spouted bed consists of a vessel filled with relatively coarse particles. A jet of fluid is injected vertically through a small opening located centrally at the base of the vessel. If the jet velocity is high enough, it causes a stream of particles to rise rapidly in a central core within the bed. As the jet expands above the bed, the fluid velocity drops and the particles fall out onto the top of an annular region surrounding the central jet. The particles then move slowly down in the bed until they are again swept up in the central jet. A spouted bed may incorporate a "draft tube" to confine the spread of the central jet of fluid; in this way, spouted beds of large height-to-width ratio can also be operated. A spouted bed of conducting particles can then be made into an electrode by incorporating a current feeder and a diaphragm beyond which lies the counter electrode [2].
At low flow rates of electrolyte, there are no particles passing through the top of the draft tube and, therefore, no recirculation of particles. This is the "fixed bed zone"; the particles in the annular region are motionless. At higher flow rates, beyond a minimum spouting flow rate, particles issue from the top of the draft tube and recirculation occurs. This is the "stable spouted bed zone". The particles descend smoothly in the annular region. At a yet higher flow rate, the bed starts to behave irregularly, particularly in its upper regions, and the movement of particles in the annular region is no longer uniformly downward. It is conjectured that this "unstable spouted bed zone" is incipient fluidization of the particles in the annular region [3].
The hydrodynamics of the spouted bed were investigated by Verma et al. [3], Piskova and Mörl [4], Duarte et al. [5], Shirvanian et al. [6,7] and Kazdobin et al. [8]. The positive effect of the presence of a draft tube on the performance of a spouted bed reactor used for wastewater treatment is well established. In this paper, the draft tube position and height for the spouted bed of Figure 1 are optimized via response surface methodology.
Experimental Set-up and Procedure
The dimensions of the spouted bed reactor of this study are shown in Figure 1. The draft tube (with a rectangular cross section) was formed by vertical aluminum curved strips of different heights in order to optimize the draft tube height (h), the draft tube width (d) and the gap between the inlet nozzle and the bottom of the draft tube (g). The curvature of the bottom of the draft tube was designed based on the results of previous runs, which confirmed the positive effect of this curvature on decreasing the minimum spouting velocity. The inlet nozzle diameter was set to 4 mm, based on previous runs, in order to minimize the minimum spouting velocity as well as to create stable spouting.
The reactor inlet flow enters from the bottom inlet nozzle after passing through a rotameter and exits from an opening at the side of the reactor. The pressure drop was measured using a manometer. The Plexiglas construction and "flat" geometry of the reactor allowed observation of the spouted bed, including the interior of the draft tube. The reactor inlet flow was increased gradually until the copper particles (92.8% mesh 16-20, 7.15% mesh 20-30) began to sweep out of the apparatus from the "fountain" at the top of the draft tube. Although this flow represents the minimum spouting velocity, the result is not exactly reproducible, as discussed by Epstein and Mathur [9]. A more reproducible result is obtained by increasing the flow above the minimum spouting velocity and then slowly decreasing it: the bed then remains in the spouted state until the flow reaches the reproducible minimum spouting velocity of the reverse process. A slight reduction of flow at this point causes the spout to collapse [9].
The reproducible minimum spouting velocity of the reverse process was obtained for different heights, widths and vertical gaps between the inlet nozzle and the bottom of the draft tube, in runs designed by response surface methodology, in order to optimize the height and the position of the draft tube.
Design of Experiment via Response Surface Methodology
The response surface methodology (RSM) is a statistical and mathematical technique used for the modeling and optimization of processes in which a response of interest is influenced by several variables. It specifies the effect of the independent variables on the process, either individually or collectively. Furthermore, the experimental methodology generates a mathematical model describing the process [10].
The design procedure of the response surface methodology is as follows [10,11]:
• Determination of the independent variables and their levels.
• Development of the best-fitting mathematical model of the second-order response surface.
• Determination of the optimal sets of experimental parameters that produce a maximum or minimum value of the response.
• Obtaining the response surface plot and the contour plot of the response as a function of the independent parameters.
The total number of experiments required for this methodology is determined by [12]:

N = 2^k + 2k + i (1)

where k is the number of independent variables and i is the number of random replications at the design center used to evaluate the pure error.
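As a quick consistency check, the size of the design used here follows directly from this formula; the small sketch below assumes nothing beyond the symbols just defined.

```python
# Run count of a central composite design: 2^k factorial points,
# 2k axial points, plus i replications at the design center.
def ccd_runs(k: int, center_replicates: int) -> int:
    return 2 ** k + 2 * k + center_replicates

print(ccd_runs(k=3, center_replicates=4))  # -> 18, matching Table 1
```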
The responses are related to the variables by a quadratic model [13]:

η = β_0 + Σ_j β_j·x_j + Σ_j β_jj·x_j² + Σ_i Σ_j β_ij·x_i·x_j + e_i (2)

where η is the response, x_i and x_j are the coded variables, β_0 is the constant coefficient, β_j, β_jj and β_ij are the coefficients of the linear, quadratic and second-order interaction terms, respectively, and e_i is the error. In this experiment, some of the effective hydrodynamic variables, namely the draft tube height, the draft tube width and the gap between the draft tube bottom and the inlet nozzle, were considered as independent variables, and the minimum spouting velocity was the response. Each independent variable was coded at three levels between -1 and +1, where the draft tube width (x_1), the draft tube height (x_2) and the gap between the bottom of the draft tube and the inlet nozzle (x_3) were varied in the ranges 12-24 mm, 60-100 mm and 13-23 mm, respectively. The critical ranges of the selected parameters were determined by preliminary experiments based on literature experience, our previous experiments and physical limitations.
Eighteen experiments, including four replications at the design center, were carried out, as represented in Table 1. The first four columns show the run number and the experimental conditions of the runs.
The results were related to the independent variables according to (2) using the Design-Expert 7.1.3 program, including ANOVA. The coefficients of determination, R-squared (R²) and adjusted R-squared (R²adj), expressed the quality of fit of the resulting polynomial model, and statistical significance was checked by the F-test in the program. For optimization, a module in the Design-Expert software searched for a combination of factor levels that simultaneously satisfies the requirements placed on each of the responses and factors. The desired goal was selected as the minimum spouting velocity.
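The same quadratic fit can be reproduced outside Design-Expert with an ordinary least-squares solve over the full second-order model matrix. In the sketch below, the design matrix X and the response vector y are placeholders standing in for the Table 1 data, which is not reproduced here.

```python
# Least-squares fit of the second-order model of Equation (2) for three
# coded factors x1 (width), x2 (height), x3 (gap). X and y below are
# placeholder values, not the actual Table 1 measurements.
import numpy as np

X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([45, 52, 55, 60, 50, 58, 61, 68, 54, 55], dtype=float)

def quad_terms(X):
    """Model matrix: [1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3]."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print(np.round(beta, 2))  # fitted coefficients beta_0 ... beta_23
```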
Optimization of Draft Tube Position via RSM
The experimental results of the designed experiments shown in Table 1 were related to the independent variables by the fitted quadratic model, Equation (3). ANOVA results of this quadratic model are presented in Table 2. In the table, the model F-value of 148.51 implies that the model is significant. Prob > F is less than 0.05 for all terms, indicating that all terms are significant in the equation. The Adeq Precision of 48.508, which indicates an adequate signal-to-noise ratio, also confirms the model validity. The Pred R-squared of 0.9298 is in reasonable agreement with the Adj R-squared of 0.9874.
The results were optimized by the Design-Expert software using the approximated function in (3). Optimized conditions under the specified constraints were obtained for the minimum height (60 mm), minimum width (12 mm) and minimum vertical gap (13 mm) of the designed draft tube. Under these optimized conditions, the observed minimum spouting velocity was 45 cm³/s. Equation (3) was used to visualize the effects of the experimental factors on the response in the 3D graphs of Figures 2 and 3. The expected dependence of the minimum spouting velocity on the draft tube height is shown in Figure 2. When the draft tube height is increased, the minimum energy required by the particles to sweep out of the draft tube is increased due to the greater vertical distance. Consequently, the minimum velocity required to create the spout is increased. A decrease in the draft tube height therefore has a positive effect on the minimum spouting velocity and thus on the total energy requirements, but it is limited by the filled bed height, which is six centimeters in this experiment.
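The optimization step itself amounts to minimizing the fitted quadratic over the coded cube [-1, 1]^3 and decoding the optimum back to millimeters. In the sketch below, the coefficient vector beta is a hypothetical stand-in for the fitted model of Equation (3), whose coefficients are not reproduced in the text; with its positive linear terms, the search lands at the low level of every factor, mirroring the reported optimum.

```python
# Minimizing a fitted second-order model over the coded cube and decoding
# to physical units. `beta` is a hypothetical coefficient vector, not the
# actual fit of Equation (3).
import numpy as np
from scipy.optimize import minimize

beta = np.array([54.0, 3.2, 4.1, 2.5, 1.1, 0.9, 0.7, 0.4, 0.3, 0.2])

def predict(x):
    x1, x2, x3 = x
    terms = np.array([1.0, x1, x2, x3, x1**2, x2**2, x3**2,
                      x1*x2, x1*x3, x2*x3])
    return float(terms @ beta)

res = minimize(predict, x0=np.zeros(3), bounds=[(-1, 1)] * 3)

# Decode coded levels: -1 maps to the low bound, +1 to the high bound.
lows = np.array([12.0, 60.0, 13.0])   # width, height, gap (mm)
highs = np.array([24.0, 100.0, 23.0])
optimum_mm = lows + (res.x + 1.0) / 2.0 * (highs - lows)
print("optimum width, height, gap [mm]:", np.round(optimum_mm, 1))
print("predicted minimum spouting velocity [cm3/s]:", round(res.fun, 1))
```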
The minimum spouting velocity increases with increasing draft tube width when the gap between the inlet nozzle and the draft tube is held constant at its optimum value (x_3 = -1). The important role of the draft tube in the spouted bed reactor is to separate the inner fluidized bed zone and the outer packed bed zone. When the draft tube width is increased, the fluidization zone expands, but this has no effect on the spouting zone created by the fluid jet. This means the bed operation approaches that of a fluidized bed, which requires a higher velocity to become fluidized. Despite the positive effect of decreasing the draft tube width on the minimum spouting velocity, a further decrease in the draft tube width causes some of the agglomerated particles to stick in the tube and the bed stability to dissipate.
The gap between the inlet nozzle and the draft tube also has a noticeable effect on the minimum spouting velocity, as shown in Figure 3. Upon increasing the gap between the inlet nozzle and the draft tube, the fraction of the inflowing liquid diverted from the draft tube by passing up through the annular region is increased. Consequently, the internal jet power is decreased, which must be compensated by a higher spouting velocity.
Conclusion
In this paper, the dependence of the minimum spouting velocity on the draft tube height, the draft tube width and the gap between the bottom of the draft tube and the inlet nozzle was investigated through experiments designed by response surface methodology. The mathematical model, fitted by the Design-Expert software and validated by ANOVA, was used to optimize the mentioned variables. It was observed that the optimized draft tube height, draft tube width and gap between the draft tube and the inlet nozzle are 60 mm, 12 mm and 13 mm, respectively, which yields a minimum spouting velocity of 45 cubic centimeters per second (2.7 liters per minute).
Figure 2. The effect of the draft tube height and width on the minimum spouting velocity when the gap between the inlet nozzle and the draft tube is optimum.
Figure 3. The effect of the draft tube height and the gap between the inlet nozzle and the draft tube on the minimum spouting velocity when the draft tube width is optimum.
Table 2. ANOVA results of the predicted quadratic model (columns: Source, Sum of squares, Mean square, F-value, Prob > F).
| 2018-12-11T21:03:43.228Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "041d6e9e73ece6c2d53b1b819c9070c45031d5e9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4236/ampc.2012.24b059",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "041d6e9e73ece6c2d53b1b819c9070c45031d5e9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
255213238 | pes2o/s2orc | v3-fos-license | Lactose Mother Liquor Stream Valorisation Using an Effective Electrodialytic Process
The integrated electrodialysis (ED) process supports the valorisation of a lactose-rich side stream from the dairy industry, creating an important source of milk sugar used in various branches of industry. This work focuses on the optimization of the downstream processes before the crystallization of lactose. The process line includes a pre-treatment and desalination by ED of the industrial waste solution of lactose mother liquor (LML). The LML was diluted to 25% total solids to overcome hydraulic issues with the ED desalination process. Two different levels of electrical conductivity reduction (70% and 90%) of the LML solutions were applied to decrease the mineral components and organic acids of the LML samples. The ED performance parameters, such as the ash transfer rate (J), the specific capacity (C_F) of the ED unit and the specific electric energy consumption (E), were determined, and the influence of the LML solution on the monopolar ion-exchange membranes was investigated. A higher degree of desalination is associated with higher electric energy consumption (by 50%) and lower specific capacity (by 40%). A noticeable decrease (by 12.8%) in the resistance of the anion exchange membranes was measured after the trials, whereas the resistance of the cation exchange membranes remained practically unchanged. No deposition of alkaline earth metals on the membrane surface was observed.
Introduction
The dairy industry produces various nutritious products with high market value. Through their production, a number of nutrient-rich by-products are generated, such as whey and ultrafiltration permeates. Lactose, also referred to as milk sugar, is naturally found in the milk of mammals and is a reducing disaccharide composed of glucose and galactose. This disaccharide is also a substrate for a large number of chemical reactions, including reduction, oxidation, hydrolysis, isomerisation and biotransformation, each resulting in a different product of high interest, for example lactitol, lactulose, or glucose-galactose syrup [1,2]. The utilization of lactose can typically be seen in the food (meat, dairy, infant formula, diabetes-specific formula), confectionery, cosmetics and other industries, including synthetic fiber and glass production. Pure lactose has wide application in the pharmaceutical industry [3] as a filler or a binder to give pills suitable properties. Chemistry, mainly analytical chemistry and closely related natural science disciplines (biochemistry, microbiology), utilizes lactose as well. Due to lactose's plentiful usage, the global market is expected to grow substantially through 2026. The essential products driving the global lactose market are infant food and pharmaceutical drugs [4].
A number of by-products from the dairy industry, such as sweet whey, cottage cheese whey and casein whey [5][6][7] are sources of high-quality lactose. Production of high-purity
Materials and Chemicals
The lactose mother liquor (LML) for demineralization experiments was taken directly from the lactose production line in Dairyfood GmbH (Riedlingen, Germany). Following LML withdrawal, the LML was diluted with RO water in a ratio of 1:2.25 due to the high viscosity of the original LML. The obtained suspension was diluted to approximately 25% total solids (TS).
The deionized water (DW) was produced in Dairyfood GmbH (Riedlingen, Germany) by reverse osmosis (RO). The chemicals used in the experiments were of analytical grade and purchased from Merck s.r.o. (Prague, Czech Republic).
Membranes for Electrodialysis
Commercial food-grade anion and cation exchange membranes (Ralex® AMH and CMH) were used in the electrodialysis process. These are heterogeneous membranes based on polyethylene (PE) as the polymer matrix, with sulfonic acid groups (R-SO3−) as cation-exchange groups and quaternary ammonium groups (R-(CH3)3N+) as anion-exchange groups. Furthermore, both membranes are reinforced with polyester (PES) fabric. The membranes were produced by MEGA a.s. (Stráž pod Ralskem, Czech Republic).
Electrodialysis
The ED experiments were conducted in Dairyfood GmbH (Riedlingen, Germany) using an ED unit (MemBrain s.r.o., Stráž pod Ralskem, Czech Republic). The ED unit was equipped with an ED stack consisting of 10 pairs of CMH and AMH membranes in the CMH-AMH-CMH configuration. Polyethylene mesh spacers with a thickness of 0.8 mm were used to hydraulically separate the diluate and concentrate chambers. Polyethylene mesh spacers with a thickness of 1.0 mm, inserted between the CMH and the electrode, secured the hydraulic separation of the electrolyte solution at the endplates of the ED stack. Distributors with a greater thickness were placed in the separate electrolyte chambers to allow more efficient extraction of the gases generated close to the electrode surface. The diluate cylinder was filled with 1.0 kg of diluted LML, while the concentrate and electrolyte cylinders contained 0.5 kg of deionized water and 0.25 kg of 1% (wt./wt.%) sodium nitrate solution, respectively. Throughout the experiments, the temperature of the diluate and concentrate was maintained at 15 °C by immersing stainless steel helices, connected to a Julabo CF41 cryostat unit (Seelbach, Germany), into the process solutions. The circulation flow rates of the diluate, concentrate, and electrolyte solutions were set to 55 L·h−1, 55 L·h−1 and 50 L·h−1, which corresponds to a linear velocity of 4.8 cm·s−1 for the diluate and concentrate, and 17.4 cm·s−1 for the electrolyte solution. A proportional-integral controller automatically regulated the flow rates, which were continuously measured by a magnetic-inductive flow meter IFM SM4000 (Essen, Germany). Lactose production with the integrated ED technology of LML recovery is presented in Figure 1.
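The stated linear velocities follow from the circulation flow rate and the total channel cross-section. A quick consistency check is sketched below; the effective channel width of 4.0 cm is an assumption (it is not given in the text), while the cell count and spacer thickness are taken from the description above.

```python
# Cross-check of the stated diluate linear velocity:
# v = Q / (n_channels * channel_width * spacer_thickness).
Q_cm3_per_s = 55.0 * 1000.0 / 3600.0  # 55 L/h converted to cm3/s
n_channels = 10                        # one diluate channel per cell pair
spacer_thickness_cm = 0.08             # 0.8 mm spacer
channel_width_cm = 4.0                 # assumed effective width

v = Q_cm3_per_s / (n_channels * channel_width_cm * spacer_thickness_cm)
print(f"linear velocity: {v:.1f} cm/s")  # ~4.8 cm/s, as stated
```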
The applied potential difference at the membrane stack was 10 V, corresponding to a potential of 1 V per membrane pair. The pH value and conductivity of the solutions were recorded every 10 min using an automatic recording system based on Schneider Electric hardware (Schneider Electric CZ s.r.o., Prague, Czech Republic) and Endress+Hauser Liquiline transmitters with a CPS71D-7TP21 glass pH sensor and a CLS82D stainless steel conductivity sensor (Endress+Hauser Czech s.r.o., Prague, Czech Republic). The values of the voltage and the resulting current were recorded every 10 min throughout the experiments. The physicochemical properties of the individual samples were determined just before starting the demineralization sequence and immediately after its termination and the discharge of the individual tanks. The conductivity, pH, and temperature of the collected samples were measured using a WTW Multi 3620 IDS equipped with a TetraCon® 925 conductivity probe and a SenTix® 940 glass pH electrode (WTW, Weilheim in Oberbayern, Germany). A cleaning-in-place (CIP) procedure was performed after all ED experiments. The CIP procedure involved cleaning the ED membrane stack with 3% (w/w%) HNO3 and NaOH for 20 min each, with water flushing for 20 min after both the acid and alkaline solutions. The conditions of the ED process setup are shown in Table 1.
Methods
Concentrations of inorganic ions, TS, ash content (i.e., total mineral content), and ash relative to the dry base (ODB%) were measured according to procedures described by [25]. The concentrations of cations were analyzed using optical emission spectroscopy with an inductively coupled plasma (ICP-OES) from Thermo Fisher Scientific GmbH (Munich, Germany).
The concentrations of inorganic anions were determined with ion chromatography using Dionex™ ICS-5000+ DC from Thermo Fisher Scientific GmbH (Munich, Germany), equipped with a conductometric detector.
Protein content in samples was calculated according to the molecular nitrogen content with the specific nitrogen-to-protein conversion factor of 6.38. For determination of nitrogen content, 2 g of sample was kept at 900 • C for 7 min. Products of combustion flow through a sorption column, where CO 2 and H 2 O are removed. After CO 2 and H 2 O removal, gas flows through a reduction column, where NO X substances are reduced to a molecular nitrogen N 2 . The molecular nitrogen content is subsequently evaluated with gas chromatography using a thermal conductivity detector (TCD). The values of the nitrogen content were obtained using rapid MAX N exceed (Elementar, Langenselbold, Germany).
Scanning electron microscope (SEM) images were obtained using a scanning electron microscope Quanta FEG 450 (FEI, Hillsboro, OR, USA) at 10 kV accelerating voltage, magnification at 300× and 80 Pa residual pressure. The thickness of each sample was taken as the average value of five points measured before the experiments by a Mitutoyo micrometer. A detailed description of ion exchange capacity measuring and calculation can be found elsewhere [34].
The citric acid (CA) and lactic acid (LA) concentrations were determined using the capillary isotachophoresis technique with a one-purpose analyzer [35].
The lactose (LAC) content was determined with a polarimeter from A. Krüss Optronic GmbH, model "P1000 LED" (Hamburg, Germany). Optically active compounds other than LAC were precipitated by adding potassium ferrocyanide and zinc sulphate solutions, followed by filtration of the precipitate [36]. The filtrate was subsequently subjected to polarimetric analysis.
The density of samples was measured by a digital density meter Densito 30Px (Mettler Toledo, Columbus, OH, USA).
The apparent permselectivity and resistivity of the membranes were measured according to methods described elsewhere [37]. The content of inorganic cation in membranes was measured by X-ray energy-dispersive (EDX) spectrometer Team Software Suite, coupled with the Octane Elect and Octane Elite at 10 kV on a scanning electron microscope (SEM) Quanta FEG 450, FEI, USA.
Calculations
The removal of ions R (%) in the diluate during ED was calculated according to Equation (1):

R = 100 · (1 − (V_p · c_p,i) / (V_f · c_f,i)) (1)
where V_p (L) and V_f (L) are the volumes of the diluate after and before ED, and c_p,i (mg·L−1) and c_f,i (mg·L−1) are the concentrations of the respective compound after and before ED. Considering a hypothetical compound A_pB_q, the dissociation equilibrium is described by Equation (2):

K_sp(A_pB_q) = a(A^q+)^p · a(B^p−)^q / a(A_pB_q) (2)
where K_sp(A_pB_q) is the solubility product of the hypothetical compound A_pB_q; a(A^q+), a(B^p−) and a(A_pB_q) are the ionic activities of the A^q+ and B^p− species and of the compound A_pB_q, respectively; and q+ and p− are the electric charges of the respective ions. The conductivity reduction κ_CUT (%) of the diluate for each experiment was calculated by Equation (3):

κ_CUT = 100 · (1 − κ_D / κ_F) (3)
where κ_CUT (%) is the conductivity reduction of the diluate, κ_D (mS·cm−1) is the final conductivity of the diluate, and κ_F (mS·cm−1) is the initial conductivity of the diluate. The ash transfer rate for each demineralization degree was calculated by Equation (4):

J = (m_F · w_Ash,F − m_D · w_Ash,D) / (N · w · l · Δt) (4)
where J (g_Ash·m−2·h−1) is the ash transfer rate, m_F (kg) and m_D (kg) are the masses of the feed and diluate, respectively, w_Ash,F (wt./wt.%) and w_Ash,D (wt./wt.%) are the mass fractions of ash in the feed and diluate, respectively, N (−) is the number of installed membrane pairs, w (m) is the active width of the ion-exchange membrane, l (m) is the active length of the ion-exchange membrane, and Δt (h) is the duration of the experiment. The specific capacity of the ED unit C_F (kg_F·m−2·h−1) is calculated by Equation (5):

C_F = m_F / (N · w · l · Δt) (5)
where C_F (kg_F·m−2·h−1) is the specific capacity of the ED unit, m_F (kg) is the mass of the feed, N (−) is the number of installed membrane pairs, w (m) is the active width of the ion-exchange membrane, l (m) is the active length of the ion-exchange membrane, and Δt (h) is the duration of the experiment. The specific consumption of electric energy for ion transport E (Wh·kg_F−1) is calculated by Equation (6):

E = U_Avg · (∫ I dt) / m_F (6)
where E (Wh·kg_F−1) is the specific consumption of electric energy for ion transport, U_Avg (V) is the average potential difference applied to the membrane stack, ∫ I dt (A·h) is the amount of transferred electric charge, and m_F (kg) is the mass of the feed.
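For reference, the performance metrics of Equations (1), (3), (5) and (6) translate directly into code; the sketch below evaluates the charge integral of Equation (6) from logged current samples by the trapezoidal rule. All function and argument names are ours, chosen for illustration.

```python
# Minimal implementations of the ED performance metrics reconstructed
# above. Units follow the text: volumes in L, conductivities in mS/cm,
# masses in kg, membrane dimensions in m, time in h, current in A.
import numpy as np

def removal(V_f, c_f, V_p, c_p):
    """Ion removal R in % (Equation (1))."""
    return 100.0 * (1.0 - (V_p * c_p) / (V_f * c_f))

def conductivity_cut(k_F, k_D):
    """Conductivity reduction kappa_CUT in % (Equation (3))."""
    return 100.0 * (1.0 - k_D / k_F)

def specific_capacity(m_F, N, w, l, dt):
    """Specific capacity C_F in kg_F m^-2 h^-1 (Equation (5))."""
    return m_F / (N * w * l * dt)

def specific_energy(U_avg, t_h, I_A, m_F):
    """Specific energy E in Wh kg_F^-1 (Equation (6)); the charge
    integral is computed from logged samples by the trapezoidal rule."""
    charge_Ah = np.sum(0.5 * (I_A[:-1] + I_A[1:]) * np.diff(t_h))
    return U_avg * charge_Ah / m_F

# Example with made-up log data: 10 V stack, 1 kg of feed.
t = np.array([0.0, 0.5, 1.0])   # h
I = np.array([0.0, 2.4, 1.1])   # A
print(specific_energy(10.0, t, I, 1.0))  # Wh per kg of feed
```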
The specific water consumption for concentrate dilution m_W (kg_W·kg_F−1) is calculated by Equation (7):

m_W = (m_W,i − m_W,f) / m_F (7)
where m_W (kg_W·kg_F−1) is the specific water consumption for concentrate dilution, m_W,i (kg) is the initial mass of water in the reservoir, m_W,f (kg) is the final mass of water in the reservoir, and m_F (kg) is the mass of the feed.
The specific consumption of concentrated acid for concentrate pH adjustment m_HNO3,65% (g_Acid·kg_F−1) is calculated by Equation (8):

m_HNO3,65% = (m_Acid,i − m_Acid,f) · (w_HNO3,dil. / w_HNO3,conc.) / m_F (8)
where m_HNO3,65% (g_Acid·kg_F−1) is the specific 65% (wt./wt.%) acid consumption for concentrate pH adjustment, m_Acid,i (g) is the initial mass of the 3% (wt./wt.%) acid in the reservoir, m_Acid,f (g) is the final mass of the 3% (wt./wt.%) acid in the reservoir, w_HNO3,dil. (wt./wt.%) is the mass fraction of the diluted acid in the reservoir, w_HNO3,conc. (wt./wt.%) is the mass fraction of the concentrated acid, and m_F (kg) is the mass of the feed.
Using the equilibrium dissociation constants for an n-protic acid and balancing the individual products of the dissociation, it is possible to derive the following equations for the molar fractions of the individual species in a solution, as demonstrated by Harris et al. [38]:

α_HnA = [H+]^n / D (9)
α_H(n−1)A = K_1 · [H+]^(n−1) / D (10)
α_H(n−j)A = K_1 · K_2 · ... · K_j · [H+]^(n−j) / D (11)
D = [H+]^n + K_1 · [H+]^(n−1) + K_1 · K_2 · [H+]^(n−2) + ... + K_1 · K_2 · ... · K_n (12)
where α_HnA, α_H(n−1)A and α_H(n−j)A are the molar fractions of the H_nA, H_(n−1)A and H_(n−j)A species, [H+] is the activity of protons, n and j denote the number of donating protons and the number of equilibrium dissociation constants, respectively, and K_1, K_2, ..., K_j represent the equilibrium dissociation constants of the first, second, and up to the j-th dissociation equilibria. D is a parameter dependent on the number of H+ ions of the considered acid. In Equations (9)-(12), it is possible to substitute the activity of H+ ions with the pH value to obtain a direct relation between the molar fraction of the respective compound in a solution and the pH of the solution. The thermodynamic definition of the equilibrium dissociation constant implies that it is a function of ionic strength and temperature. Thus, the equilibrium dissociation constant corresponding to the ionic strength and temperature of the processed solution should be used to model the molar species distribution, as demonstrated by Mizera et al. [39] and Reijenga et al. [40]. For the molar species distribution modelling of CA and LA, we have used the equilibrium constant values reported by Martell et al. [41], which are valid for infinitely dilute aqueous solutions (I = 0 mol·L−1) up to an ionic strength of I = 2.0 mol·L−1 and a temperature of 25 °C. During the demineralization, the ionic strength of the processed solution continually decreases due to ion removal. Thus, the ionic strength of the feed and of LML R90 was calculated using Equation (13) to evaluate the ionic strength shift; the average ionic strength value was used to select the equilibrium constant K_i and to calculate the molar species distribution:

I = (1/2) · Σ_i (c_i · z_i²) (13)
where I (mol·L−1) is the ionic strength of the solution, c_i (mol·L−1) is the molar concentration of a specific ion, and z_i (−) is the electric charge of the respective ion. Since the tabulated values are valid for a temperature of 25 °C, it was necessary to evaluate the equilibrium constant at the temperature of 15 °C, at which the demineralization of LML was conducted. Using the Van't Hoff equation, it is possible to calculate an equilibrium constant K_2 at temperature T_2 if the equilibrium constant K_1 at temperature T_1 is known (Equation (14)):

ln(K_2 / K_1) = −(ΔH / R) · (1/T_2 − 1/T_1) (14)
where ΔH (J·mol−1) is the enthalpy of the dissociation reaction and R (J·mol−1·K−1) is the universal gas constant. In Equation (14), a simplification was adopted, and ΔH is considered constant for the temperature range of 15-25 °C.
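A minimal sketch of how Equations (9)-(14) combine in practice: the lactic acid dissociation constant is shifted from 25 °C to 15 °C with the Van't Hoff equation, and the dissociated (lactate) fraction is then evaluated at the limits of the diluate pH range. The pKa and the small dissociation enthalpy used here are assumed textbook-order values, not the constants actually taken from [41].

```python
# Van't Hoff shift of a dissociation constant followed by the monoprotic
# species distribution of Equations (9)-(12). pKa and dH are assumptions.
import math

R = 8.314             # J mol^-1 K^-1, universal gas constant
K_298 = 10 ** -3.86   # lactic acid Ka at 25 C, assumed
dH = -300.0           # J mol^-1, assumed (small) dissociation enthalpy

# Equation (14): ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
K_288 = K_298 * math.exp(-dH / R * (1.0 / 288.15 - 1.0 / 298.15))

def lactate_fraction(pH, Ka):
    """alpha(A-) for a monoprotic acid: Ka / ([H+] + Ka)."""
    H = 10.0 ** -pH
    return Ka / (H + Ka)

for pH in (4.76, 5.12):
    alpha = lactate_fraction(pH, K_288)
    print(f"pH {pH}: {100 * alpha:.0f} % dissociated (lactate)")
```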
Statistical Analysis
Data for feed and product quality are averages (N = 4) with a corresponding confidence interval, calculated by the Student t-test with a confidence level α = 0.05. Figure 2a,b shows the changes in the conductivity and current during the desalination of the samples. As observed, the differences in the rates of demineralization are mainly attributed to membrane fouling, which decreased the demineralization rate with respect to the removal of specific ions. Besides that, minor differences in the initial temperatures of the feed stream (diluate), due to practical experimental limitations, could also have some minor influence on the rates of demineralization. During the demineralization of LML, specific phenomena characteristic of the ED process were observed, such as a non-zero current value although no voltage was applied to the membrane stack [8]. This phenomenon is explained by the chemical potential difference between the diluate and concentrate chambers. Thus, ions are spontaneously transported along the concentration gradient through the ion-exchange membrane by a diffusion mechanism, and a non-zero current value is observed. The value of the electric current increases in the first 20-30 min of demineralization, which relates to the increase in concentrate conductivity due to ions transferred from the diluate chamber. After the electric current reaches its maximum value, a current decrease simultaneous with a diluate conductivity decrease is observed. This observation is explained by the further depletion of ions from the diluate to the concentrate. Further depletion of ions from the diluate increases the diluate electric resistivity, while the concentrate electric resistivity decreases due to the received ions. However, the conductivity and pH of the concentrate were maintained below 15.0 mS·cm−1 and pH 5.5 by adding deionized water and 3% (wt./wt.%) HNO3, respectively.
Maintaining the conductivity and pH of the concentrate below the stated values avoids reaching the solubility limit, defined by the solubility product K_sp (Equation (2)), of low-soluble salts in the general form of Mx(HyPO4)z and MCO3 (M = Ca2+, Mg2+). Samples of the demineralized product were taken for analysis when diluate conductivity reductions of 70% and 90% were reached in the individual experiments.
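The scaling criterion implied by Equation (2) can be expressed as a saturation index, the logarithm of the ion activity product over K_sp. The sketch below approximates activities by concentrations; both ion levels are illustrative assumptions, and the tiny free PO4(3−) value reflects the fact that at pH 5.5 almost all phosphate is protonated.

```python
# Saturation check for Ca3(PO4)2 in the concentrate: compare the ion
# product against the solubility product. Concentrations stand in for
# activities; the ion levels are illustrative assumptions.
import math

K_sp = 10.0 ** -25.0   # Ca3(PO4)2, within the range cited in the text
c_Ca = 5e-3            # mol/L, assumed total concentrate calcium
c_PO4 = 1e-10          # mol/L, assumed free PO4(3-) at pH ~5.5

ion_product = c_Ca ** 3 * c_PO4 ** 2
SI = math.log10(ion_product / K_sp)
state = "scaling risk" if SI > 0 else "undersaturated"
print(f"saturation index = {SI:.1f} ({state})")
```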
Electrodialysis
The ED performance parameters for the two products are presented in Table 2.
Minerals and Organics Removal Efficiency
Individual ions in the electrodialysis process are transported from the diluate, through the ion-exchange membranes, to the concentrate compartment by diffusion, osmosis, electro-migration and electro-osmosis [42]. From a practical point of view, ion removal using electrodialysis is beneficial for evaporator performance because it decreases the scaling potential of low-soluble inorganic salts, such as calcium phosphate Ca3(PO4)2 (K_sp = 10−25.5-10−24.8), on the heat exchange surface of an evaporator. Minimizing the scaling potential increases the evaporator performance and reduces the CIP frequency, as well as the operational expenses related to purchasing CIP chemicals and to wastewater management. Besides the ions of inorganic salts, dissociated organic acids and their corresponding salts are also reduced in the electrodialysis process. The removal of lactic acid and calcium lactate is essential for the downstream processing of the demineralized mother liquor in the subsequent crystallization and spray-drying steps. Chandrapala et al. [43] showed that the presence of lactic acid and calcium negatively affects the effectiveness of lactose crystallization and causes stickiness of the spray-dried products. Thus, implementing the electrodialysis process improves the effectiveness of further mother liquor processing and the quality of the final product.
The removal efficiency of monovalent ions such as K+, Na+ and Cl− is higher than that of multivalent ions such as Ca2+, Mg2+, SO42− and PO43−. Regarding the removal efficiency, the following order was observed for cations: K+ > Na+ > Mg2+ > Ca2+, and for anions: Cl− > SO42− > PO43− > LA− > CA3−, which corresponds to the observations of previous work [8]. Differences in removal efficiency could be due to the LML solutions, which have significantly different mineral and organic profiles, mostly in their lactate and citrate contents. Moreover, the quality of the feed material and the technological strategy of lactose production may also impact the removal efficiency. These orders indicate that the K+ ions were depleted the most, while the Ca2+ ions were depleted the least. Concerning anions, the Cl− ions were depleted the most, while the CA3− ions were depleted the least. This phenomenon is explained by the smaller hydrodynamic radius and higher diffusion coefficient of monovalent ions compared to multivalent ions [44,45]. Overall, the total content of inorganic ions (ash) was decreased by 65.0% ± 1.5% (wt./wt.%) and by 82.3% ± 0.3% (wt./wt.%) in the case of the desalted LML R70 and LML R90, respectively. Furthermore, for organic acids, the removal efficiency depends on the pH of the processed solution, which affects the ionization of the organic acids to the related species in solution. Using the dissociation constants of a specific acid, the distribution of ionized species in solution is calculated at a defined pH, ionic strength and temperature (Figure 3a,b) [40]. The most efficient removal of organic acids is reached when the pH of the processed solution is in a range where the ionized form of the organic acids prevails in the solution [11]. Figure 3a shows that citric acid is wholly dissociated to the corresponding species in the operating pH range of the diluate (pH 5.12-4.76). In contrast, lactic acid (Figure 3b) is present in non-dissociated form, ranging from 3% at pH 5.12 to 8% at pH 4.76. However, the removal efficiency of citrates was not as significant as that of lactates, which is explained by the higher hydrodynamic radius and lower diffusion coefficient of the citrate anion compared to the lactate anion. In comparison, the absolute content of lactates was reduced by 42.0% ± 3.0% and 78.4% ± 2.2% for the LML R70 and LML R90, respectively, while the absolute citrate content was reduced by 15.7% ± 6.8% and by 34.0% ± 1.8% for the LML R70 and the LML R90, respectively. The average composition of the diluted LML (feed) and the demineralized LML R70 and LML R90 is presented in Table 3.

Table 3. Conductivity, pH, mineral and organic composition of the initial and desalted LML samples.
The removal of the main components of the final products LML R70 and LML R90 is shown in Figure 4.
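The pH dependence of organic-acid removal described above follows directly from acid-base speciation. Below is a minimal Python sketch assuming ideal-solution Henderson-Hasselbalch behaviour and literature pKa values (lactic acid ~3.86; citric acid 3.13, 4.76, 6.40, all assumed here); because it ignores the ionic-strength and temperature corrections used for Figure 3, its estimate of the non-dissociated lactic acid fraction (roughly 5-11% over pH 5.12-4.76) differs somewhat from the 3-8% quoted in the text.

```python
import numpy as np

def species_fractions(pH, pKas):
    """Fractions of each (de)protonation state of a polyprotic acid
    at a given pH, from its pKa values (ideal solution, no
    ionic-strength or temperature correction)."""
    h = 10.0 ** (-float(pH))
    kas = 10.0 ** (-np.asarray(pKas, dtype=float))
    # Unnormalized weights for HnA, H(n-1)A-, ..., A^n-
    terms = [1.0]
    for ka in kas:
        terms.append(terms[-1] * ka / h)
    total = sum(terms)
    return [t / total for t in terms]

# Operating pH range of the diluate, as quoted in the text:
for pH in (5.12, 4.76):
    lact = species_fractions(pH, [3.86])
    cit = species_fractions(pH, [3.13, 4.76, 6.40])
    print(f"pH {pH}: non-dissociated lactic acid = {lact[0]*100:.1f}%, "
          f"fully protonated citric acid = {cit[0]*100:.2f}%")
```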
Membrane Properties and Energy-Dispersive X-ray Spectroscopy (EDX)
After the ED process, a decrease in the resistance and permselectivity of the AMHs is observed (Table 4), which can be attributed to the expansion of pores and ionic channels under the water flow pressure. For the CMHs, on the contrary, permselectivity increases while the resistance remains within the margin of error. The increase in permselectivity may be due to contamination of the membrane surface or pores by organic molecules, such as proteins, or by low-solubility inorganic salts, such as calcium or magnesium phosphates of composition Mx(HyPO4)z. However, according to the SEM micrographs, the membrane surface on the diluate side is smooth with no prominent salt deposits (Figure 5), and EDX spectroscopy of the CMHs surface shows no evidence of calcium or magnesium phosphates (calcium and magnesium atomic contents of 0.4 ± 2.0% and 0.1 ± 9.4%). This may be because any salt deposits are dissolved during the CIP process. Thus, the increase in the permselectivity of the CMHs after electrodialysis is most likely due to fouling of the membrane surface by proteins, which at pH ~5.1 are present as positively charged molecules, since their isoelectric points lie below this pH value.
Conclusions
This study shows one possible utilization of lactose mother liquor (LML), a waste by-product of lactose crystallization, using an effective electrodialysis process. Demineralized LML can be recycled in the lactose manufacturing process or spray-dried and used for animal feed owing to its high lactose and protein content. Electrodialysis decreases the ash content in the dry matter of the LML solution to a level similar to that of sweet whey (approx. 7-8% ash, on a dry basis). The partially demineralized mother liquor can be mixed with the sweet whey, increasing total solids and lactose as well as improving the lactose yield during the crystallization process.
The chemical composition of products with different degrees of demineralization was investigated, as well as the performance parameters of the electrodialysis process, such as ash transfer rate, specific capacity, and electric energy, water and chemical consumption. The absolute ash content in the LML was reduced by 65.0 ± 1.5% and 82.3 ± 0.9% for the LML R70 and LML R90, respectively. Moreover, the removal efficiency of organic acids was investigated. The absolute content of lactic acid was reduced by 42.0 ± 2.9% and 78.4 ± 2.2% for LML R70 and LML R90, respectively, which is of great practical importance for increasing the lactose crystallization yield and improving the properties of the spray-dried products. It was demonstrated that the removal efficiency of organic acids depends on the pH, temperature, composition and concentration of individual components in the processed solution.
Furthermore, the energy demands for the salt transfer to produce LML R70 and LML R90 are 20.60 ± 1.17 Wh·kgF−1 and 30.03 ± 3.41 Wh·kgF−1 (DC), respectively. The consumption of demineralized water for the concentrate conductivity make-up, used to prevent precipitation of low-solubility salts, was 2.62 ± 0.09 kgW·kgF−1 and 3.48 ± 0.30 kgW·kgF−1 for LML R70 and LML R90, respectively. The physicochemical properties of the IEMs were investigated and compared before and after the demineralization trials. The specific electric resistivity and permselectivity of the AMHs decreased, which is attributed to pore and ionic-channel expansion under the water flow pressure. On the contrary, the specific electric resistivity of the CMHs remained within the margin of error while the permselectivity increased, which is explained by fouling of the membrane surface by positively charged proteins. EDX spectroscopy showed no evidence of calcium and magnesium phosphates on the surface of the CMHs.
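For readers who want to reproduce specific-energy figures of this kind, the DC energy demand per kilogram of feed reduces to E = U·I·t / m_feed for a batch run at roughly constant voltage and current. A minimal Python sketch with hypothetical operating values (the voltage, current, batch time and feed mass below are assumptions for illustration, not the authors' data):

```python
def specific_energy_Wh_per_kg(voltage_V, current_A, time_h, feed_kg):
    """DC energy demand of salt transfer per kg of feed,
    E = U * I * t / m_feed. Constant U and I are assumed here;
    in practice one integrates U(t)*I(t) over the batch."""
    return voltage_V * current_A * time_h / feed_kg

# Hypothetical batch, for illustration only:
e = specific_energy_Wh_per_kg(voltage_V=12.0, current_A=8.5,
                              time_h=2.0, feed_kg=10.0)
print(f"{e:.1f} Wh per kg of feed")
# -> 20.4 Wh/kg, the same order as the ~20.6 Wh*kgF^-1 reported for LML R70
```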
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-12-29T16:16:43.626Z | 2022-12-26T00:00:00.000 | {
"year": 2022,
"sha1": "a3df3a0b0ba5e5df9844fb8c2eb532c54996616e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/13/1/29/pdf?version=1672047833",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef823c27061043a235e0ad296e8c993bc36efe5b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
204941445 | pes2o/s2orc | v3-fos-license | Hepatomas are exquisitely sensitive to pharmacologic ascorbate (P-AscH-)
Rationale: Ascorbate is an essential micronutrient known for redox functions at normal physiologic concentrations. In recent decades, pharmacological ascorbate has been found to selectively kill tumour cells. However, the dosing frequency of pharmacologic ascorbate in humans has not yet been defined. Methods: We determined that among five hepatic cell lines, Huh-7 cells were the most sensitive to ascorbate. The effects of high-dose ascorbate on hepatoma were therefore assessed using Huh-7 cells and a xenograft tumour mouse model. Results: In Huh-7 cells, ascorbate induced a significant increase in the percentage of cells in the G0/G1 phase, in apoptosis and in intracellular levels of ROS. High doses of ascorbate (4.0 pmol cell−1), but not low doses of ascorbate (1.0 pmol cell−1), also served as a pro-drug that killed hepatoma cells by altering mitochondrial respiration. Furthermore, in a Huh-7 cell xenograft tumour mouse model, intraperitoneal injection of ascorbate (4.0 g/kg/3 days), but not a lower dose of ascorbate (2.0 g/kg/3 days), significantly inhibited tumour growth. Gene array analysis of HCC tumour tissue from xenograft mice given IP ascorbate (4.0 g/kg/3 days) identified changes in the transcript levels of 192 genes/ncRNAs involved in insulin receptor signalling, metabolism and mitochondrial respiration. Consistent with the array data, gene expression levels of AGER, DGKK, ASB2, TCP10L2, Lnc-ALCAM-3, and Lnc-TGFBR2-1 were increased 2.05- to 11.35-fold in HCC tumour tissue samples from mice treated with high-dose ascorbate, and IHC staining analysis verified that AGER/RAGE and DGKK proteins were up-regulated, implying that AGER/RAGE and DGKK activation might be related to oxidative stress leading to hepatoma cell death. Conclusions: Our studies identified multiple mechanisms responsible for the anti-tumour activity of ascorbate and suggest that high-dose ascorbate given at reduced dosing frequency could act as a novel therapeutic agent for liver cancer in vivo.
Introduction
Vitamin C, also known as ascorbic acid (L-ascorbic acid, Asc), is an essential micronutrient for humans, acting as a cofactor for various biosynthetic enzymes, especially those involved in scavenging reactive oxygen species (ROS) [1,2]. Asc is also a weak acid (pKa of 4.1-4.2), and at physiologic pH (7.4) it exists predominantly in the anionic form (AscH−). Over 50 years ago, Cameron and Pauling suggested Asc (or AscH−) could be a potential drug for the treatment of cancer [3,4]. However, its efficacy was refuted by subsequent double-blind studies that failed to show improvements in the survival of patients receiving oral vitamin C compared to placebo [5,6]. Several more recent studies have shown IV ascorbate to be an effective anticancer treatment for solid tumours when compared with orally administered ascorbate [1,7]. Preclinical and clinical studies in cholangiocarcinoma (CC) [8], colon cancer [9,10], glioblastoma multiforme (GBM) [11], melanoma [12], non-small cell lung cancer (NSCLC) [11], ovarian cancer [13], pancreatic cancer [14], and sarcoma [15] have revealed that pharmacologic levels of ascorbate, achievable by IV but not oral administration, selectively kill cancer cells but not normal cells. Pharmacologic concentrations of ascorbate (P-AscH−) in vivo (> 1.0 mM) can be reached in patients by IV injection (at an average dose of 0.5 g/kg) to kill cancer cells, without side effects [1,10,16]. Thus, identification of tumour types that are exquisitely sensitive to high doses of ascorbate in preclinical models can advance clinical testing.
The efficacy of vitamin C treatment could not be judged from clinical trials that used only oral dosing; only high intravenous doses of vitamin C produce the high plasma concentrations that might have antitumour activity, and pharmacokinetic data at high intravenous doses of vitamin C in cancer patients are sparse [17]. Dr. Levine noticed that when 1.25 g of vitamin C was given intravenously, plasma concentrations were significantly higher than when the vitamin was given orally [18]. At extracellular concentrations > 1.0 mM, vitamin C was toxic to some cancer cells, possibly because high concentrations of vitamin C act as a pro-drug for hydrogen peroxide formation in plasma [18,19]. In addition, elucidating the mechanisms of cancer-selective cell death induced by ascorbate may also provide insight into liver cancer therapy. Rouleau verified that the extracellular formation of H2O2 by high doses of ascorbate was a prerequisite for cancer cell death via increased cytosolic calcium, which in turn promoted mitochondrial calcium uptake and oxidative metabolism in cancer cells [20]. Current clinical evidence on the therapeutic effect of high-dose IV ascorbate is ambiguous. We proposed the hypothesis that extracellular H2O2 formation is a key mediator of cell death by pharmacologic ascorbate, and that H2O2 can cause death by multiple, distinct mechanisms in the same cell type. Only high doses of ascorbate have been described to possess anticancer effects, but the potential mechanisms of action remain unclear.
Hepatocellular carcinoma (HCC) is the third most common cause of cancer-related mortality worldwide and is usually diagnosed at a late stage [21,22]. Although alternative strategies with sorafenib, lenvatinib and regorafenib might improve survival in patients with advanced HCC, the only potentially curative treatment for HCC is tumour resection. Moreover, only approximately 15% of HCC patients are amenable to operative treatment, and the chance that treatment for HCC will be curative remains low [23,24]. HCC is therefore a clinical problem in urgent need of novel and effective anticancer approaches. Because there is an abundance of iron in the liver, and pharmacologic ascorbate kills various cancer cells by producing extracellular hydrogen peroxide via Fenton chemistry [7,[25][26][27]] involving redox-active labile iron, we hypothesized that hepatoma cells might be particularly sensitive to pharmacologic ascorbate. However, a practical difficulty of using pharmacologic ascorbate is the dosing frequency, which to date has not been defined [28].
In this study, we first investigated ascorbate-induced cytotoxicity towards Huh-7, HCCLM9, MHCC97L and LO2 cells and demonstrated that Huh-7 cells were the most sensitive to ascorbate and hydrogen peroxide, acting via mitochondrial dysfunction. We further assessed the effects of P-AscH− on mice with HCC in vivo and found that tumour growth was significantly reduced after IP injection of ascorbate at 4.0 g/kg/3 days compared with the tumour growth in the PBS control group. Gene array analysis identified the upregulation of AGER, DGKK, ASB2, TCP10L2, Lnc-ALCAM-3, and Lnc-TGFBR2-1, which was validated by qPCR. Peroxide-induced mitochondrial dysfunction leading to cell death was also detected in HCC. Thus, our dose-response studies of ascorbate in cells, a xenograft tumour mouse model, and two case studies of HCC patients treated with high-dose ascorbate identify a substantial theoretical advantage of ascorbate in cancer treatment via multiple mechanisms.
Materials and Methods
Cell lines and cell culture. Human hepatic cells (LO2, HepG2, MHCC97L, and HCCLM9 cells) were cultured in RPMI-1640 medium, and Huh-7 cells were maintained in DMEM (Dulbecco's Modified Eagle Medium, Thermo Fisher), each supplemented with 10% foetal bovine serum (FBS, Biological Industries) and 1% penicillin and streptomycin, and cultured in a humidified atmosphere at 37°C with 5% CO2. Human LO2 and Huh-7 cells were purchased from the cell bank of the Shanghai Institute of Biochemistry and Cell Biology (Shanghai, China). HCCLM9 cells were derived from the host cell line MHCC97L, as described previously [29].
Cell viability and soft-agar assay. Human hepatic cells were seeded in a 96-well plate (1.0 x 10^4 cells/well). The ascorbate dose gradient was set at 4.0 mM at the highest, with doubling dilutions down to 15.63 μM. Cells were incubated with ascorbate for 24 hrs, after which the ascorbate was removed from the culture medium. Then, 10 μL of 5 mg/mL MTT (Thiazolyl Blue Tetrazolium Bromide, Sigma) was added to each well for 4 hrs, and 150 μL DMSO was added to each well to dissolve the formazan. The relative absorbance (λ = 570 nm) of each well was measured in an ELx808™ Absorbance Microplate Reader (BioTek, Winooski, VT, USA). The LD50 was determined from plots of percentage viability vs. the dose of compound added, as described previously [30]. For the soft-agar assay, the five hepatic cell lines (1.0 × 10^3 cells) were exposed to ascorbate for 1 hr, then washed and re-suspended in RPMI-1640 culture medium (ThermoFisher Scientific, Waltham, MA) supplemented with 10% FBS (ThermoFisher Scientific) containing 0.3% ultra-pure agarose (Sigma-Aldrich, St. Louis, MO). The suspension was then poured over 3 mL of pre-solidified 0.6% agar base (Sigma-Aldrich) in 60-mm dishes, and the plates were incubated for 14-21 days at 37°C in a 5% CO2 atmosphere, with the growth medium on top of the agarose layer replaced each week. After the growth period, cells were fixed with 70% ethanol and stained with Coomassie Blue.
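To make the LD50 determination concrete, the Python sketch below fits a four-parameter logistic (Hill) curve to viability-vs-dose data; the viability numbers are hypothetical placeholders over the same doubling-dilution series, not the authors' measurements, and this fitting approach is one common choice rather than necessarily the exact method of reference [30].

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ld50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ld50) ** slope)

# Hypothetical MTT viability data (% of control) over the 15.63 uM to
# 4.0 mM doubling-dilution series used in the assay; illustrative only.
dose_uM = np.array([15.63, 31.25, 62.5, 125, 250, 500, 1000, 2000, 4000])
viability = np.array([98, 95, 88, 70, 45, 25, 12, 8, 5], dtype=float)

popt, _ = curve_fit(hill, dose_uM, viability,
                    p0=[100, 0, 200, 1.0], maxfev=10000)
print(f"Fitted LD50 ~ {popt[2]:.0f} uM")
```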
Colonies were counted in nine fields of view photographed using an inverted phase microscope (Olympus CKX4SF; Tokyo, Japan) at 40× magnification. The plating efficiency and surviving fraction were determined from plots of clonogenic survival fraction vs. ascorbate dose. Cell cycle and apoptosis analysis by flow cytometry. Huh-7 cells (1 x 10^5 cells / 6 cm plate) were exposed to 0-10 pmol cell−1 ascorbate for 24 hrs. The treated cells were harvested and fixed overnight with 70% ethanol at 4°C. The fixed cells were washed twice with cold PBS containing 0.1% Triton, suspended in 20 µg/µL propidium iodide (PI), 200 μg/mL RNase and 0.1% Triton X-100, and incubated for 20 min in the dark at 37°C. Cell cycle distribution was determined by fluorescence-activated cell sorting analysis of the PI-stained, ethanol-fixed cells using a Guava EasyCyte (Guava Technologies, Hayward, CA). For apoptosis analysis, cells were assayed with the FITC Annexin V Apoptosis Detection Kit with PI (Cat#: 640914, BioLegend Inc., San Diego, CA, USA) according to the manufacturer's instructions. As above, Huh-7 cells (1 x 10^5 cells / 6 cm plate) were exposed to ascorbate and H2O2 for 24 hrs. Adherent Huh-7 cells were collected, washed with BioLegend's Cell Staining Buffer, and then suspended in a staining mixture comprising 45 μL Annexin V binding buffer, 2.5 μL FITC-Annexin V, and 2.5 μL PI for 15 min in the dark at 37°C. Apoptosis was measured on a FACSCalibur flow cytometer (BD Biosciences, San Jose, CA, USA). Data were analysed with FlowJo™ version 7.6.3 software (TreeStar, Ashland, OR, USA) and R software.
Mitochondrial respiration and glycolysis stress test. The dynamics of mitochondrial oxidative phosphorylation and glycolysis were assessed by measuring the oxygen consumption rate (OCR) and extracellular acidification rate (ECAR), respectively, using a Cell Mito Stress Test Kit (Cat#: 103015-100, Seahorse Bioscience, Santa Clara, California, USA) or Glycolysis Stress Test Kit (Cat#: 103020-100, Seahorse Bioscience) on an XFe-24 Extracellular Flux Analyzer (Seahorse Bioscience) as described previously [31]. In brief, Huh-7 cells (5.0 x 10^4 cells/well of an XFe-24 cell culture microplate) were grown in DMEM supplemented with 10% foetal bovine serum and treated for 1 h with or without ascorbate (1.0, 4.0, 8.0, or 10 pmol cell−1). Cells were then washed twice with basal assay medium and pre-incubated for 1 h in a CO2-free incubator in 500 μL/well of assay medium containing 10 mM glucose, 1.0 mM pyruvate and 2.0 mM L-glutamine. An XFe-24 Sensor Cartridge containing 1.0 mL/well of Seahorse XFe Calibrant was preincubated overnight at 37°C in a non-CO2 incubator. For the Mito Stress Test, ten-fold concentrated stocks from the kit of oligomycin (complex V inhibitor), carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP, mitochondrial uncoupler), and a mixture of rotenone (complex I inhibitor) and antimycin A (complex III inhibitor) were loaded into the XFe-24 Sensor Cartridge to give final concentrations of 1.0 μM, 1.0 μM, 0.5 μM and 0.5 μM, respectively. For the Glycolysis Stress Test, ten-fold concentrated stocks from the kit of glucose (fuel for glycolysis), oligomycin (complex V inhibitor) and 2-Deoxy-D-Glucose (2-DG, a competitive inhibitor of hexokinase) were loaded into the matched cartridge to give final concentrations of 10 mM, 1.0 μM and 50 mM, respectively. After a 30-min calibration of the XF sensor with the preincubated sensor cartridge, the cell plates were loaded into the analyser, and mitochondrial respiratory and glycolysis parameters were analysed under basal conditions followed by sequential injections: for the Mito Stress Test, oligomycin, FCCP, and then the rotenone/antimycin A mixture; for the Glycolysis Stress Test, glucose, oligomycin and 2-DG. In the Mito Stress Test, ATP production was evaluated as the difference between the basal OCR and the OCR after oligomycin injection, and spare respiratory capacity (SRC) was determined as the difference between the maximal and basal OCRs; in the Glycolysis Stress Test, glycolysis was evaluated as the difference between the ECAR before oligomycin injection and that before glucose injection, and non-glycolytic acidification was determined as the last ECAR prior to glucose injection. Data were analysed using Seahorse Wave 2.4 Software (Seahorse Bioscience) and normalized to the cell number or protein (Pierce BCA Protein Assay Kit, Thermo Fisher Scientific) loaded in each well. Quadruplicates of each cell treatment were analysed.
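The parameter definitions in the preceding paragraph are simple differences on the OCR trace, which the following Python sketch makes explicit. The 12-point trace and injection indices are hypothetical (three measurements per phase is a common XFe-24 layout), so the numbers are illustrative only.

```python
import numpy as np

def mito_stress_params(ocr, idx_oligo, idx_fccp, idx_rot):
    """Derive Mito Stress Test parameters from an OCR trace (pmol/min).
    idx_* give the first measurement index after each injection,
    following the definitions in the text."""
    basal = np.mean(ocr[:idx_oligo])            # rates before oligomycin
    post_oligo = np.min(ocr[idx_oligo:idx_fccp])
    maximal = np.max(ocr[idx_fccp:idx_rot])     # FCCP-uncoupled respiration
    non_mito = np.mean(ocr[idx_rot:])           # after rotenone/antimycin A
    return {
        "ATP-linked": basal - post_oligo,       # basal OCR - OCR after oligomycin
        "SRC": maximal - basal,                 # maximal OCR - basal OCR
        "proton leak": post_oligo - non_mito,
    }

# Hypothetical trace: 3 basal, 3 post-oligomycin, 3 post-FCCP, 3 post-rot/AA
ocr = np.array([80, 82, 81, 30, 28, 29, 140, 150, 145, 10, 9, 10], dtype=float)
print(mito_stress_params(ocr, idx_oligo=3, idx_fccp=6, idx_rot=9))
```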
Extracellular ascorbate oxidation analysis. The five hepatic cell lines (5.0 x 10^4 cells/well of an XFe-24 cell culture microplate) were grown overnight in DMEM supplemented with 10% foetal bovine serum. The rate of oxygen consumption of the five hepatic cell lines upon addition of ascorbate (3.0 mM) to the complete culture medium was determined using an XFe-24 Extracellular Flux Analyzer (Seahorse Bioscience). This OCR change represents the rate of H2O2 production. The accumulation of H2O2 was determined through the addition of catalase (250 units mL−1) (bovine liver, Sigma C-1345), as described previously [32].
Intracellular ATP analysis. Intracellular ATP levels in Huh-7 cells were measured using the ATP Assay Kit (Beyotime Biotechnology, Shanghai, China). Huh-7 cells (1 x 10^5 cells/well) were exposed to ascorbate for 1 hr in six-well plates and washed with PBS. Then, 200 μL of lysis buffer was added to lyse the cells. After centrifugation at 12,000 rpm for 5 min, 100 μL of ATP assay reagent was dispensed to initiate the luminescence reaction; after 10 min, 20 μL of the supernatant was added to the ATP assay reagent, and luminescence was measured on a Lumat LB 9507 tube luminometer (Berthold Technologies GmbH & Co. KG, Bad Wildbad, Germany). ATP standard curves with concentrations between 0 and 1000 μM were generated for each experiment. The ATP concentration was determined from the corresponding standard curve and converted to an intracellular concentration using the cell number per well, counted on a hemocytometer, and normalized to the protein concentration per well (Pierce BCA Protein Assay Kit, Thermo Fisher Scientific).
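The conversion from luminescence to per-cell ATP described above amounts to interpolating on a linear standard curve and normalizing by cell number. A minimal Python sketch, with hypothetical standard-curve readings and sample values standing in for real data:

```python
import numpy as np

# Hypothetical luminescence readings of ATP standards (0-1000 uM), as
# generated for each experiment; values are illustrative only.
std_conc_uM = np.array([0, 10, 50, 100, 500, 1000], dtype=float)
std_lum = np.array([120, 1.1e4, 5.4e4, 1.1e5, 5.5e5, 1.1e6])

# Fit the linear standard curve lum = a * conc + b by least squares
a, b = np.polyfit(std_conc_uM, std_lum, deg=1)

def atp_per_cell(sample_lum, lysate_volume_uL, cell_count):
    """Convert sample luminescence to total moles of ATP in the lysate,
    then normalize per cell (protein normalization would be analogous)."""
    conc_uM = (sample_lum - b) / a
    moles = conc_uM * 1e-6 * lysate_volume_uL * 1e-6  # mol/L * L -> mol
    return moles / cell_count

print(atp_per_cell(sample_lum=2.2e5, lysate_volume_uL=200, cell_count=1e5))
```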
Intracellular reactive oxygen species (ROS) level detection. Dihydroethidium (DHE) was used to measure the intracellular level of ROS. Huh-7 cells (1 x 10^5 cells / 6 cm plate) were incubated with ascorbate (0-10 pmol cell−1) for 1 hr, washed, and then incubated in ascorbate-free medium for 5 hrs before ROS analysis. After treatment, the cells were washed twice with PBS and incubated with 2.0 μM DHE at 37°C in the dark for 30 min. Stained cells were washed, resuspended in PBS, and analysed using a FACStar flow cytometer and FlowJo analytical software.
Measurement of catalase activity. Catalase activity was measured in Huh-7, HepG2, HCCLM9, MHCC97L and LO2 cell lysates using a spectrophotometric assay kit (Beyotime Biotechnology, Shanghai, China). Briefly, cells (1.0 × 10^6) were harvested in 5.0 mL PBS and counted with a hemocytometer so that a well-defined number of cells was used in the assay. After cell lysis and centrifugation at 12,000 rpm for 5 min, the supernatants were combined with catalase assay buffer containing H2O2, and the decomposition of H2O2 was measured at 520 nm in an ELx808™ Absorbance Microplate Reader (BioTek, Winooski, VT, USA).
Glucose uptake assay. Glucose uptake per cell was measured using the Glucose Uptake Cell-Based Assay Kit (Cayman Chemical, Ann Arbor, MI, USA). Cells (1 x 10^5 cells/mL) were grown in 2.0 mL culture medium in 6-well plates overnight and were then treated with different doses of ascorbate for 1 hr. After the medium was removed, the cells were rinsed in PBS and incubated at 37°C in 2 mL culture medium for 5 hrs. The cells were then incubated in triplicate with 2-NBDG for 20 min in 5% CO2 and washed with FBS-free DMEM medium containing 4.5 g/L D-glucose for 5 min. Finally, the cells were washed twice with prechilled PBS, and 100 μL of cell lysis buffer was added. After centrifugation at 12,000 × g for 5 min at 4°C, the fluorescence of aliquots of the supernatants was measured by a fluorescence microplate assay [33]. At the same time, a standard curve was generated by measuring the fluorescence of 2.5-50 μM 2-NBDG in lysis buffer. Xenograft tumour mouse model and bioluminescence imaging. All mice were handled in strict accordance with good animal practice and institutional guidelines under an animal protocol approved by the Institutional Animal Care and Use Committee of the Shanghai Public Health Clinical Center. For the xenograft tumour model, we used Huh-7 cells modified by lentiviral transfection to stably express the firefly luciferase gene, to facilitate in vivo monitoring of tumour development. These cells were cultured to 80% confluence, harvested by trypsinization, washed twice with PBS, resuspended to a final concentration of 2 x 10^6 cells / 100 μL in sterile PBS, and injected subcutaneously into the right flank of 6-week-old male nude mice. When the tumour volume reached 25-50 mm^3, the nude mice bearing subcutaneous Huh-7 cell tumours were randomly divided into three groups: two treatment groups given IP injections of 200 μL or 400 μL of vitamin C (Cat#: H310211486; 5 mL: 1 g; Shanghai Harvest Pharmaceutical Co. Ltd.) (2.0 g/kg or 4.0 g/kg ascorbate) and a control group given an equivalent volume of normal saline (PBS) (7 mice per group). The weight of the mice was measured every two days.
Bioluminescence imaging analysis. Tumour growth was assessed by bioluminescence imaging. Before imaging, the mice in each group were anaesthetized with sodium pentobarbital (3% solution, 50 mg/kg) via intraperitoneal injection; mice were then given an IP injection of 150 mg/kg D-luciferin, and images were captured after 30 min. The anaesthetized mice were placed on the heated imaging platform of an IVIS-100/Spectrum optical imaging system (Xenogen/Caliper, Mountain View, CA). Signal intensity was quantified as the sum of all detected photons within the region of interest per second. Tumours were fixed with formaldehyde and histologically evaluated to verify the accuracy of the bioluminescence data. Acquired images were analysed with the Living Image 3.1 software (Xenogen/Caliper, Alameda, CA). Fluorescence contrast, defined as radiance, was quantified using identically sized regions of interest. The mice were killed humanely after 6 weeks, and the tumours were harvested for RNA, protein and immunohistochemical (IHC) analysis.
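Quantifying signal as the photon sum over a region of interest, as described above, is a single array operation once a mask is defined. A minimal Python sketch with a synthetic image and a hypothetical circular ROI (the Living Image software performs this internally; the sketch only illustrates the arithmetic):

```python
import numpy as np

def roi_total_flux(image_photons_per_s, roi_mask):
    """Sum of detected photons per second over all pixels inside the
    region of interest, mirroring the quantification described above."""
    return float(np.sum(image_photons_per_s[roi_mask]))

# Hypothetical 64x64 image with a bright circular "tumour" region
rng = np.random.default_rng(0)
img = rng.poisson(5, size=(64, 64)).astype(float)
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
img[mask] += 500
print(f"ROI flux: {roi_total_flux(img, mask):.0f} photons/s")
```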
RNA extraction and purification, gene expression profiling and data analysis. Total RNA was extracted from each group of cells using Trizol reagent following the manufacturer's instructions. RNA integrity and concentration were assessed using an Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA, US) and a NanoDrop ND-2000 (Nanodrop). For gene expression profiling, 1.0 μg RNA was amplified and labelled with the Low Input Quick Amp Labelling Kit, One-Color (Agilent Technologies, Santa Clara, CA, US) for one-colour processing, following the manufacturer's instructions. The method used T7 RNA Polymerase Blend, which simultaneously amplified the target material and incorporated Cy3-CTP. Labelled cRNA was purified with an RNeasy mini kit (QIAGEN, GmBH, Germany). Then, 600 ng Cy3-labelled cRNA was hybridized to the Agilent SurePrint G3 Human Gene Expression Microarray (8 x 60K) for 17 hrs, washed with the Gene Expression Wash Buffer Kit, and scanned on an Agilent Microarray Scanner (Agilent Technologies, Santa Clara, CA, US), following the manufacturer's instructions. Data were extracted with Feature Extraction software 10.7 (Agilent Technologies, Santa Clara, CA, US) and analysed with R software. Gene Ontology (GO) and KEGG (Kyoto Encyclopedia of Genes and Genomes) analyses were performed to annotate the differentially expressed genes in HCC tumour tissue from mice treated with IP injections of ascorbate [34].
Quantitative reverse transcription-polymerase chain reaction (qRT-PCR). Five micrograms of RNA were used for the reverse transcription (cDNA synthesis) step. The qPCR reactions were performed on an ABI StepOne Cycler. SYBR® Premix Ex Taq™ II was purchased from TAKARA. The cycling procedure was as follows: 94°C for 30 s, 60°C for 30 s (40 cycles). CDK4, CDK6, c-Myc, Casp3, AGER, DGKK, ASB2, TCP10L2, Lnc-ALCAM-3, and Lnc-TGFBR2-1 expression was assayed in xenograft tumour mice by qRT-PCR, and the primer sequences are listed in Table S1. The individual Ct of each target gene was obtained from three different samples per group and normalized to the Ct of the internal reference 18S. The fold change in transcriptional level relative to the control group was calculated by the ΔΔCt method.
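For readers unfamiliar with the ΔΔCt calculation referenced above, the arithmetic is shown in the Python sketch below; the Ct values are hypothetical examples, not data from this study.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method:
    dCt = Ct(target) - Ct(reference, e.g. 18S);
    ddCt = dCt(treated) - dCt(control);
    fold change = 2 ** (-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values illustrating a ~2.3-fold upregulation:
print(fold_change_ddct(24.0, 12.0, 25.2, 12.0))  # -> ~2.30
```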
Immunohistochemical staining. Immunohistochemical (IHC) staining for DGKK (Cat # : ab103681, Abcam) and AGER/RAGE (A-9) (Cat # : sc-365154, Santa Cruz Biotechnology, Inc.) was performed in 18 formalin-fixed, paraffin-embedded HCC mouse tumour tissue samples. The study was approved by the Shanghai Public Health Clinical Center institutional review boards. The DGKK and AGER staining results were independently evaluated by two expert pathologists (Feng Y and Li Z).
Western blots. Protein samples were lysed in RIPA buffer supplemented with protease inhibitors. Thirty micrograms of total protein were loaded per lane, separated on a 10% sodium dodecyl sulfate-polyacrylamide gel by electrophoresis, and transferred onto nitrocellulose membranes. The membranes were blocked with 5% milk in PBST and then incubated with rabbit anti-Glut1 (Cat#: D160433, BBI Solutions), rabbit anti-Glut3 (Cat#: D260435, BBI Solutions), rabbit anti-human Casp3 (Cat#: D220074, BBI Solutions), c-Myc rabbit polyclonal antibody (Cat#: YT0991, Immunoway) or HSP90 (C45G5) rabbit mAb antibody (Cat#: 4877, Cell Signaling Technology) at 4°C overnight. After washing with PBST, the blots were incubated with a horseradish peroxidase (HRP)-conjugated anti-rabbit IgG or anti-mouse antibody for 1 hr at room temperature. Proteins were detected using enhanced chemiluminescence and autoradiography.
Statistical Analysis. The data were presented as the means ± SD from three independent experiments and evaluated through t-tests. Values of p < 0.05 were considered statistically significant. The analyses were performed by GraphPad Prism 5 software (GraphPad Software, San Diego, CA, USA).
Ascorbate-induced cytotoxicity and cell cycle arrest in hepatic cells
The MTT assay was used to detect ascorbate-induced cytotoxicity in cancerous and non-cancerous hepatic cell lines. As demonstrated in Figure 1A-D, the liver cancer cell lines showed differential sensitivity to ascorbate (Figure 1A) and H2O2 (Figure 1B). The LD50 values of ascorbate for Huh-7, HCCLM9, HepG2 and MHCC97L cells were 200 μM, 300 μM, 950 μM and 2000 μM, respectively, while the normal hepatocyte LO2 cells maintained 80% viability even at 4.0 mM ascorbate. At the same time, the LD50 values of H2O2 for HCCLM9, Huh-7, HepG2, MHCC97L and LO2 cells were 10 μM, 55 μM, 200 μM, 320 μM and 450 μM, respectively. In the soft-agar assay, P-AscH− doses were specified as moles per cell [32]. Consistent with our MTT assay, after a one-hour ascorbate treatment a differential sensitivity to ascorbate was detected among the five hepatic cell lines, Huh-7, HCCLM9, HepG2, MHCC97L, and LO2 (Figure 1C and Figure S1).
Because Huh-7 cells had the lowest LD50 for ascorbate, we characterized these cells more extensively. Cell cycle changes induced by ascorbate were measured by PI staining (Figure 1D-F). Ascorbate at a concentration of 2.0 pmol cell−1 induced a significant increase (from 23.62% to 31.16%) in the percentage of cells in the G0/G1 phase and an even larger decrease (from 53.53% to 30.69%) in the S phase (Figure 1F) compared with no treatment (Figure 1D). qRT-PCR analysis showed that the expression of the cell cycle-related genes CDK4 and CDK6 in Huh-7 cells was increased by the 2.0 pmol cell−1 Asc treatment (Figure 1G-H). Annexin V-FITC and PI staining was applied to detect apoptosis in Huh-7 cells. As shown in Figure 1I-K, after treatment with ascorbate, apoptosis of Huh-7 cells occurred in a concentration-dependent manner. Compared with the control group, the group treated with 2.0 pmol cell−1 ascorbate exhibited a significantly higher percentage of apoptotic cells (Figure 1K): 24 hrs after Asc treatment, apoptosis occurred in 24% of treated cells compared with 1% of control cells. In addition, the apoptosis-related genes c-Myc and Casp3 (caspase-3) were analysed by qRT-PCR. As shown in Figure 1L-M, the expression of c-Myc and Casp3 increased in a concentration-dependent manner. Immunoblot analysis also demonstrated that Casp3 expression increased when Huh-7 cells were treated with increasing concentrations of ascorbate. c-Myc protein expression also increased after treatment with 4.0 pmol cell−1 ascorbate but decreased with 10 pmol cell−1 ascorbate (Figure 1N). These findings are consistent with previous studies showing that oncogenic signals such as Ras and c-Myc are involved in hepatocarcinogenesis and regulate the expression of metabolic enzymes that induce cancer cell death by apoptosis [35][36][37].
Because Asc oxidation can generate H2O2, which is cytotoxic to cancer cells [32], we hypothesized that the sensitivity of hepatic cancer cells to ascorbate was due to their lower capacity to remove extracellular H2O2. An XFe-24 Extracellular Flux Analyzer was used to measure the amount of H2O2 in complete culture medium with the five hepatic cell lines after dissolution of ascorbate (Figure 2A-C). The rate of oxygen consumption (OCR) upon addition of ascorbate to DMEM cell culture medium provides information on the production of H2O2 [32,38]. In our experimental setting, 10,000 cells were treated with 3.0 mM Asc in 100 μL of DMEM medium. Addition of ascorbate to the culture medium resulted in an increase in the background rate of oxygen consumption, quantified as (last rate measurement before ascorbate injection) − (minimum rate measurement after ascorbate). The rates of oxygen consumption by Huh-7, HCCLM9, MHCC97L, HepG2 and LO2 cells were 35.03, 30.03, 41.09, 48.74 and 73.28 pmol min−1 cell−1, respectively (Figure 2B), which represents the rate of H2O2 production from the oxidation of ascorbate. Next, the addition of catalase indicated the accumulation of H2O2 in the medium over the course of an experiment, quantified as (maximum rate measurement after catalase) − (last rate measurement after ascorbate). The assay revealed that the amount of H2O2 in the presence of cells, especially LO2 (1.62 pmol min−1 cell−1) and HCCLM9 cells (0.72 pmol min−1 cell−1), was less than that with HepG2 (2.25 pmol min−1 cell−1), Huh-7 (4.00 pmol min−1 cell−1) and MHCC97L (6.43 pmol min−1 cell−1) cells (Figure 2C). These findings indicate that ascorbate oxidation produces extracellular H2O2, and that the sensitivity of these hepatic cells to P-AscH− is possibly due to their lower capacity to remove extracellular H2O2. Indeed, the differential sensitivity to Asc across different cancer cells was also reflected by catalase activity. We measured intracellular catalase activity in Huh-7, HepG2, HCCLM9, MHCC97L and LO2 cell lysates and verified that the catalase activity of LO2 and HepG2 cells (0.075 units cell−1) was higher than that of Huh-7 (0.045 units cell−1) and HCCLM9 (0.053 units cell−1) cells (Figure 2D), which was consistent with our results on ascorbate oxidation and H2O2-induced cytotoxicity (Figure 1A-C). Our results demonstrate that extracellular ascorbate oxidation and intracellular catalase levels can be correlated with the sensitivity to P-AscH− across different hepatic cells. However, we also noted that catalase activity in MHCC97L cells was lower than in the other hepatic cells, which suggests that other metabolic pathways might also be involved in hepatic cell sensitivity to P-AscH−.
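The two quantities above are simple differences on the OCR trace. The Python sketch below implements them exactly as worded in the text (following its sign convention); the example trace and injection indices are hypothetical, chosen only so that the arithmetic lands near the Huh-7 values quoted above.

```python
import numpy as np

def h2o2_metrics(ocr, idx_asc, idx_cat):
    """Differences on an OCR trace, as worded in the text:
    production   = (last rate before ascorbate injection)
                   - (minimum rate after ascorbate);
    accumulation = (maximum rate after catalase)
                   - (last rate after ascorbate, just before catalase)."""
    production = ocr[idx_asc - 1] - np.min(ocr[idx_asc:idx_cat])
    accumulation = np.max(ocr[idx_cat:]) - ocr[idx_cat - 1]
    return production, accumulation

# Hypothetical per-cell OCR trace (pmol/min/cell); values chosen so the
# two metrics come out near the reported Huh-7 figures (35.03 and 4.00):
ocr = np.array([74.5, 75.0, 40.2, 40.0, 39.97, 44.0, 43.5])
print(h2o2_metrics(ocr, idx_asc=2, idx_cat=5))  # -> (35.03, 4.03)
```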
Ascorbate modifies the mitochondrial energetics of HCC cancer cells.
We then hypothesized that ascorbate cytotoxicity in Huh-7 cells is mediated by altered mitochondrial respiration. To test this possibility, we compared the changes in oxygen consumption rate, spare respiratory capacity (SRC), and extracellular acidification rate (ECAR) in Huh-7 cells in response to different ascorbate doses. Mitochondrial parameters in Huh-7 cells were monitored using a Seahorse XFe-24 oxygen and proton flux analyser after treatment with 0-10 pmol cell−1 ascorbate for 1 hr. A biphasic phenotype of mitochondrial energetics was induced by the different ascorbate concentrations (Figure 3A). Compared with control cells, low-dose (1.0 pmol cell−1) ascorbate significantly increased OCR, by 25.41 pmol min−1 (p = 0.029) (Figure 3Bi), and ATP production (p = 0.027) (Figure 3Biv). ATP analysis of Huh-7 cells treated with ascorbate (0-10 pmol cell−1) also verified that P-AscH− decreased the intracellular concentration of ATP in Huh-7 cells in a dose-dependent manner (Figure 3C). Surprisingly, ascorbate appeared to have no effect on glycolysis in Huh-7 cells; neither low- nor high-dose ascorbate significantly altered the extracellular acidification rate (ECAR) in the Mito Stress Test (Figure 3Aii) or the Glycolysis Stress Test (Figure 4A). Notably, ascorbate has been reported to selectively kill KRAS-mutant cancer cells by downregulating Glut1 expression and thereby affecting glucose consumption, while having no significant effect on the glycolytic rate in KRAS or BRAF wild-type (WT) cells [35,36]; similarly, our ECAR measurements in Huh-7 cells treated with different doses of P-AscH− demonstrated that glycolysis (Figure 4Bi) and non-glycolytic acidification (Figure 4Bii) were not significantly different (all p > 0.05). Previous studies have shown a remarkably low rate of K-Ras mutation and no BRAF mutation in hepatocellular carcinoma cells [39,40], and KRAS and BRAF mutations are not present in Huh-7 cells (http://amp.pharm.mssm.edu/Harmonizome/gene_set/HUH7/CCLE+Cell+Line+Gene+Mutation+Profiles), which might explain why glycolysis, as measured by our ECAR assay, was unaffected by P-AscH− in Huh-7 cells. As illustrated in Figure 4C, glucose uptake was reduced in Huh-7 cells treated with P-AscH−, and the fluorescence intensity decreased gradually as the ascorbate concentration was increased from 4.0 pmol cell−1 to 10 pmol cell−1 relative to cell controls without ascorbate. To determine whether the changes in glucose transport occurred at the level of protein expression, we analysed the expression of Glut1 and Glut3. Immunoblots revealed that the expression levels of Glut1 and Glut3 were decreased in Huh-7 cells treated with P-AscH− compared with controls without ascorbate (Figure 4D).
Figure 4. Glycolysis assay, ROS production and glucose uptake analysis in Huh-7 cells treated with different doses of ascorbate (0-10 pmol cell−1). A) ECAR curve from the Glycolysis Stress Test; B) Glycolysis parameters from the Glycolysis Stress Test, including glycolysis capacity (i) and non-glycolytic acidification (ii); C) Fluorescence intensity changes of 2-NBDG in cells for glucose uptake analysis; D) Immunoblot analysis of Glut1 and Glut3 expression; E) ROS analysed by DHE in Huh-7 cells treated with P-AscH−. One-way ANOVA p values are shown to determine significance across different doses. The significance between CTRL and other doses was determined by subsequent unpaired t-tests. *p<0.05, **p<0.01, and ***p<0.005.
As integrated mitochondrial respiration is closely associated with the production of reactive oxygen species (ROS), modifications of OCR and/or mitochondrial coupling (proton leak) may change the dynamics of ROS generation. We thus examined the effect of ascorbate on intracellular superoxide production using the oxidation-sensitive fluorescent probe DHE. As shown in Figure 4E, intracellular superoxide production increased with ascorbate treatment in a concentration-dependent manner and was dramatically higher in Huh-7 cells treated with 8.0 pmol ascorbate per cell (p = 0.012). Collectively, these data indicate that ascorbate alters the mitochondrial bioenergetics of Huh-7 cells in a dose-dependent manner: low-dose ascorbate boosts and high-dose depresses oxidative phosphorylation, implying that the dose-dependent effects of ascorbate on the activation/depression of the oxidative pathway might be associated with ROS generation.
Pharmacologic ascorbate inhibited HCC growth in a mouse model
For the xenograft tumour mouse model in vivo studies, we chose Huh-7 cells because of their sensitivity to ascorbate.
To determine whether pharmacologic ascorbate inhibits HCC growth, we treated the xenograft tumour model of Huh-7 cells bearing lentivirus-luciferase with PBS, ascorbate at 2.0 g/kg/3 days, or ascorbate at 4.0 g/kg/3 days. Tumour volumes were then measured after 5 weeks of treatment by bioluminescence imaging. We found that tumour growth was significantly reduced with IP injection of ascorbate at 4.0 g/kg/3 days on Day 28 (Figure 5D) and Day 35 (Figure 5E) compared with tumour growth in the PBS control group (Figure 5F). Huh-7 tumour volume in mice treated with the higher dose of ascorbate (4.0 g/kg/3 days IP) was decreased by 48.09% compared with the tumour volume in control mice (PBS) (p < 0.05). However, Huh-7 tumour volume in mice treated with IP injection of ascorbate at 2.0 g/kg/3 days was increased by 17.53% compared with control tumours (PBS) (p < 0.05) (Figure 5E-F). HCC mice given IP injections of ascorbate at 4.0 g/kg/3 days weighed significantly more than HCC mice given IP injections of ascorbate at 2.0 g/kg/3 days and than their controls (PBS) (23.5 g vs. 22.6 g, p < 0.05; 23.5 g vs. 22.1 g, p < 0.05) after 6 weeks. However, the weight of HCC mice given IP injections of ascorbate at 2.0 g/kg/3 days did not differ significantly from that of their controls (PBS) (22.6 g vs. 22.1 g, p = 0.69) (Figure 5G).
Gene expression profiling and pathway analysis of HCC tumour tissue from mice given IP injection of ascorbate at 4.0 g/kg/3 days
To further corroborate the possible molecular mechanism of HCC treated with ascorbate, we analysed genome-wide mRNA expression profiles in nine HCC tumour tissue samples: three from mice treated with IP injections of ascorbate at 4.0 g/kg/3 days, three from mice treated at 2.0 g/kg/3 days, and three from control mice (PBS only, without ascorbate IP). Using a cut-off of a greater than two-fold change and a p value at or below 0.05, we identified changes in the transcript levels of 1632 and 1501 genes/ncRNAs in HCC tumour tissue from mice treated with ascorbate at 4.0 g/kg/3 days and 2.0 g/kg/3 days, respectively, compared with their controls (data not shown). Subsequently, a Kyoto Encyclopaedia of Genes and Genomes (KEGG) analysis was performed to determine the top 30 pathways of the differential mRNAs. Nineteen common pathways were identified in mouse HCC tumour tissue treated with ascorbate at 4.0 g/kg/3 days and 2.0 g/kg/3 days, including Type II diabetes mellitus, Type I diabetes mellitus, TGF-beta signalling pathway, Staphylococcus aureus infection, Rheumatoid arthritis, Protein digestion and absorption, Prion diseases, NOD-like receptor signalling pathway, Malaria, Leishmaniasis, Intestinal immune network for IgA production, Fatty acid degradation, ECM-receptor interaction, Dilated cardiomyopathy, Complement and coagulation cascades, Antigen processing and presentation, Amoebiasis, Allograft rejection, and African trypanosomiasis (Figure S2A-B). Several metabolism and cell differentiation pathways, such as Vitamin digestion and absorption, Toll-like receptor signalling pathway, Renin-angiotensin system, Phagosome, Osteoclast differentiation, Maturity onset diabetes of the young, Linoleic acid metabolism, Glycosphingolipid biosynthesis, Glycosaminoglycan biosynthesis, Fat digestion and absorption, and alpha-Linolenic acid metabolism, appeared only in the top 30 of the analysis of tumour tissue from mice treated with ascorbate at 2.0 g/kg/3 days (Figure S2A). Importantly, 192 genes/ncRNAs were uniquely differentially expressed in HCC tumour tissue obtained from mice treated with IP injections of ascorbate at 4.0 g/kg/3 days (but not at 2.0 g/kg/3 days) when compared with expression in controls (Table S3). Although seven genes (SCNN1A, PPP1R14C, BAD, INPP5D, DHDDS, FMO3, and SPTSSB) were represented in the signalling pathways, each of these seven genes is functionally involved in insulin receptor signalling and metabolism and has been implicated in mitochondrial respiration in the treatment of HCC with high-dose ascorbate [41,42].
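The differential-expression screen above reduces to a joint threshold on fold change and p value. A minimal Python sketch of that filter, with hypothetical probe values (array normalization and per-gene statistics are assumed to have been done upstream):

```python
import numpy as np

def differential_genes(log2fc, pvals, fc_cutoff=2.0, p_cutoff=0.05):
    """Select differentially expressed genes/ncRNAs using the cut-offs
    from the text: absolute fold change > 2 and p <= 0.05."""
    log2fc = np.asarray(log2fc, dtype=float)
    pvals = np.asarray(pvals, dtype=float)
    return np.where((np.abs(log2fc) > np.log2(fc_cutoff))
                    & (pvals <= p_cutoff))[0]

# Hypothetical array results for five probes:
idx = differential_genes(log2fc=[2.3, 0.4, -1.5, 1.1, -0.2],
                         pvals=[0.01, 0.03, 0.04, 0.20, 0.80])
print(idx)  # -> probes 0 and 2 pass both cut-offs
```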
AGER, DGKK, ASB2, TCP10L2, Lnc-ALCAM-3, and Lnc-TGFBR2-1 expression were altered in HCC tumour tissue from mice treated with high-dose ascorbate
We validated the gene expression levels of AGER, DGKK, ASB2, TCP10L2, Lnc-ALCAM-3, and Lnc-TGFBR2-1 in HCC tumour tissue samples from mice treated with high-dose ascorbate by qPCR. Consistent with the array data, qRT-PCR analyses of AGER/RAGE (Advanced Glycosylation End-Product Specific Receptor; Receptor for Advanced Glycosylation End Products) and DGKK mRNA expression levels revealed 5.08-fold (p = 0.020) and 2.24-fold (p = 0.005) increases in AGER/RAGE and DGKK gene expression, respectively, in HCC tumour tissue samples from mice treated with high-dose ascorbate compared with controls, and 11.35-fold (p < 0.001) and 2.47-fold (p = 0.001) increases in AGER/RAGE and DGKK gene expression, respectively, in samples from mice treated with low-dose ascorbate compared with controls (Figure 6A-B). Lnc-ALCAM-3 and Lnc-TGFBR2-1 gene expression levels were increased in HCC tumour mice treated with high-dose Asc by 2.19-fold (p = 0.035) and 2.73-fold (p < 0.001), respectively, compared with controls, and by 2.05-fold (p = 0.111) and 3.61-fold (p < 0.001), respectively, in HCC tumour mice treated with low-dose ascorbate compared with controls (Figure 6C-D). The repression of ASB2 and TCP10L2 gene expression in HCC tumour mice treated with high-dose ascorbate compared with controls was also verified (Figure 6E-F). To further investigate the expression of AGER/RAGE and DGKK proteins in HCC tumour mice treated with high-dose (4.0 g/kg/3 days) and low-dose ascorbate (2.0 g/kg/3 days), we performed immunohistochemical (IHC) staining of 18 tumour tissue samples (Figure 7) and observed weak membranous staining of AGER/RAGE in hepatoma cells and in low-dose ascorbate tissue samples (red arrows) (2.0 g/kg/3 days IP), with the yellow arrows indicating areas of necrosis (Figure 7C-E). The diffuse and strong membranous staining of AGER/RAGE in high-dose ascorbate samples (4.0 g/kg/3 days IP) indicated higher expression of the protein (Figure 7F). Weak cytoplasmic and nuclear staining of DGKK in hepatoma cells and in the low-dose ascorbate samples (2.0 g/kg/3 days IP) is shown by the red arrows, whereas the yellow arrows indicate areas of necrosis (Figure 7H). DGKK was strongly expressed in the cytoplasm of cancer cells and co-expressed in the nucleus and cytoplasm of a few hepatoma cells in the high-dose ascorbate samples (4.0 g/kg/3 days IP) (Figure 7I). Taken together, stronger staining with both anti-AGER/RAGE and anti-DGKK antibodies was observed in HCC cancer cells treated with high-dose ascorbate (4.0 g/kg/3 days IP) (Figure 7).
Discussion
Pharmacological ascorbate may represent an easily implementable, non-toxic adjuvant to conventional HCC treatments. To date there have been two case studies involving HCC patients and high-dose IV ascorbate. One was a patient with hepatocellular carcinoma with a remarkable response to high-dose IV ascorbate, reported to us by clinicians in Malaysia (clinical notes from Dr. Raymond Ngeh and Dr. Robert Luk). The patient was a middle-aged woman with massive primary HCC; her cancer specialist suggested no further treatment, as she was expected to die within a few weeks. Following her pre-treatment CT scan, the patient was given high-dose IV vitamin C (1.5-2.0 g per kg body weight) together with low-dose chemotherapy comprising 3 generic chemotherapy drugs at 1/3 of the usual dose (HiCLoChemo). PET/CT scan images of the patient after 4 cycles of HiCLoChemo treatment, compared with the pre-treatment PET/CT scan, are presented in Figure S3. The HCC is now in remission, and the patient remains otherwise healthy. In the second case report, the patient presented with metastatic disease in the left 8th rib, and his medical oncologist determined that he was a candidate for standard-of-care chemotherapy with sorafenib along with 75 grams of IV ascorbic acid administered three times per week. Before high-dose IV vitamin C treatment, he had a left 8th rib metastasis (74 × 44 mm), along with a 2.8 × 2.2 cm ablation cavity from pre-metastatic treatment in the right lobe of the liver. After 16 weeks of cycles of ascorbate and sorafenib, his rib lesion was markedly smaller (43 × 28 mm), with no change in liver function and no suggestion of additional metastasis. The authors also demonstrated that ascorbate could act synergistically with sorafenib in killing HepG2 cells, involving dysregulation of cellular calcium homeostasis [20]. These two cases indicate that high-dose IV ascorbate shows strong promise for the treatment of HCC in humans and highlight a plausible mechanism of its anti-tumour activity. [Table fragment. Organismal Injury and Abnormalities: ACAN, B4GALNT1, CHGA, CLDN7, CPS1, CST2, DEFB121, EDN1, FAM81A, HSPA2, KCNJ1, KCNJ8, KRTAP10-3, MITF, MOV10L1, NUPL2, PSMG2, PTOV1, PYGO2, RGPD4 (includes others), RNF175, SLC8A2, SMTNL2, SPATA8, STARD8, STX19, TBK1, TCP10/TCP10L2, TMCC2, TMF1, TSFM, UBC.] Although most animals and plants can synthesize ascorbate through a sequence of enzyme-driven steps converting monosaccharides to ascorbate (Figure S4), some animals, including humans, cannot synthesize ascorbate because they lack functional L-gulonolactone oxidase (GULO) [43]. Importantly, ascorbate produces extracellular hydrogen peroxide involving redox-active labile iron and induces DNA damage and ATP depletion in CC, colon, NSCLC and GBM cancer cells [8,10,11,44], which implies that cancer cells treated with high-dose ascorbate can utilize a metabolic pathway in the hypoxic environment of hepatoma. We therefore compared the changes in OCR, ECAR and SRC in Huh-7 cells in response to ascorbate treatment. Interestingly, low-dose (1.0 pmol cell−1) ascorbate promoted Huh-7 mitochondrial respiration, with significant increases in OCR and ATP production but decreases in extramitochondrial OCR compared with untreated control cells. These results are in accordance with previous studies in which patient fibroblasts treated with ascorbate exhibited significantly higher ratios of complexes I-III and II-III in comparison with the same cell lines without ascorbate treatment [45].
High-dose (4.0 pmol cell−1) ascorbate, however, produced a biphasic phenotype in mitochondrial energetics: mitochondrial respiration, ATP production and SRC were significantly reduced, while proton leakage, extramitochondrial OCR and intracellular ROS were increased. Because extracellular H2O2 is essential and acts independently of mitochondrial dysfunction, peroxide may also trigger gene expression that leads to cell death; these data indicate that high doses of ascorbate serve as a pro-drug that kills hepatoma cells by altering mitochondrial respiration.
The main findings of the present study are that a high dose of ascorbate (4.0 g/kg/3 days) given by IP injection significantly repressed tumour growth, while a lower dose of ascorbate (2.0 g/kg/3 days) did not repress HCC tumour growth. Given the pharmacokinetics of ascorbate, our results demonstrate that the pharmacological dose of ascorbate given by IP injection is of critical importance, with high doses of ascorbate inhibiting tumour growth whereas low doses act as a tumour growth factor. Our IP-injection results are in accordance with previous studies in which oral supplementation of ascorbate had no anticancer effect [5][6][7], whereas parenteral administration of high-dose ascorbate significantly inhibited tumour growth in TLT (TREM-like transcript)-bearing mice [46]. However, early-phase research on ascorbate treatment was inappropriately performed, and various parameters in those clinical studies (doses, routes of administration, optimal schedule) were not correctly defined or standardized, leading to mixed results and controversy [47]. Several papers have studied pharmacologic ascorbate in hepatoma using in vitro models [7,20,48]. In mice bearing hepatoma, parenteral administration of 1.0 g/kg sodium ascorbate, either IV or IP, achieved pharmacologic concentrations of ascorbate in blood (> 1.0 mM), whereas oral administration of the same dosage did not [7]. Only with parenteral administration was the growth rate of a murine hepatoma decreased, since this route produces the pharmacologic concentrations of ascorbate that exhibit anticancer properties. A key advantage of IV ascorbate over conventional chemotherapeutic agents is its lack of toxicity, and because of these properties, in vivo IV treatment may be feasible at lower frequency. One study used a frequency of one or two intravenous ascorbate treatments per week, similar to or less frequent than ours [16]. Most animal tumours need daily or even twice-daily treatment; if treatment is needed only every three days, hepatoma in humans may be responsive to ascorbate at much less frequent dosing than is currently used for other tumours. Dosing frequency is a real clinical problem: if a cancer requires less frequent treatment, i.e. once weekly vs. three times weekly, the treatment is more likely to be tested in people and actually used. The special sensitivity of human hepatoma to IV ascorbate and the need for less frequent treatment are therefore definite advantages.
To determine the mechanistic basis of this phenotype, we analysed the gene expression profiles of HCC tumours from mice treated with IP injections of ascorbate at 4.0 g/kg/3 days using gene arrays and identified changes in the transcript levels of 192 genes/ESTs. The upregulation of AGER and DGKK gene expression was validated by qRT-PCR and IHC analysis. RAGE, a member of the immunoglobulin protein family, is a multiligand receptor able to bind advanced glycation end products (AGEs), which are heterogeneous, reactive, and irreversibly crosslinking molecules formed primarily by the non-enzymatic reaction of sugars with proteins [49,50]. AGEs can induce apoptosis of endothelial progenitor cells (EPCs) by activating RAGE. Dysregulation of the AGEs/RAGE axis in EPCs may promote atherosclerosis, and the NADPH/ROS/JNK signalling axis may serve as a potential target for the clinical treatment of atherosclerosis [51,52]. AGE and LPS increase IL-6 secretion depending on NF-κB activation and ROS production through RAGE [53]. RAGE stimulation also induces the generation of ROS, mainly through the activity of NADPH oxidases, and ROS accumulation has been linked to the formation of AGEs in various diabetic tissues [50]. When cells are unable to properly adapt to ROS generation, RAGE activation induces oxidative stress, leading to neuronal, rather than cancer, cell death [52]. DGKK (Diacylglycerol Kinase Kappa) was repressed in preputial tissues of carriers of the risk allele rs1934179 [54,55]. Treatment with H2O2 can result in the generation of diacylglycerol (DAG) and IP3, which can increase intracellular calcium and activate several forms of PKC, leading to Ras and Raf activation [56][57][58].
Pathway analysis of our gene array data identified five canonical signalling pathways dysregulated in cancer, namely, Insulin Receptor Signalling, Dolichol and Dolichyl Phosphate Biosynthesis, UDP-N-acetyl-D-glucosamine Biosynthesis II, Ceramide Biosynthesis, and IL-3 Signalling. In rats, ascorbate was found to attenuate upstream hepatic insulin action and affect hepatic insulin signalling by impairing the phosphorylation and activation of the insulin receptor and its subsequent substrates [59]. Ascorbate suppresses IL-3-induced phosphorylation of MAP kinase, which may modulate IL-3-mediated cytokine responses and therefore play a role in controlling inflammatory responses [60]. Ceramides can induce apoptosis by altering the cellular redox status and, together with ROS, are involved in signal transduction by many extracellular agents [61]. Ascorbate-mediated production of oxidative stress has been shown to retard the tumour growth of HCC, and this remains the predominant mechanism proposed for the cellular effects of ascorbate [20,62]. Ascorbate can induce the production of ROS, leading to oxidative stress and cell death, a death pathway that is particularly interesting given the multiple apoptotic defects usually exhibited by cancer cells.
In summary, our in vitro studies have shown that hepatic cancer cell death from pharmacologic levels of ascorbate is mediated by hydrogen peroxide, and our in vivo studies have shown that parenteral ascorbate slows tumour growth via mitochondrial damage and multiple gene transcription changes. Our findings support the need for further detailed dose-response studies of ascorbate in an appropriate animal cancer model. The clinical efficacy of high-dose ascorbate in HCC treatment also needs to be explored. | 2019-10-31T06:31:46.493Z | 2019-10-18T00:00:00.000 | {
"year": 2019,
"sha1": "f8e4811072033576f79a1f5a130ce6ee8c4c3e2b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7150/thno.35378",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8e4811072033576f79a1f5a130ce6ee8c4c3e2b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
259146166 | pes2o/s2orc | v3-fos-license | Heart Failure in a Patient With Metastatic Well-Differentiated Neuroendocrine Tumor
Patients with neuroendocrine malignancy with liver metastases are at risk for carcinoid heart disease which, if left unchecked, can lead to heart failure. This case study demonstrates a clinical situation in which an advanced practitioner performed a thorough workup consisting of lab work and imaging studies, including echocardiogram, cardiac MRI, and dotatate PET/CT, as well as outside record review and comprehensive physical exam. Early detection, intervention, and control of disease are paramount to prevent potentially life-limiting carcinoid heart disease.
HISTORY
Mrs. R is a 47-year-old Hispanic female with no significant past medical history who initially presented in Mexico with a 6-month duration of epigastric abdominal pain, abdominal bloating, watery diarrhea, and facial flushing. She underwent a CT of the chest/abdomen/pelvis with contrast in March 2019, which revealed diffuse, hypodense distribution of metastatic deposits in the liver with undetermined primary origin. She underwent a liver biopsy in April 2019, which confirmed grade 2 moderately differentiated neuroendocrine tumor ("atypical carcinoid") with probable intestinal origin. The diagnosis was confirmed by immunohistochemistry, positive for both chromogranin and synaptophysin. Liver function tests and complete blood count were within normal limits, as were carcinoembryonic antigen, alpha fetoprotein, and cancer antigen 125. There was no record of biochemical testing results (chromogranin A, 24-hour urine, or plasma 5-hydroxyindoleacetic acid [5-HIAA]) near the time of diagnosis, as unfortunately, medical records sent from Mexico were incomplete.
Approximately 3 months after diagnosis (July 2019), she was seen by an oncologist in Mexico and began long-acting octreotide acetate 20 mg intramuscularly monthly. She continued long-acting octreotide until May 2021 and stated that during the approximately 23 months that she received monthly injections, she had only mild improvement in her symptoms; in fact, she admitted to a 40-lb weight loss. She was last seen by her oncologist in Mexico in June 2021, at which time her chromogranin A level was 66,320 ng/mL.
PRESENTATION
Mrs. R then relocated to the US and presented to our institution's emergency department the following month with complaints of persistent epigastric abdominal pain; watery, non-bloody diarrhea of 5 to 10 stools per day; abdominal bloating; and weight loss. She stated that her last long-acting octreotide injection was in May 2021 and since that time her watery diarrhea had worsened. She stated she was last told by her oncologist in Mexico that the cancer had damaged a valve in her heart. Mrs. R admitted to progressive dyspnea on exertion, bilateral lower extremity edema, intermittent chest discomfort at rest and with exertion, and fatigue. A physical exam was consistent with a systolic ejection murmur, 2+ lower extremity edema, jugular venous distention, and hepatomegaly extending four fingerbreadths below the costal margin.
WORKUP
B-type natriuretic peptide (BNP) was 1,326 pg/mL. Imaging revealed near-complete replacement of the liver with soft-tissue masses consistent with metastatic disease, and large-volume abdominopelvic ascites and anasarca (Figure 1). Transthoracic echocardiography revealed a severely dilated right atrium, torrential tricuspid regurgitation due to incomplete coaptation and fixed open leaflets, and mild/moderate pulmonic stenosis. Left ventricular ejection fraction was 68%. Cardiology and cardiothoracic surgery teams were consulted for consideration of valve replacement. A cardiac MRI confirmed severe tricuspid regurgitation with a regurgitant volume of 89 mL/cycle, a regurgitant fraction of 66%, and dilation of the right atrium and right ventricle. The cardiology and cardiothoracic surgery teams ultimately decided that her severe liver dysfunction precluded standard cardiac intervention, and she was referred to the interventional cardiology service for percutaneous options.
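The regurgitant volume and fraction reported on cardiac MRI are related by the standard definition RF = RVol / total stroke volume. A minimal sketch of that arithmetic applied to the reported values; the derived stroke volumes below are back-calculated illustrations, not values from the report:

```python
# Relating regurgitant volume (RVol) and regurgitant fraction (RF),
# assuming the standard definition RF = RVol / total stroke volume.
rvol_ml = 89.0  # regurgitant volume per cycle (from the MRI report)
rf = 0.66       # regurgitant fraction (66%, from the MRI report)

total_sv_ml = rvol_ml / rf             # total right-ventricular stroke volume
forward_sv_ml = total_sv_ml - rvol_ml  # effective forward stroke volume

print(f"Total SV ~ {total_sv_ml:.0f} mL, forward SV ~ {forward_sv_ml:.0f} mL")
# Total SV ~ 135 mL, forward SV ~ 46 mL
```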
While inpatient, she was administered long-acting octreotide 30 mg intramuscularly, which only mildly improved her symptoms of watery diarrhea and facial flushing, so she was administered lanreotide 120 mg subcutaneously 2 weeks later. Mrs. R also underwent repeat liver biopsy, which confirmed well-differentiated neuroendocrine tumor.
WHAT IS THE CORRECT DIAGNOSIS FOR MRS. R?
A Infiltrative cardiac metastasis B Infectious endocarditis C Carcinoid heart disease
DISCUSSION
A Infiltrative cardiac metastasis. Cardiac metastasis from primary neuroendocrine tumors is exceedingly rare and usually found incidentally on dotatate imaging (also called octreoscan). The incidence of intracardiac metastasis is approximately 4% in all patients with metastatic carcinoid tumors (Kinney et al., 2020). Nonetheless, the advanced practitioner should rule out infiltrative cardiac metastasis by dotatate imaging as the cause of heart failure in this patient population. In this case, infiltrative cardiac metastasis was not supported by dotatate PET/CT and cardiac MRI.

B Infectious endocarditis. Infectious endocarditis is an inflammation of one or more valves of the heart, usually caused by a bacterial infection or, more rarely, a fungal infection such as Candida (Vyas, 2020). Cancer patients are frequently at risk for infections due to central line insertions and immunosuppression from cancer-directed therapies. Mrs. R had no known history of endocarditis prior to her presentation at our institution, and blood cultures were subsequently negative. Echocardiogram did not support leaflet vegetation. Therefore, infectious endocarditis was ruled out as the cause of valvular dysfunction.
C Carcinoid heart disease (correct answer).
Mrs. R's presentation and workup were most consistent with carcinoid heart disease (CHD). She had extensive liver metastatic disease and an elevated BNP, and her echocardiogram revealed tricuspid regurgitation, fixed/open leaflets, and mild/moderate pulmonary valve stenosis. Carcinoid heart disease occurs almost exclusively in the presence of extensive liver metastasis (Ram et al., 2019). Liver dysfunction from metastatic tumor burden impairs the breakdown of serotonin metabolites in the circulation. It is this excessive exposure to serotonin that causes plaque formation on cardiac valves (Grozinsky-Glasberg et al., 2015). Carcinoid heart disease most commonly affects right-sided heart valves (tricuspid and pulmonic; Bober et al., 2020). Due to inactivation of serotonin in the lung vasculature by pulmonary monoamine oxidase, left-sided heart valves are spared exposure to excessive serotonin while the right-sided heart valves are not (Figure 2; Jin et al., 2020).
CLINICAL OUTCOME
Due to significant tumor burden and substantial delays in obtaining adequate funding, it was determined that cardiac intervention would be deferred, and systemic chemotherapy was initiated with the plan to reevaluate for response after four cycles and re-refer to interventional cardiology for percutaneous valve replacement if she had an adequate response to treatment. She began carboplatin dosed at area under the curve (AUC) 6 administered on day 1, followed by etoposide 100 mg/m² on days 1 to 3 every 21 days. The pre-treatment chromogranin A level was 97,420 ng/mL. She was then discharged home after cycle 1 with close oncologic and cardiac follow-up. Mrs. R went on to receive a total of three cycles of systemic therapy with carboplatin/etoposide, which she tolerated well with minimal toxicity.

[Figure 2. This illustration is consistent with carcinoid heart disease. Note the metastatic tumors in the liver, endocardial plaques within the right ventricle, and thickening of tricuspid and pulmonary valves (Mayo Clinic, 2015). Used with permission from the Mayo Foundation for Medical Education and Research. All rights reserved.]
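Carboplatin dosing "at area under the curve 6" conventionally follows the Calvert formula, total dose (mg) = target AUC × (GFR + 25). A minimal sketch of that calculation; the GFR used below is a hypothetical placeholder, since Mrs. R's renal function is not reported:

```python
def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
    """Calvert formula: total carboplatin dose (mg) = AUC x (GFR + 25).

    In practice the GFR entered is often capped (e.g., at 125 mL/min)
    per institutional policy.
    """
    return target_auc * (gfr_ml_min + 25.0)

# Illustrative only -- this GFR is an assumption, not from the case report.
print(carboplatin_dose_mg(target_auc=6.0, gfr_ml_min=90.0))  # 690.0 mg
```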
Mrs. R presented to the emergency department in September 2021 with complaints of worsening abdominal pain, nausea without vomiting, progressive weakness, and diminished oral intake. She was cachectic with lower extremity edema, and the abdomen was distended and diffusely tender. Lactate was 13 mmol/L and BNP 20,093 pg/mL. On the following day, Mrs. R became acutely unresponsive with fixed, dilated pupils. Orders for do not resuscitate/do not intubate were obtained, and she passed away after cardiac arrest.
CONCLUSION
While the correct diagnosis was likely the most obvious, it is essential that the advanced practitioner pursue a thorough workup, including baseline lab assessment (BNP, chromogranin A, and urine 5-HIAA) in addition to imaging studies with a transthoracic echocardiogram, cardiac MRI, and dotatate PET/CT. Carcinoid heart disease is more common in patients with metastatic liver disease who exhibit symptoms of carcinoid syndrome and have high levels of 5-HIAA and serotonin metabolites. If liver metastases are identified, serial BNPs and echocardiograms are warranted to evaluate for the development of CHD. Proper evaluation and early recognition are essential to circumvent the manifestations of heart failure in patients with metastatic neuroendocrine tumors.
Disclosure
The authors have no conflicts of interest to disclose. | 2023-06-14T05:06:38.809Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "1b0c79009ce286ea82f3a50c67bc4c947abd2c3e",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1b0c79009ce286ea82f3a50c67bc4c947abd2c3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252418970 | pes2o/s2orc | v3-fos-license | Antioxidant and anti-inflammatory activities of lycopene against 5-fluorouracil-induced cytotoxicity in Caco2 cells
5-Fluorouracil (5FU) is widely used to treat colorectal cancer (CC), and its main mechanisms of anticancer action are through generation of ROS, which often results in inflammation. Here, we test the effect of lycopene against 5FU in the Caco2 cell line. Caco2 cells were exposed to 3 µg/ml of 5FU alone or with 60, 90, or 120 µg/ml of lycopene. This was followed by assessment of cytotoxicity, oxidative stress, and gene expression of inflammatory genes. Our findings showed that lycopene and 5FU co-exposure induced a dose-dependent cytotoxic effect without compromising membrane integrity based on the LDH assay. Lycopene also significantly enhanced 5FU-induced SOD activity and GSH level compared to control for all mixture concentrations (p < 0.01). Lycopene alone and in combination with 5FU induced expression of IL-1β, TNF-α, and IL-6. Furthermore, IFN-γ expression was significantly enhanced only by the mixture of lycopene (90 µg/ml) and 5FU (p < 0.05). In conclusion, lycopene supplementation with 5FU therapy resulted in improvement in antioxidant parameters such as catalase and GSH levels, giving the cell capacity to cope with 5FU-mediated oxidative stress. Lycopene also enhanced IFN-γ expression in the presence of 5FU, which may activate antitumor effects, further enhancing the cancer-killing effect of 5FU. © 2022 The Author(s). Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
One of the most common forms of cancer that significantly impacts the global burden of cancer deaths is colorectal cancer (CC). Being the third leading global cause of cancer mortality, the prognosis for colorectal cancer still remains unpredictable for more than 50% of affected patients (Ferlay et al., 2015; Sung et al., 2021). The availability of chemotherapeutic agents and advancements in treatment modalities have significantly improved overall survival for CC patients, especially those in early disease stages. However, patients with advanced-stage, metastatic disease have a considerably poorer prognosis for overall survival because conventional therapies are unsuitable for treating metastatic tumor cells (Simon, 2016).
One major problem with chemotherapy in cancer treatment is the associated toxicity, despite the significant increase in patients' overall survival. Oxidative stress arising from reactive oxygen species (ROS) is a major factor responsible for CC pathogenesis (Basak et al., 2020). Generated ROS include hydroxyl radicals (HO•), superoxide anions (O2•−) and hydrogen peroxide (H2O2). These ROS mediate genetic alterations due to DNA oxidation, resulting in DNA damage that may be critical to the propagation and progression of cancers like CC (Bazhin et al., 2016). ROS can oxidize DNA bases, causing lesions that create either single- or double-strand breaks. When such modifications occur in the genes of important proteins, such as the tumour suppressor protein p53, they can initiate the development of cancer.
5FU is a conventional, widely investigated drug and the first-line treatment of choice for patients with CC (Peng et al., 2020). One of the mechanisms of anticancer action of 5FU is the generation of ROS such as HO• and O2•−, which attack cancer cells in several different ways. Furthermore, this surge in intracellular ROS generation by 5FU is the main factor responsible for the major side effects associated with 5FU therapy, such as cardiotoxicity, hepatotoxicity and nephrotoxicity (Refaie et al., 2021; Elghareeb et al., 2021). This is especially because this mechanism of action is not target-specific, as it is in immunotherapy, and thus affects nearby normal healthy cells. Several studies have investigated the use of antioxidants, specifically dietary antioxidants, to suppress the side effects of 5FU (Rtibi et al., 2021). One such useful natural antioxidant that has been widely investigated to suppress chemotherapy-induced side effects of drugs such as cisplatin is lycopene (Kulhan et al., 2019). Lycopene is a carotenoid compound that is abundantly present in tomatoes, guava and watermelon (Moroni et al., 2021). There are well-established reports in the literature documenting the antioxidant activity of lycopene in ameliorating chemotherapy-induced toxicity. Moroni et al. (2021) showed that lycopene suppressed skin toxicity induced by panitumumab in patients with metastatic CC. Furthermore, lycopene was found to significantly suppress inflammatory responses in CC cells by inhibiting the expression of pro-inflammatory mediators such as cyclooxygenase-2 (COX-2), interleukin-1β (IL-1β), IL-6 and tumor necrosis factor-α (TNF-α) (Cha et al., 2017). Here, we investigated the role of lycopene in mediating antioxidant and anti-inflammatory effects against 5FU-mediated oxidative stress and inflammatory responses.
Cytotoxicity assay
The cytotoxic effects of both 5FU and lycopene on the colorectal cancer cell line Caco2 were evaluated using the MTT assay. Cells were grown and then exposed to different concentrations of 5FU and lycopene for 48 h. The IC50 values were determined for each drug: 6.1 µg/ml for 5FU and 183.7 µg/ml for lycopene.
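IC50 values like those above are typically obtained by fitting a sigmoidal dose-response model to the viability readings. A minimal sketch using a four-parameter logistic fit with SciPy; the concentration-viability pairs below are invented placeholders, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability vs concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder data (percent viability vs concentration in ug/ml);
# the study's raw MTT readings are not reported in the text.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32])
viability = np.array([98, 95, 85, 62, 38, 20, 10])

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[5, 100, 5, 1], maxfev=10000)
print(f"Estimated IC50 = {params[2]:.1f} ug/ml")
```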
Lactate dehydrogenase assay (LDH)
An LDH assay (Sigma Aldrich, St Louis, USA) was employed to assess cell viability. Cells were exposed to 3 µg/ml of 5FU; 60, 90, or 120 µg/ml of lycopene; and mixtures of the two compounds (3 µg/ml of 5FU plus 60 µg/ml of lycopene; 3 µg/ml of 5FU plus 90 µg/ml of lycopene; 3 µg/ml of 5FU plus 120 µg/ml of lycopene). After 24 h of exposure, 100 µl of cell-free culture medium was collected and kept on ice. A mixture of 50 µl of sample culture medium and 50 µl of master mix (2 µl LDH substrate and 48 µl assay buffer) was added per well of a new 96-well plate and incubated as above with shaking for 15 min, followed by absorbance measurement at 450 nm.
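LDH release is usually converted to percent cytotoxicity against spontaneous- and maximum-release controls. A minimal sketch assuming that standard formula; the kit's exact control layout is not given in the text, and all readings are illustrative:

```python
def ldh_cytotoxicity(sample_abs, spontaneous_abs, maximum_abs):
    """Percent cytotoxicity from LDH-release absorbances (A450).

    spontaneous_abs: untreated-cell control (background release)
    maximum_abs: fully lysed control (total releasable LDH)
    """
    return 100.0 * (sample_abs - spontaneous_abs) / (maximum_abs - spontaneous_abs)

# Illustrative numbers only.
print(ldh_cytotoxicity(sample_abs=0.62, spontaneous_abs=0.20, maximum_abs=1.40))
# -> 35.0 (% cytotoxicity)
```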
SOD activity

Afterwards, the cells were treated as follows: 3 µg/ml of 5FU; 60, 90, or 120 µg/ml of lycopene; and mixtures of the two compounds (3 µg/ml of 5FU plus 60 µg/ml of lycopene; 3 µg/ml of 5FU plus 90 µg/ml of lycopene; 3 µg/ml of 5FU plus 120 µg/ml of lycopene), then incubated again for 24 h at 37 °C. The cells were then collected, washed with 1X PBS and lysed in cold buffer (210 mM mannitol, 1 mM EGTA, 20 mM HEPES and 70 mM sucrose, pH 7.2) by sonicating for 5 min in 1 ml of buffer. Cells were centrifuged for 5 min at 1500 rpm and 4 °C to collect the supernatant. The SOD assay (Cayman SOD kit No. 706002) was performed by mixing 200 µl of diluted radical detector with 10 µl of sample per well of a 96-well plate. The reaction was initiated with 20 µl of diluted xanthine oxidase in each well, followed by 30 min of incubation at room temperature with shaking. Absorbance was scanned over the 440-460 nm range. SOD activity (U/ml) was calculated from the standard curve as: SOD (U/ml) = (SOD units read from the standard curve) × (0.23 ml/0.01 ml) × sample dilution.

Glutathione (GSH)

Cells were seeded at 1 × 10^6 cells/ml and incubated for 24 h at 37 °C. Afterwards, cells were exposed to the treatments described above, then incubated again for 24 h at 37 °C. The cells were detached using a scraper, washed in 1X PBS and sonicated for 5 min in 1 ml of PBS. After sonication, the cells were centrifuged at 1500 rpm for 5 min at 4 °C, and the collected supernatant was transferred into a new 1.5 ml tube. The GSH assay (Cayman GSH kit No. 703002) was performed according to the manufacturer's instructions. A 50 µl sample and 150 µl of assay cocktail (11.25 ml of 2-(N-morpholino)ethanesulfonic acid [MES] buffer, 0.45 ml of reconstituted cofactor, 2.1 ml of reconstituted enzyme, 2.3 ml of H2O, and 0.45 ml of 5,5′-dithio-bis-2-nitrobenzoic acid) were mixed. The plate was incubated for 25 min in the dark, and absorbance was measured at 410 nm.
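The SOD activity formula given in the SOD section above follows the kit's standard-curve arithmetic. A minimal sketch assuming a linear standard curve of linearized absorbance rate versus SOD units; the slope, intercept and sample rate below are placeholders, not kit values:

```python
def sod_activity_u_per_ml(sample_lr, slope, intercept, dilution=1.0):
    """SOD activity (U/ml) from the linearized rate (LR) of a sample,
    assuming a standard curve of the form LR = slope * SOD(U) + intercept
    and the kit's volume correction (0.23 ml reaction / 0.01 ml sample)."""
    return ((sample_lr - intercept) / slope) * (0.23 / 0.01) * dilution

# Placeholder standard-curve parameters and sample reading.
print(sod_activity_u_per_ml(sample_lr=1.8, slope=15.0,
                            intercept=0.9, dilution=2.0))  # 2.76 U/ml
```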
Catalase activity
Cells at 1 × 10^6 cells/ml were exposed to the drugs as above, then incubated again for 24 h at 37 °C. After incubation, the cells were rinsed with 1X PBS and sonicated in catalase assay buffer for 5 min. The cells were then centrifuged at 4 °C for 10 min at 10,000 rpm, and the supernatant was collected. The catalase assay (BioVision, Catalog No. K773-100) was performed by adding 2 µl of sample, brought up to 78 µl with assay buffer, to each well of a 96-well plate and incubating for 5 min at 25 °C. Afterwards, 50 µl of developer mix (46 µl catalase assay buffer, 2 µl OxiRed™ Probe, 2 µl HRP lyophilized solution) was added per well and incubated at 25 °C for 10 min, and absorbance measurement was done at 570 nm.
Intracellular ROS generation
ROS generation was analyzed using a ROS assay from Sigma-Aldrich (St Louis, MO, USA). Cells were grown in a 96-well black microplate as well as a 6-well plate at 5 × 10^5 cells/ml in culture medium with 10% FBS and incubated for 24 h at 37 °C. Afterwards, cells were treated as previously described and incubated for 24 h. The media was aspirated, then 100 µl of new media mixed with a 5X concentration of deep red solution was added per well of the 96-well plate, while 1 ml of the same solution was added per well of the 6-well plate. After addition, cells were incubated for one hour at 37 °C. The signal was then read in the 485-528 nm range. For imaging, the media was aspirated from the 6-well plate, and cells were washed several times with 1X PBS and imaged under a fluorescence microscope (DMLB, Leica, Germany).
Gene expression
Analyses of TNF-α, IL-1α, IL-1β, IL-6, IL-27, IL-33, IFN-γ, Cox-1, and Cox-2 mRNA levels were done by RT-PCR (PE Applied Biosystems, Foster City, California). Briefly, Caco2 cells were treated as previously described and incubated as above. Total RNA was extracted with TRIzol RNA Reagent (Cat. No. 79306) and then converted into cDNA using a cDNA kit (Thermo Fisher Cat. No. 4368814, USA). A 1000 ng sample of purified RNA was used to prepare the complementary single strand of cDNA. The total volume of one cDNA reaction was 20 µl, containing 10 µl of cDNA master mix and 10 µl of purified RNA. After cDNA synthesis, 2.5 µl of cDNA was mixed with 17.5 µl of GoTaq qPCR Master Mix (SYBR® Green). The master mix for each sample was prepared as follows: 10 µl of GoTaq Green Master Mix, 0.8 µl of forward β-actin primer, 0.8 µl of reverse β-actin primer, and 5.9 µl of nuclease-free water. The fold change for each target gene in exposed versus non-exposed cells was calculated using the ΔΔCt method: relative expression ratio = 2^(−ΔΔCt), where ΔCt = Ct(target) − Ct(β-actin) and ΔΔCt = ΔCt(treated sample) − ΔCt(untreated sample).
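The 2^(−ΔΔCt) calculation above translates directly into code. A minimal sketch with invented Ct values, using β-actin as the reference gene as in the text:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method: 2**(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # dCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control    # dCt, untreated sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only.
print(fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                  ct_target_control=26.0, ct_ref_control=18.0))  # 4.0
```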
Results
Based on the determined IC50 values for the drugs, 3 µg/ml of 5FU and 60, 90, and 120 µg/ml of lycopene were selected for further experiments.
Effect on cell membrane integrity
LDH is an oxidoreductase enzyme catalyzing the conversion of pyruvate into lactate. In pathological conditions such as cancer or tissue damage, cells release LDH into the bloodstream due to damage to the plasma membrane, implying a toxic effect of cell-damaging compounds. Caco2 cells were exposed to 3 µg/ml of 5FU; 60, 90, or 120 µg/ml of lycopene; or a mix of 3 µg/ml of 5FU with either 60, 90, or 120 µg/ml of lycopene for 24 h. Assessment of the toxic effect of these compounds by the LDH assay indicated that 120 µg/ml of lycopene, as well as mixtures of 3 µg/ml of 5FU with either 60 or 90 µg/ml of lycopene, induced significant toxicity in the cell line compared to control. However, 60 µg/ml of lycopene seemed to suppress 5FU-induced toxicity (p < 0.05) (Fig. 1).
Generation of ROS
Caco2 cells were exposed to varying concentrations of 5FU, lycopene and mixtures of both, as outlined earlier, to evaluate the impact of these exposures on the generation of intracellular ROS. Our results indicated that treatment with 60 µg/ml lycopene significantly increased ROS generation (p < 0.05), as did L90, L120, 5FU-L60, 5FU-L90, and 5FU-L120. Furthermore, L60 and L120 seemed to enhance 5FU-induced ROS generation (Fig. 2A & B).
Activity of oxidative stress markers
Superoxide dismutase (SOD) is a metalloenzyme that facilitates the dismutation of superoxide anion into molecular oxygen and hydrogen peroxide. This dismutation process is one of the various cellular antioxidant defense systems protecting the cell from oxidative damage (Cecerska-Heryć et al., 2021). Exposure of Caco2 cells to different concentrations of lycopene and/or 5FU, to understand the influence of these compounds on the dissipation of oxidants, showed a significant increase in SOD activity for all treatments compared to control (p < 0.01), while 5FU-L60, 5FU-L90, and 5FU-L120 enhanced SOD activity compared to 5FU alone (p < 0.01) (Fig. 3).
Catalase is an intracellular enzyme that catalyses the decomposition of H2O2 into H2O and O2 in order to suppress the associated oxidative stress (Tehrani and Moosavi-Movahedi, 2018). Caco2 cells exposed to L90 and L120, with or without 5FU, showed significantly increased catalase activity (p < 0.01, p < 0.05) (Fig. 4). Furthermore, L90 also significantly increased catalase activity compared to 5FU alone (p < 0.05).
Reduced glutathione (GSH) is a tripeptide that functions in the scavenging of many reactive species either within or outside the cell (Schmidt & Dringen, 2012). In light of the observed ROS levels and SOD and catalase activities, the level of GSH in Caco2 cells was evaluated after exposure to the drugs. The results show a significant increase in cells exposed to 5FU-L60 when compared with 5FU (p < 0.05). Similarly, 5FU-L90 and 5FU-L120 also induced significant increases in GSH levels in exposed cells compared to control untreated cells (p < 0.01) (Fig. 5).
Expression of inflammatory markers
Because oxidative stress is tightly associated with inflammatory responses, we evaluated the gene expression of different pro-inflammatory cytokines (Fig. 6). Analysis of IFN-γ expression showed that 5FU-L90 significantly increased the gene's expression (p < 0.05), whereas treatment with L120 significantly reduced the expression of the IFN-γ gene (p < 0.01). TNF-α expression was found to decrease significantly after 24 h exposure to L90, L120 and 5FU-L120 (p < 0.05 and p < 0.01). Furthermore, all other exposures resulted in a considerable but non-significant decrease in the gene expression of TNF-α. Assessment of IL-27 expression in the Caco2 cell line showed that L120 exposure resulted in a significant decrease in IL-27 (p < 0.05). In contrast, co-exposure of the cells to 5FU and L120 caused a significant increase in IL-27 expression (p < 0.05). Cox-1 and Cox-2 are involved in prostaglandin synthesis from arachidonic acid, and their expression is known to be linked to oxidative stress. Exposure of Caco2 cells to the single compounds or their combinations did not significantly influence Cox-1 expression. However, a considerable reduction in Cox-1 expression was found upon exposure to 5FU, L60 and L90, while the combination of 5FU and L120 slightly increased Cox-1. As for Cox-2, 5FU-L60, 5FU-L90, and 5FU-L120 significantly increased Cox-2 expression compared to control (p < 0.05). Furthermore, 5FU-L90 also induced a significant increase in Cox-2 expression compared to 5FU (p < 0.05). The other 5FU and lycopene treatments had no significant effect on Cox-2 expression. Assessment of IL-6 expression indicated that 5FU significantly increased expression of the gene (p < 0.05), and similarly for cells exposed to 5FU-L120 (p < 0.01), compared to control. All single exposures to lycopene were found not to influence IL-6 expression, even at the higher dose of L120. In a similar trend, cells exposed to 5FU showed a significant increase in IL-1α expression, while exposure to all single doses of lycopene resulted in significant reductions in IL-1α expression compared to the control group (p < 0.01, p < 0.05). However, the combination of 5FU with all doses of lycopene caused an increase in the gene's expression when compared to control. We also found that L120 significantly increased 5FU-mediated expression of IL-1α (p < 0.05). All exposures resulted in significant reductions in IL-1β expression compared to control (p < 0.05), except for combinational exposure to 5FU-L120, which significantly increased IL-1β expression compared with control (p < 0.05) as well as compared with 5FU (p < 0.05). Expression of IL-33 in Caco2 cells showed that 5FU significantly increased IL-33 expression, while L90 and L120 exposures resulted in reductions in the gene's expression. When compared with control untreated cells, L90 and L120 significantly reduced IL-33 expression, while L60 did not influence expression of the gene. All the combination exposures caused a significant increase in IL-33 expression, except for 5FU and L60, when compared to control cells (p < 0.05, p < 0.01). Interestingly, all doses of lycopene significantly reduced 5FU-mediated expression of IL-33 (p < 0.05).

[Fig. 1. Effect of lycopene, 5FU, and their mixtures on the viability of the Caco2 cell line (LDH assay). Data represent mean ± SD; *p < 0.05 and **p < 0.01 vs control; #p < 0.05 vs 5FU.]
[Fig. 3. Effect of lycopene, 5FU, and their mixtures on SOD activity in Caco2 cells after 24 h exposure. Data represent mean ± SD; *p < 0.05 and **p < 0.01 vs control; #p < 0.05 and ##p < 0.01 vs 5FU.]
[Fig. 4. Catalase activity after 24 h exposure to lycopene, 5FU, and their mixtures in Caco2 cells. Data represent mean ± SD; *p < 0.05 and **p < 0.01 vs control; #p < 0.05 vs 5FU.]
[Fig. 6. Effect of lycopene, 5FU, and their mixtures on IFN-γ, TNF-α, IL-27, Cox-1, Cox-2, IL-6, IL-1α, IL-1β, and IL-33 mRNA expression in the Caco2 cell line. Data represent mean ± SD; *p < 0.05 and **p < 0.01 vs control; #p < 0.05 vs 5FU.]
Discussion
In this study, we investigated co-exposure of lycopene with the conventional anticancer drug 5FU to understand its antioxidant and anti-inflammatory influence on 5FU-mediated responses in Caco2 cells. 5FU-mediated ROS generation is reported to be associated with its cytotoxic effects on both cancer and normal cells, which is one of the setbacks to the application of 5FU as an anticancer drug (Blondy et al., 2020). The body's antioxidant defense system comprises different factors, such as enzymes like catalase and SOD, and organic compounds like GSH. The balance between these factors and ROS or reactive nitrogen species (RNS) within the cells dictates the development of oxidative stress or the maintenance of redox balance. Furthermore, the activities of these factors depend upon the rate of ROS or RNS generation within the cells.
Oxidative stress occurs because of the instability of generated ROS, which attack intracellular molecules inside the tumor microenvironment. High generation of ROS such as hydroxyl radical, hydrogen peroxide and superoxide is common in CC and other cancers (Cheung and Vousden, 2022). One of the mechanisms of 5FU in CC is the activation of apoptotic signals that are transmitted into the nucleolus (Pagliara et al., 2016). The reports that chemotherapy with 5FU results in induction of ROS during cancer treatment are supported by our findings in this study. This increased ROS generation activates the antioxidant defense, increasing the activities of SOD and catalase, which catalyze the conversion of superoxide to hydrogen peroxide, and of hydrogen peroxide to oxygen and water, respectively (Das and Roychoudhury, 2014). However, chemotherapeutics like 5FU cause redox imbalance by suppressing the activities of catalase and SOD, as well as reducing GSH levels, during high ROS generation (Kocer and Naziroglu, 2013). This is similar to the finding in this study, where 5FU produced no increase in GSH level or catalase activity, creating a scenario in which the available antioxidant activities are too low to handle the increase in ROS generation. This scenario is characteristic of the redox imbalance required for its cell growth-inhibitory action.
Lycopene is a known antioxidant, which we employed here to enhance the cell-killing effect of 5FU with reduced inflammation and oxidative stress. Previous investigations have demonstrated the ability of lycopene to inhibit the toxicity of different anticancer drugs (Karimi et al., 2005; Kulhan et al., 2019; Moroni et al., 2021). While we found lycopene to induce ROS generation in this cell line, combinational treatment of lycopene and 5FU was also found to increase SOD and catalase activities, in addition to an increase in GSH level, which will help the cell maintain redox balance. Also in support of our finding, some studies have shown that lycopene suppresses the toxic effects of chemotherapeutics owing to its antioxidant effects. Atessahin et al. (2006) and Elsayed et al. (2021) showed that lycopene reduced, respectively, testicular toxicity induced by Adriamycin and hepatotoxicity induced by cisplatin, through restoration of depleted GSH levels and reduction of malondialdehyde levels in rat models. However, the antioxidant activity of lycopene appears to be a double-edged sword, as a previous study has shown that the oxidant or antioxidant activity of lycopene in the Caco2 cell line depends on the concentration used (Makon-Sébastien et al., 2014). This may explain why we observed an increase in ROS generation in this study compared to some others stated earlier.
Oxidative stress and inflammation are linked. Inflammation is one of the body's defense systems during pathogenic infection. Increased ROS generation during these disease states results in oxidation of biomolecules like proteins and lipids, causing activation or expression of inflammatory signals like COX-1/2, IL-1β, IL-6 and TNF-α (Liu et al., 2017; Harijith et al., 2014). There is an imbalance in the immune response during low GSH levels and high ROS, which aggravates inflammation that may drive cancer progression (Wu et al., 2014). One of the mechanisms used by cancer cells is to drive inflammation that results in chemo-resistance, and this is a common phenomenon with 5FU (Wang et al., 2016). Studies have demonstrated that 5FU alters the expression of pro-inflammatory cytokines like TNF-α, IL-1β, and IL-6 (Logan et al., 2008; Raghu Nadhanan et al., 2012; Reers et al., 2013). IL-6 in particular may drive cancer progression by activating the JAK/STAT pathway, resulting in an unending loop of IL-6-mediated inflammation (Yusuf and Casey, 2019; Hu et al., 2021). While we found that 5FU induced increased expression of TNF-α, IL-1β, IL-1α and IL-6, which may drive tumorigenesis, none of the doses of lycopene that we tested suppressed the expression of these inflammatory cytokines. This may have been due to the concentrations of lycopene trialled in this study, as we have highlighted earlier. As such, adjusting these concentrations may help harness and improve the anti-inflammatory effect of lycopene in the presence of 5FU. Interestingly, expression of IFN-γ was enhanced by lycopene in cells co-exposed to 5FU. IFN-γ is a cytokine known to have antitumor activity by activating cellular immunity and subsequently stimulating an antitumor immune response (Jorgovanovic et al., 2020). Based on its anti-proliferative, pro-apoptotic, and cytostatic roles, IFN-γ has been highlighted as potentially useful as an adjuvant immunotherapy in different cancer types.
Conclusion
We have shown here that lycopene supplementation during 5FU therapy in the Caco2 cell line resulted in improvement in antioxidant parameters such as catalase and GSH levels, giving the cell capacity to cope with the oxidative stress mediated by 5FU. We also showed that lycopene enhanced IFN-γ expression in the presence of 5FU, which may activate antitumor effects, further enhancing the cancer-killing effect of 5FU. While the lycopene concentrations used in this study may have resulted in the increased expression of inflammatory cytokines like TNF-α, IL-1β, and IL-6, adjustment of the concentration may further improve the anti-inflammatory role of lycopene on 5FU-mediated inflammation.
Availability of data and materials
The data generated or analyzed in this article are publicly available online without request. | 2022-12-04T06:11:47.879Z | 0001-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "897390382eb5b278702d48ef31089366e439cc30",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jsps.2022.09.011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "897390382eb5b278702d48ef31089366e439cc30",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
4044955 | pes2o/s2orc | v3-fos-license | Cytotoxicity effects of metal oxide nanoparticles in human tumor cell lines
Metallic and metal oxide nanoparticles (Nps) have a wide range of applications in various settings including household, cosmetics and chemical industries, as well as for coatings. Nevertheless, an in-depth study of the potential toxic effects of these Nps is still needed, in order to fulfill the mandatory requirement of ensuring the safety of workers, patients and the general public. In this study, Quick Cell colorimetric assays were used to evaluate the in vitro toxicity of different metal oxide Nps [Fe(II,III)Ox, TiOx, ZnO and CeO2] in several cell lines. The ZnO Nps were found to be highly toxic, with a lethal dose ≤100 μg/ml for all the cell lines studied. Western blot was also used to test the ability of the different Nps to activate the complement pathway. However, no activation of this cascade was observed when the Nps were added. In addition, the aggregation state and charge of the Nps in culture media was studied by dynamic light scattering (DLS) and measurement of zeta potential. Transmission Electron Microscopy was used to analyze Np uptake and localization at the cellular level.
Introduction
The term nanoparticle (Np) applies to particles between 1 and 100 nm in at least two dimensions [1]. These particles have specific physicochemical properties that are not shown in the bulk form [2], with most of the unique properties of Nps being defined by their high surface-to-volume ratio, which implies that almost all the material is at the surface, and by the presence of quantum confinement phenomena at the nanoscale. Compared to bulk materials, Nps offer a larger surface for adsorption and sometimes a higher reactivity, acting as catalysts in many processes [3]. The huge potential of Nps for various applications makes an in-depth analysis of their potential toxicity in humans essential. In this context, a new discipline called nanotoxicology has arisen, a branch of toxicology and nanotechnology dealing with the interaction of nanomaterials, nanostructures and devices with biological molecules and organisms, aiming to understand possible toxicological side effects of Nps [2]. One of the scopes of nanotoxicology is the evaluation of the safety of Nps for industrial applications, providing information about the undesirable effects of Nps and developing tools to prevent such effects.
Two main mechanisms might be responsible for eventual toxicity of metallic and metal oxide Nps. First, the intrinsic catalytic activity of such Nps can disturb several processes and intracellular signaling pathways. Second, ions can be released from the Np, affecting the finely regulated concentration of metallic ions inside the cell. Nps could induce toxicity in several organs, even leading to systemic toxicity [1]. They could also affect the immune system, activating or inhibiting it, by inducing allergic responses or hypersensitivity, the generation of antibodies against the Nps or their coatings, or by activating cells of the reticuloendothelial system [4].
In this study, using different Np concentrations in several cell lines, in vitro cytotoxicity analyses and endocytosis assays were undertaken to evaluate the potential toxicity of metal oxide Nps.
Nps
The Fe(II,III)Ox and TiOx Nps were supplied by PlasmaChem (Berlin, Germany), and the ZnO and CeO2 Nps by Evonik Degussa (GmbH, Germany) (Fig. 1). The concentration of the Fe(II,III)Ox Nps was 72 mg/ml, and for the remaining Nps, stock solutions of 72 mg/ml in PBS were prepared. The sterility of the Nps was preserved in all cases.
Cells
The toxicity assays were performed in the human tumoral cell lines A549, NCI-H460, SK-MES-1 and HeLa, all purchased from ATCC (American Type Culture Collection). All cell types were cultured in RPMI medium (GIBCO -Invitrogen Corp., Grand Island, NY), supplemented with 10% FBS, at 37ºC, 5% CO 2 .
Aggregation studies
To evaluate the behavior of the Nps in the physiological medium RPMI with 10% FBS, a 24-well plate was used, in which aggregation was assessed for sonicated and unsonicated Nps. The 20 min sonication was carried out in a Branson ultrasound bath (Branson 1510, Danbury, CT) at low frequency (47 kHz), preventing Np exposure to potential contaminating agents. For these assays, the final concentration of the Nps was 8 µg/ml, in a 1 ml final volume. The Nps were incubated overnight at 37 °C and the formation of aggregates was evaluated by optical microscopy with an inverted microscope, model IX50, from Olympus, with 20× and 40× objectives (Olympus Optical Co, GmbH, Germany).
Dynamic light scattering (DLS)
A Malvern NanoZS device (UK) was used for DLS measurements.
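Instruments such as the NanoZS derive hydrodynamic size from the measured diffusion coefficient via the Stokes-Einstein relation, d = kBT/(3πηD). A minimal sketch of that conversion; the temperature, viscosity and diffusion coefficient below are illustrative values, not measurements from this study:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter_nm(diff_coeff_m2_s, temp_k=298.15,
                             viscosity_pa_s=8.9e-4):
    """Stokes-Einstein: d = k_B * T / (3 * pi * eta * D), returned in nm.
    Defaults assume water at ~25 degrees C."""
    d_m = K_B * temp_k / (3.0 * math.pi * viscosity_pa_s * diff_coeff_m2_s)
    return d_m * 1e9

# Illustrative diffusion coefficient for a ~100 nm aggregate in water.
print(hydrodynamic_diameter_nm(4.9e-12))  # ~100 nm
```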
Cellular proliferation colorimetric assay
To measure the effect of the Nps on cellular viability, the colorimetric kit Quick Cell Proliferation Testing Solution (GenScript Corporation, Piscataway, NJ, USA) was used following the manufacturer's instructions. Once the optimal cell number for each cell line was determined, the viability assay was performed, including the Nps at three different concentrations: 0.5 µg/ml, 50 µg/ml and 1 mg/ml. Briefly, cells were incubated with 200 µl RPMI 10% FBS, both in the absence and in the presence of Nps, for 24 and 48 h. The plates were then centrifuged at 1000 g for 1 min, 100 µl of the supernatant was discarded and 50 µl of Quick Cell reagent was added. Subsequently, the plates were incubated for 4 h, centrifuged again, and the supernatants were transferred to clean plates to avoid possible interference from the Nps. Finally, the absorbance was measured at 450 nm in an Envision multidetector (Perkin Elmer Inc., Norwalk, Connecticut, USA). As a positive toxicity control, cells were incubated with 5% Triton X-100 (Sigma-Aldrich, Steinheim, Germany), which was removed 2 hours before the 24/48 h time point; in wells used as a control of cellular death, 100 µl of medium was added. As negative controls, RPMI and Nps alone were used. The results were then analyzed using the following equation: % cellular viability = [(Abs(cells + Nps) − Abs(Nps)) / (Abs(cells) − Abs(RPMI))] × 100. For measuring the lethal dose of the ZnO Nps, the same assay was performed, incubating the cells with intermediate concentrations of these Nps, ranging from 0.5 up to 500 µg/ml.
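The viability equation above, with its Np-only and medium-only blanks, can be written directly as a function. A minimal sketch with illustrative absorbance readings:

```python
def percent_viability(abs_cells_np, abs_np_blank, abs_cells, abs_rpmi_blank):
    """Percent viability corrected for Np and medium absorbance:
    (A[cells+Nps] - A[Nps]) / (A[cells] - A[RPMI]) * 100."""
    return 100.0 * (abs_cells_np - abs_np_blank) / (abs_cells - abs_rpmi_blank)

# Illustrative A450 readings only.
print(percent_viability(abs_cells_np=0.95, abs_np_blank=0.15,
                        abs_cells=1.10, abs_rpmi_blank=0.10))  # 80.0
```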
Analysis of apoptosis and necrosis
The FITC Annexin V Apoptosis Detection Kit I from BD Biosciences (San Diego, USA) was used according to the manufacturer's instructions. Briefly, cells cultivated in exposure media were stained with Annexin V-FITC, which binds to phosphatidylserine at the outer cell membrane and can therefore be used as a marker to detect apoptosis. Propidium iodide (PI) was used to quantify necrotic cells. The dye stains DNA once the membrane becomes permeable to PI. Stained cells were then analyzed by flow cytometry (FACSCalibur, Becton Dickinson, Mountain View, USA) at an excitation wavelength of 488 nm. FITC and PI fluorescence were detected in the green and red fluorescence channels, respectively.
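Annexin V-FITC/PI events are conventionally classified by quadrant: double-negative = viable, annexin+/PI− = apoptotic, annexin−/PI+ = necrotic, double-positive = late apoptotic. A minimal sketch of that logic; the gate thresholds are instrument-dependent placeholders, not values from this study:

```python
def classify_event(annexin_fitc, pi, fitc_gate=1e3, pi_gate=1e3):
    """Quadrant classification of one flow-cytometry event.
    Gate thresholds are illustrative; real gates are set per instrument
    and per experiment from unstained/single-stained controls."""
    a_pos, p_pos = annexin_fitc > fitc_gate, pi > pi_gate
    if a_pos and p_pos:
        return "late apoptotic"
    if a_pos:
        return "apoptotic"
    if p_pos:
        return "necrotic"
    return "viable"

print(classify_event(5e3, 2e2))  # apoptotic
```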
Complement activation
Western blot with an anti-C3 antibody was carried out to analyze the degree of degradation of this factor upon Np addition. A pool of human sera from healthy donors was incubated with two different concentrations of Fe(II,III)Ox, TiOx, CeO2 and ZnO Nps (0.5 and 50 µg/ml). Cobra venom factor (CVF; Quidel Corporation, San Diego, CA, USA) and PBS were used as positive and negative controls, respectively. The membrane was revealed with an antibody specific for C3b from Abcam (Cambridge, UK).
Scanning electron microscopy
Cells were prefixed in 2.5% glutaraldehyde/0.1 mol/L sodium cacodylate (pH 7.4). After postfixation in 1% osmium tetroxide (in 0.2 mol/L cacodylate buffer), cells were dehydrated in a series of increasing ethanol concentrations and critical-point dried using carbon dioxide. After coating with gold, cells were examined with a JEOL JSM-6700F scanning electron microscope.
Results and discussion
Metal oxide Nps are widely employed in creams, implants and drug carriers, and as contrast agents [5], but for the potential use of these Nps in biomedicine, strict toxicity rules should be followed. The Nps should also be stable in physiological conditions, not forming aggregates that could occlude the capillaries. To test aggregation, we immersed the Nps in physiological medium and found that all Nps aggregated in these conditions (not shown), with the smallest aggregates corresponding to the ZnO Nps. To disperse the Nps they were sonicated, but this proved to be only partially successful (Figure 1a). To further address the aggregation status of the Nps, dynamic light scattering (DLS) experiments were carried out, dispersing the particles in different media, with some Nps being sonicated while others were left unsonicated. Concurring with the results of previous studies, a strong aggregation effect was observed for all the Nps analyzed (Figure 1b and not shown). This clearly represents an important limitation for the in vivo use of these Nps. The use of in vitro assays of cellular viability in different cell lines is essential for evaluating the potential toxicity of Nps. In this study, we employed toxicity studies based on colorimetric methods. For an Np to be considered non-toxic, we followed the criteria of the Nanotechnology Characterization Laboratory (NCL) in Frederick (Maryland, USA), especially that cell viability at 48 h should be higher than 75%. Using the Quick Cell assay, it was seen that ZnO Nps induced massive toxicity in all the cell lines studied at 24 and 48 hours (Figures 2a-d), in agreement with previous studies by other research groups [6][7][8]. This could be due either to the presence of small aggregates formed by the ZnO Nps, which could interact more strongly with the cell causing its death, or to the release of Zn2+ ions from the Nps, which are also toxic.
These assays were optimized, as cellular metabolism varies significantly among cell lines and viability can be affected by factors such as cell density, the percentage of living cells with respect to dead ones, and different proliferation rates. Once the optimal amount of cells to be analyzed for all the cell lines studied was determined (data not shown), we were able to carry out dose-response viability assays with these Nps, and found that the lethal dose 50 (LD50), able to cause the mortality of at least 50% of the cells, was ≤100 µg/ml for all the cell types studied (Figures 3a-d).
In addition, the populations of necrotic and apoptotic A549 cells exposed to ZnO Nps and stained with annexin V-FITC/PI were analyzed by flow cytometry. The percentages of viable and dead cells (apoptotic, necrotic and late apoptotic cells) as a function of exposure time and concentration are shown in Figure 3e. With higher Np concentrations and longer exposure times, the number of dead cells increased, mainly late apoptotic and apoptotic ones, with the latter only seen after 24-h exposure time. Although 3 µg/ml ZnO Nps had only a small impact on A549 cells, 30 and 100 µg/ml ZnO Nps considerably reduced cell viability. Even after 12-h exposure at Np concentrations of 30 and 100 µg/ml, treated cells did not meet the "Acceptance Criteria". Yet, a ZnO concentration of 3 µg/ml is considered non-toxic for A549 cells, even after 72-h treatment with the Nps. Another important limitation to be taken into account regarding the potential in vivo use of Nps is that they are quickly removed from the circulation by the reticuloendothelial system, which would make the use of these compounds as drug carriers or contrast agents more difficult. Nevertheless, covering the Nps with agents such as polyethylene glycol can improve their permanence in the circulation, thereby avoiding recognition by the cells of the reticuloendothelial system [9]. In our study, transmission electron microscopy was used to evaluate the intracellular localization of TiOx Nps (Figure 4). After 6-h incubation, a large amount of particles adhered to the cell membrane, and the cells appeared to have ingested some of these particles (Figure 4c). Most of the particles seemed to be confined inside vesicles distributed across the cytoplasm, not crossing into the nucleus. At 24-h incubation time with the Nps, particles were also seen attached to the cell membrane, suggesting that the cells had ingested a relatively large amount of these compounds. The vesicles moved towards the nuclear membrane, but few had penetrated inside the nucleus (Figure 4d).
Possible activation of the complement system, a key component of the innate immune system consisting of a cascade of more than 30 factors, should also be taken into consideration before in vivo use of Nps. It is well established that, in vitro, many types of particle are able to activate the alternative pathway of the complement system [10]. In our study, western blot was used to evaluate the ability of the Fe(II,III)Ox, ZnO, CeO2 and TiOx Nps to activate the complement system. The use of a specific antibody against the C3 factor makes it possible to evaluate the degradation of this protein, which corresponds to activation of the complement cascade. It was seen that none of the Nps assessed was able to induce complement activation (Figure 5), as the C3 factor remained largely intact, with basal degradation similar to that of the negative control.

[Figure 4 caption: At higher magnification, many particles can be seen attached to the membrane, with clusters at the indented cell membrane in the process of endocytosis (top) and endosomes filled with particles. Most of the Nps appear confined inside vesicles distributed across the cytoplasm, not crossing into the nucleus (4c). At 24-h incubation, particles are also attached to the cell membrane, suggesting that the cells have ingested a relatively large amount of particles, and two of the vesicles close to the nucleus appear to be fusing. The vesicles moved towards the nuclear membrane, but few have penetrated inside the nucleus (4d). Bars: 2, 1 and 0.5 µm.]
Figure 5.
The complement system is not activated by the Nps. The degradation of the complement factor C3 was analyzed by Western blot and revealed with an anti-C3 antibody. A pool of sera from different healthy donors was incubated with the Nps indicated, at 0.5 and 50 µg/ml, with cobra venom factor as a positive control of C3 degradation or with PBS as negative control. The bands of 115 kDa and 43 kDa correspond to the intact C3 factor and the main degradation products of this protein, respectively.
Conclusion
In summary, taking into account the information available and the data obtained in this study, the Nps analyzed show a tendency to form aggregates in media containing serum. In addition, the ZnO Nps were found to be toxic, with a lethal dose ≤100 µg/ml for all the cell lines studied. The number of dead cells increased with rising Np concentration and exposure time, but an increase in apoptotic cells was seen only after 24 h. Moreover, most of the Nps seemed to be confined inside vesicles distributed across the cytoplasm, not crossing into the nucleus, and none of the Nps assessed was able to induce complement activation. Further in-depth research into these aspects is, therefore, still required to determine the potential toxic effects on human health. | 2018-03-23T18:45:20.728Z | 2011-07-06T00:00:00.000 | {
"year": 2011,
"sha1": "00039af15de237006c8d87f104e904d410590c04",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/304/1/012046",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3fcdb3565624b820e0a1423c19c9b07cd617b61b",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
252081635 | pes2o/s2orc | v3-fos-license | 11β-HSD1 participates in epileptogenesis and the associated cognitive impairment by inhibiting apoptosis in mice
Background Glucocorticoid signalling is closely related to both epilepsy and associated cognitive impairment, possibly through mechanisms involving neuronal apoptosis. As a critical enzyme for glucocorticoid action, the role of 11β-hydroxysteroid dehydrogenase 1 (11β-HSD1) in epileptogenesis and associated cognitive impairment has not previously been studied. Methods We first investigated the expression of 11β-HSD1 in the pentylenetetrazole (PTZ) kindling mouse model of epilepsy. We then observed the effect of overexpressing 11β-HSD1 on the excitability of primary cultured neurons in vitro using whole-cell patch clamp recordings. Further, we assessed the effects of adeno-associated virus (AAV)-induced hippocampal 11β-HSD1 knockdown in the PTZ model, conducting behavioural observations of seizures, assessment of spatial learning and memory using the Morris water maze, and biochemical and histopathological analyses. Results We found that 11β-HSD1 was primarily expressed in neurons but not astrocytes, and its expression was significantly (p < 0.05) increased in the hippocampus of PTZ epilepsy mice compared to sham controls. Whole-cell patch clamp recordings showed that overexpression of 11β-HSD1 significantly decreased the threshold voltage while increasing the frequency of action potential firing in cultured hippocampal neurons. Hippocampal knockdown of 11β-HSD1 significantly reduced the severity score of PTZ seizures and increased the latent period required to reach the fully kindled state compared to control knockdown. Knockdown of 11β-HSD1 also significantly mitigated the impairment of spatial learning and memory, attenuated hippocampal neuronal damage and increased the ratio of Bcl-2/Bax, while decreasing the expression of cleaved caspase-3. Conclusions 11β-HSD1 participates in the pathogenesis of both epilepsy and the associated cognitive impairment by elevating neuronal excitability and contributing to apoptosis and subsequent hippocampal neuronal damage. Inhibition of 11β-HSD1, therefore, represents a promising strategy to treat epilepsy and cognitive comorbidity.
Introduction
Epilepsy is a common neurological disorder characterised by recurrent seizures. In addition to seizures, it is associated with a range of psychological, cognitive and behavioural disorders [1], which are critical determinants of impaired quality of life [2]. Cognitive impairment in epilepsy appears to be the consequence of complex interactions among the aetiologies of epilepsy, the severity of the seizures themselves, interictal epileptiform discharges, and anti-epileptic drugs [3]. However, there is evidence that some patients have pre-existing cognitive complaints prior to the onset of epilepsy, the severity of which can predict treatment response [4]. These observations underline the need for novel therapeutic approaches that suppress seizures and also improve the associated cognitive impairment.
Stress is a commonly reported precipitant of seizures [5]. It is well documented that approximately half of individuals with epilepsy report more seizures following acute stressful situations or periods of stress [5]. Glucocorticoids are major stress hormones, and their relationship with epilepsy is bidirectional. On the one hand, seizures and epilepsy can lead to activation of the hypothalamic-pituitary-adrenal (HPA) axis [6]; on the other hand, the excess glucocorticoids that occur in times of stress can contribute to epileptogenesis by increasing neuronal excitability and lowering the seizure threshold in many animal models of epilepsy [7,8]. In addition, hypersecretion of glucocorticoids can impair cognitive functioning [9]. As a key brain region involved in both epilepsy and cognition, the hippocampus contains a high density of glucocorticoid receptors [10] and is therefore particularly vulnerable to the detrimental effects of glucocorticoids [11]. For example, it was reported that aged people with significantly and persistently elevated cortisol levels showed reduced hippocampal volumes and deficits in hippocampus-dependent cognitive functions compared to controls with normal cortisol levels [12]. Therefore, treatment targeting excessive glucocorticoid effects may be beneficial for both seizures and the concomitant cognitive impairment.
Glucocorticoid activity is determined by the density of 'nuclear' receptors and by intracellular metabolism by the 11β-hydroxysteroid dehydrogenase (11β-HSD) enzymes, which act as intracellular gatekeepers of tissue glucocorticoid action [13]. Of these, 11β-hydroxysteroid dehydrogenase 2 converts active glucocorticoids into inactive forms and is mainly expressed in the target organs of mineralocorticoids, such as the kidney. In contrast, 11β-HSD1, found predominantly in the liver, adipose tissue and brain, catalyses the conversion of circulating inert cortisone to active cortisol in humans (11β-dehydrocorticosterone to corticosterone in mice), and thus stimulates glucocorticoid action in these tissues [14]. In the brain, 11β-HSD1 is highly expressed in regions that underpin cognitive functions, including the prefrontal cortex and hippocampus [15]. Previous studies have shown that pharmacological inhibition or genetic knockdown of 11β-HSD1 protects against the cognitive impairment caused by excessive local steroid action in animal models of Alzheimer's disease and chronic stress [16,17]. Apoptosis of neurons has been suggested to be an important mechanism underlying the hippocampal neuronal death induced by glucocorticoids, leading to cognitive impairment [18,19], and a previous study demonstrated that overexpression of 11β-HSD1 decreased cell proliferation and caused cell apoptosis [20].
Therefore, we hypothesise that 11β-HSD1 might participate in epileptogenesis, and that reducing glucocorticoid action in the brain by inhibiting 11β-HSD1 might suppress epileptic seizures, inhibit neuronal apoptosis induced by seizures, and protect against secondary cognitive impairment.
Animals
Forty adult male C57BL/6 mice (The Experimental Animal Center of Wenzhou Medical University, Zhejiang, China) weighing 20-25 g were used in this experiment. All mice were housed under a controlled 12/12 h light/dark cycle at a temperature of 22-26 °C, with water and food available ad libitum. Animals were housed for at least one week for acclimatisation before the stereotaxic procedure. All animal use was approved by the Institutional Review Board of Wenzhou Medical University (ethical batch number: 2021-0155), and experiments were performed in accordance with the National Institutes of Health guidelines for the care and use of laboratory animals.
Primary cultures of mouse hippocampal neurons
Hippocampal neurons were prepared from C57BL/6 mice in accordance with established methodology [21]. Briefly, pregnant mice (E18) were sacrificed using CO2 euthanasia followed by decapitation, and the embryos were removed and maintained in HBSS. Hippocampal tissue was isolated from the embryos, minced into small pieces, and digested with 0.125% Trypsin-EDTA (Gibco) for 25 min at 37 °C. The tissue pieces were then mechanically triturated and dispersed in 2 ml DMEM (meilunbio) containing 10% FBS (Sigma) using a sterile, flame-polished glass Pasteur pipette. The cell suspension was centrifuged at 2000 rpm for 5 min, and the cell pellet was resuspended to obtain a concentration of 2.5 × 10⁵ cells/ml. Cells were then seeded on glass slides coated with poly-L-ornithine (Sigma) in a 24-well plate and incubated for 4 h at 37 °C. After that, the DMEM was discarded, and Neurobasal medium supplemented with B27 (Gibco) and 0.5 mM glutamine (Gibco) was added.
Neurons cultured in vitro for three days (DIV3) were then transfected with the 11β-HSD1 overexpression plasmid. Briefly, 0.2 μg of plasmid diluted in 15 μl of Opti-MEM (Gibco) was mixed with 0.5 μl of Lipofectamine 2000 (Invitrogen) diluted in 15 μl Opti-MEM, and incubated for 25 min at room temperature to form DNA-Lipofectamine complexes. Next, about 30 μl of the complexes were added directly to each well containing cells, mixed gently and incubated at 37 °C for 48 h.
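The per-well quantities above scale linearly, so a master mix is easy to compute. The short sketch below (Python, not from the paper) simply multiplies them out for n wells; the 10% pipetting excess and the dictionary layout are illustrative assumptions, not values stated by the authors.

```python
# Illustrative helper (not from the paper): scaling the per-well
# DNA-Lipofectamine mix described above to a master mix for n wells.
PER_WELL = {"plasmid_ug": 0.2, "optimem_dna_ul": 15,
            "lipofectamine_ul": 0.5, "optimem_lipo_ul": 15}

def master_mix(n_wells, excess=1.1):   # excess = assumed 10% pipetting loss
    return {k: round(v * n_wells * excess, 2) for k, v in PER_WELL.items()}

# master_mix(24) -> {'plasmid_ug': 5.28, 'optimem_dna_ul': 396.0,
#                    'lipofectamine_ul': 13.2, 'optimem_lipo_ul': 396.0}
```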
Electrophysiological recordings
Hippocampal neurons were recorded by whole-cell patch clamp during continuous perfusion of artificial cerebrospinal fluid (aCSF, 2 mL/min) at room temperature. Glass pipettes (non-filament, Garner Glass Company) were pulled (Model P-97, Sutter Instruments) to obtain electrodes with resistances between 5 and 7 MΩ when filled with intracellular solution (in mM: 130 K-gluconate, 10 KCl, 2 MgCl2, 10 HEPES, 10 EGTA, 2 Na2-ATP, 0.2 Na2-GTP; pH 7.2, adjusted with KOH). Electrophysiological data were recorded in the whole-cell configuration: a gigaohm seal was formed between the cell and the glass pipette, and brief suction was then used to break the cell membrane. Action potential (AP) threshold and firing frequency were recorded under current-clamp mode (MultiClamp 700A, Molecular Devices). Digitisation (DigiData 1322, Molecular Devices) was used for quick access to original traces, and all measures were analysed offline with pClamp 10.6 software.
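The paper analysed these recordings offline in pClamp; the sketch below is not that software but a minimal numpy illustration of one common offline convention, in which the AP threshold is taken as the voltage where dV/dt first exceeds a criterion (here 20 V/s) and the firing frequency is the spike count divided by the sweep duration. The array names and criterion values are assumptions.

```python
# Minimal sketch (assumed dV/dt criterion): AP threshold and firing rate.
import numpy as np

def ap_threshold_and_rate(v_mV, fs_hz, dvdt_crit=20.0, peak_crit=0.0):
    v_mV = np.asarray(v_mV, dtype=float)
    dvdt = np.gradient(v_mV) * fs_hz / 1000.0        # mV/sample -> V/s
    above = np.where(v_mV > peak_crit)[0]            # samples above 0 mV
    peaks = []
    # group contiguous supra-threshold samples into individual spikes
    for grp in np.split(above, np.where(np.diff(above) > 1)[0] + 1):
        if grp.size:
            peaks.append(grp[np.argmax(v_mV[grp])])
    thresholds = []
    for p in peaks:
        # walk back from the peak to the first sample where dV/dt crossed
        # the criterion; that voltage is taken as the AP threshold
        pre = np.where(dvdt[:p] < dvdt_crit)[0]
        if pre.size:
            thresholds.append(v_mV[pre[-1] + 1])
    rate_hz = len(peaks) / (len(v_mV) / fs_hz)
    return (float(np.mean(thresholds)) if thresholds else float("nan")), rate_hz
```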
Pentylenetetrazole (PTZ) kindling model
Mice were intraperitoneally injected with PTZ (35 mg/kg, Sigma, St. Louis, MO, USA) once every other day for a total of 14 injections (from day 1 to day 28). Control mice received vehicle (saline) injections. We observed the behaviour of each mouse for 1 h after PTZ injection to assess the resultant seizure severity, as judged using the Racine scale (1972) [22]. Seizure stages were classified as follows: stage 0, no response; stage I, ear and facial twitching; stage II, myoclonic jerks (MJs); stage III, clonic forelimb convulsions; stage IV, generalised clonic seizures with turning to a side position; and stage V, generalised tonic-clonic seizures (GTCSs) or death. Mice with at least three consecutive seizures of stage IV or V were regarded as fully kindled.
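Because the kindling endpoint is a simple rule over the Racine scores, it can be written down compactly. The sketch below (Python, not from the paper) encodes the staging table and the fully-kindled criterion of at least three consecutive stage-IV/V seizures.

```python
# Minimal sketch: Racine staging table and the fully-kindled criterion.
RACINE = {
    0: "no response",
    1: "ear and facial twitching",
    2: "myoclonic jerks",
    3: "clonic forelimb convulsions",
    4: "generalised clonic seizures with turning to a side position",
    5: "generalised tonic-clonic seizures or death",
}

def is_fully_kindled(scores, run_needed=3, min_stage=4):
    """scores: per-injection Racine stages in chronological order."""
    run = 0
    for s in scores:
        run = run + 1 if s >= min_stage else 0
        if run >= run_needed:
            return True
    return False

# is_fully_kindled([1, 2, 3, 4, 4, 5]) -> True
```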
Virus construction and preparation
11β-HSD1 recombinant AAV expression vectors, along with the transgene for green fluorescent protein (GFP), were manufactured by Shanghai GeneChem Co., Ltd. (Shanghai, China). A universal scrambled sequence with mismatched bases was used as the negative control. The control shRNA targeting sequence was 5′-CGC TGA GTA CTT CGA AAT GTC-3′; the sequence of 11β-HSD1-shRNA was 5′-CCT GGC CTA CTA CTA CTA T-3′. The shRNA target sequences were inserted into the GV478 lentivector. The GV478 lentivectors containing the shRNA sequences were transfected into 293T cells, and viral supernatants were harvested after 48 h. The final virus titer was 2.02 × 10¹² v.g./ml.
Intrahippocampal injections and grouping
Mice were anesthetised with 1% pentobarbital sodium (40 mg/kg, i.p.) and positioned in the stereotaxic apparatus. Mice were injected with AAV-11β-HSD1 (11β-HSD1-shRNA group) or AAV-blank (Con-shRNA group) bilaterally into the hippocampus as previously described [23]. Briefly, under stereotaxic guidance, a micropipette was gently positioned in the hippocampus (bregma, − 2.2 mm; lateral, ± 2.2 mm; ventral, − 1.8 mm), and the shRNA was slowly infused into the region. The micropipette was held in place for an additional 10 min before being slowly withdrawn, and the incision was closed with sutures. Body temperature was maintained at 37 °C using a heating pad throughout the procedure. To assess the efficiency of AAV-mediated knockdown, we randomly chose four mice from each group and sacrificed them on day 28 after AAV injection. The hippocampus was immediately isolated and prepared for either confocal scanning microscopy or western blot analysis. The remaining mice were intraperitoneally injected with PTZ from day 28 onwards to establish the epilepsy model described above.
To examine the expression of 11β-HSD1 in the hippocampus of epileptic mice, 20 mice were randomly subjected to intraperitoneal injection of PTZ or saline as a control, with 10 mice in each group. To investigate the effect of 11β-HSD1 knockdown on epileptogenesis and associated cognitive impairment, mice were divided into four groups: (1) the Control group (n = 10, mice received saline injection only); (2) the Epilepsy group (n = 10, mice received PTZ (35 mg/kg) injection only); (3) the Con-shRNA group (n = 10, mice received successive injections of AAV-blank vectors and PTZ); and (4) the 11β-HSD1-shRNA group (n = 10, mice received successive injections of AAV-11β-HSD1 and PTZ).
Morris water maze
After PTZ kindling, all mice were subjected to the Morris water maze (MWM) to evaluate spatial learning and memory. The maze consisted of a stainless-steel circular tank filled with water (22 °C), divided into four virtual quadrants, and placed in a room with external visual cues. A hidden platform was submerged in one of the quadrants (kept constant for each mouse), ~1.5 cm below the water's surface. A camera located above the centre of the maze relayed images to a videocassette recorder and an image analysis computer system (Dig-Behv, Jiliang Software Technology Company, Shanghai, China). In each trial, mice were placed in the pool and attempted to locate the hidden platform. The mice were allowed to swim for 60 s to find the hidden platform; if they failed to do so within this time, they were guided to the platform and placed on it for 15 s. Mice completed four trials per session with an intertrial interval of 30 min, with each trial starting at a different point, alternating among the four quadrants. Sessions were conducted on five consecutive days. For each trial, we recorded the time the mice spent locating the hidden platform as well as the swim length and speed. On the sixth day, the last day of the experiment, we removed the platform and allowed the mice to swim freely for 60 s. We then recorded, for each mouse, the number of crossings through the zone that had previously held the target platform.
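The two MWM readouts described above reduce to simple computations over the tracking data. The sketch below (Python, not the Dig-Behv software) averages escape latency over the four daily trials and counts probe-trial entries into the former platform zone; the data layout is a hypothetical assumption.

```python
# Minimal sketch (hypothetical data layout): MWM acquisition and probe metrics.
import numpy as np

acquisition = {            # training day -> escape latencies (s) of 4 trials
    1: [58, 60, 55, 52],
    2: [50, 47, 44, 41],   # ... days 3-5 omitted for brevity
}
daily_latency = {day: float(np.mean(t)) for day, t in acquisition.items()}

def count_crossings(in_zone):
    """in_zone: boolean per video frame, True while the mouse is inside the
    former platform zone; each entry into the zone counts as one crossing."""
    z = np.asarray(in_zone, dtype=bool)
    return int(np.sum(~z[:-1] & z[1:]) + z[0])

# count_crossings([False, True, True, False, True]) -> 2
```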
Immunofluorescence
On the 28th day after AAV injection, mice were anaesthetised and perfused through the left cardiac ventricle; the brains were removed and postfixed overnight in 4% paraformaldehyde. Frozen sections of 7 μm thickness were prepared using a cryostat microtome. Next, the tissue sections were permeabilised with 0.25% Triton X-100 in PBS for 10 min. The tissue sections were washed three times with PBS before being blocked with 1% BSA for 30 min. After being rinsed with PBS, the tissue sections were incubated with anti-GFAP (1:50) and anti-11β-HSD1 (1:50), or anti-NeuN (1:50) and anti-11β-HSD1 (1:50) antibodies at 4 °C overnight. After being washed three times with PBS, the samples were incubated with secondary antibodies for 1 h at room temperature. After staining the nuclei with DAPI, images were captured using a confocal laser scanning microscope (A1tR, Nikon, Tokyo, Japan).
Nissl staining
Nissl staining was employed to detect surviving neurons. Hippocampal samples from two mice were embedded in paraffin and cut into 7 μm sections, and the sections were dewaxed and rehydrated according to standard protocols. Next, the sections were stained in 1% cresyl violet at 50 °C for 5 min. After being rinsed with water, the sections were dehydrated in ethanol of increasing concentrations and mounted on slides. The stained sections were then viewed under a microscope (Olympus Corporation, Tokyo, Japan).
Immunohistochemistry
Fresh tissue was fixed in 4% paraformaldehyde and embedded in paraffin. Five-micron sections were obtained, deparaffinised, and rehydrated as previously described. After antigen retrieval, endogenous peroxidase was blocked using 3% hydrogen peroxide at room temperature for 10 min. Sections were blocked with 5% BSA and then incubated with primary antibody (11β-HSD1, 1:100, Abcam, Cambridge, United Kingdom) in a humid chamber at 4 °C overnight, followed by incubation with an HRP-conjugated secondary antibody (1:200) at room temperature for 1 h. After colour development through incubation with diaminobenzidine, the sections were counterstained with hematoxylin. The developed tissue sections were visualised under a microscope (Olympus Corporation).
Statistical analysis
Statistical analysis was conducted using GraphPad Prism version 8.0 for Windows (GraphPad Software, USA). All measurements of electrophysiological recordings were analysed offline using Clampfit software (v. 10.6, Molecular Devices). Electrophysiological parameters for active and passive membrane properties were obtained following previously described methodology [24]. Statistical differences were determined using Student's t-test or one-way analysis of variance with LSD post-hoc comparisons, as appropriate. A p-value of less than 0.05 was considered statistically significant. Data are expressed as the mean ± standard deviation (SD).
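For readers who want to reproduce the testing logic outside GraphPad, the sketch below (Python/scipy, not the authors' analysis code) applies the two tests named above; note that unadjusted pairwise t-tests after a significant ANOVA are a common approximation of Fisher's LSD, whose exact form uses the pooled ANOVA error term.

```python
# Minimal sketch: Student's t-test, one-way ANOVA, LSD-style post-hoc tests.
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: dict mapping group name -> list of measurements."""
    if len(groups) == 2:
        a, b = groups.values()
        return {"t-test": stats.ttest_ind(a, b)}
    f, p = stats.f_oneway(*groups.values())
    out = {"ANOVA": (f, p)}
    if p < alpha:   # LSD: unadjusted pairwise comparisons, only if ANOVA hits
        for (na, a), (nb, b) in combinations(groups.items(), 2):
            out[f"{na} vs {nb}"] = stats.ttest_ind(a, b)
    return out
```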
Neuronal expression of 11β-HSD1 is increased in the hippocampus of epilepsy mice
To explore the role of 11β-HSD1 in epilepsy, we first observed its expression in mice subjected to PTZ kindling. Immunofluorescence staining showed significantly increased expression of 11β-HSD1 (red) in the CA1, DG and CA3 regions of the hippocampus in the epilepsy mice compared with the control group. 11β-HSD1 mainly colocalised with NeuN (a neuron marker, blue) rather than GFAP (an astrocyte marker, green) (Fig. 1A-D), indicating that 11β-HSD1 was primarily expressed in neurons rather than astrocytes. Immunohistochemistry staining also showed that the expression of 11β-HSD1 was significantly increased in the CA1 and CA3 regions of the hippocampus in the epilepsy mice (p < 0.01, Fig. 1E-G).
Western blot analysis further confirmed that the expression of 11β-HSD1 protein was significantly increased in the hippocampus of PTZ-induced epilepsy mice compared to controls (p < 0.01, Fig. 1H, I).
Overexpression of 11β-HSD1 elevates neuronal excitability in vitro
To examine how increased expression of 11β-HSD1 may directly influence neuronal excitability, we used whole-cell patch clamping to examine the effect of 11β-HSD1 overexpression on the excitability of primary cultured hippocampal neurons, using an 11β-HSD1 overexpression plasmid (11β-HSD1+/+). First, we verified by western blot that the plasmid did indeed elevate the expression of 11β-HSD1 compared to control (Fig. 2A, B). As shown in Table 1, membrane resistance (Rm) and resting membrane potential (Vm) did not vary significantly between the two groups, although Vm displayed a tendency to increase in 11β-HSD1+/+ treated neurons. However, the threshold voltage to trigger an action potential was significantly decreased, while the frequency of action potentials increased, in the 11β-HSD1+/+ neurons compared to controls (Fig. 2C-F). These results indicate that 11β-HSD1 overexpression increases neuronal excitability and thus might contribute to epileptogenesis.
11β-HSD1 knockdown exerts anticonvulsant actions in PTZ-induced epilepsy mice
Thereafter, in order to test whether downregulation of 11β-HSD1 has an anti-epileptic effect, we developed a hippocampal 11β-HSD1-knockdown mouse using AAV technology prior to PTZ kindling. On the 28th day after AAV injection, the AAV was expressed throughout the hippocampus (Fig. 3A), and western blotting showed that the expression of 11β-HSD1 was significantly decreased compared to the control group (p < 0.01, Fig. 3B, C), indicating successful hippocampal knockdown of 11β-HSD1 protein.
After developing this model, we first investigated the anticonvulsant effect of 11β-HSD1 knockdown. Seizure severity scores assessed by the Racine scale, and the latency to achieve full kindling, were recorded. The mean seizure severity scores and the latency were not significantly different between the epilepsy group and the Con-shRNA group, indicating that the virus vector exerts no effect on PTZ kindling. However, there was a significant increase in the latency to full kindling in the 11β-HSD1-shRNA group compared to the Con-shRNA group (p = 0.021, Fig. 4B). Although the mean seizure severity score did not differ significantly during days 1-27 of kindling, 11β-HSD1-shRNA-treated mice displayed a trend towards reduced seizure severity compared to Con-shRNA, which reached statistical significance on the last day (p < 0.01, Fig. 4A). Following bilateral hippocampal injection, two mice injected with AAV (one with 11β-HSD1-shRNA and one with Con-shRNA) failed to survive, presumably because of an excessive anaesthetic dose. Two additional mice were added as replacements to maintain the same total number of animals in the experimental groups. Following intraperitoneal injections of PTZ, 80% of the mice in the epilepsy group without AAV injection and in the Con-shRNA group survived, while 90% of the mice in the Control group (without PTZ or AAV injection) and in the 11β-HSD1-shRNA group were alive. This left eight animals each in the epilepsy and Con-shRNA groups, and nine each in the Control and 11β-HSD1-shRNA groups.
11β-HSD1 knockdown alleviates spatial learning and memory deficits in PTZ-induced epilepsy mice
After completion of PTZ kindling, we performed the MWM test to assess the effect of 11β-HSD1 knockdown on spatial learning and memory. In the acquisition trials, the escape latency of all groups gradually decreased over the five training days. However, the epilepsy group showed significantly increased escape latency compared to the control group, while there was no statistical difference between the epilepsy group and the Con-shRNA group. Interestingly, the escape latency was significantly decreased in the 11β-HSD1-shRNA group compared to the Con-shRNA group (p = 0.043 for the 2nd day, p < 0.01 for the 4th and 5th days; Fig. 4C). In the probe trial, mice in the epilepsy group and the Con-shRNA group showed a decrease in the number of crossings of the target quadrant and in swimming velocity compared to the control group and the 11β-HSD1-shRNA group (p < 0.05, Fig. 4D-F). These data indicate that knockdown of 11β-HSD1 alleviated the impairment of spatial learning and memory induced by PTZ kindling.
11β-HSD1 knockdown attenuates hippocampal neuronal damage
The hippocampus is known as a critical structure for spatial learning and memory, so we subsequently explored the effects of 11β-HSD1 knockdown on hippocampal neuronal damage induced by PTZ kindling using Nissl staining. Hippocampal CA1 and DG areas of mice in the epilepsy group exhibited considerable cell loss compared to the control group. However, the 11β-HSD1-shRNA group showed significantly attenuated hippocampal neuronal damage compared to the Con-shRNA group (Fig. 5A). The above data showed that knockdown of 11β-HSD1 ameliorated the hippocampal neuronal damage induced by PTZ kindling.
11β-HSD1 knockdown inhibited hippocampal apoptosis in the PTZ-induced epilepsy mice
As apoptosis is known to play a role in neuronal damage associated with epilepsy [3], we then examined molecular markers of apoptosis in the hippocampus using western blot. The epilepsy group displayed a significantly decreased ratio of Bcl-2/Bax (p < 0.05, Fig. 5B, D) and increased expression of the apoptosis-related protein cleaved caspase-3, compared to the control group (p < 0.05, Fig. 5C, D). There were no statistical differences between the epilepsy group and the Con-shRNA group.
In contrast, the 11β-HSD1-shRNA group presented significantly increased Bcl-2/Bax ratio and decreased cleaved caspase-3 level compared to the Con-shRNA group (p < 0.05, Fig. 5B-D). The above data indicated that knockdown of 11β-HSD1 inhibited hippocampal apoptosis caused by PTZ kindling.
Discussion
In the present study, we found that 11β-HSD1 was mainly expressed in neurons, and its expression was significantly increased in a mouse epilepsy model induced by PTZ. Overexpression of 11β-HSD1 significantly decreased the threshold voltage and increased the frequency of AP firing in primary cultured hippocampal neurons. Local knockdown of 11β-HSD1 in the hippocampus not only reduced seizure severity and kindling epileptogenesis but also alleviated the cognitive impairment of epileptic mice, and this was accompanied by reduced apoptosis and neuronal loss in the hippocampus of PTZ kindling epilepsy mice.

HPA axis activation is involved in the regulation of a variety of life activities, and it is well recognised that epilepsy is closely related to the HPA axis. Human epileptic seizures, especially generalised tonic-clonic seizures and complex partial seizures, which are common in temporal lobe epilepsy (TLE), result in increased activation of the HPA axis [25-27]. Similarly, in animals, even a single evoked temporal lobe seizure in healthy (non-epileptic) rodents can activate the HPA axis, significantly increasing corticosterone levels [28,29]. However, although it is a critical enzyme of glucocorticoid metabolism, the expression pattern of 11β-HSD1 in epilepsy had, to date, not been explored. Since 11β-HSD1 is highly expressed in the hippocampus, a critical brain region involved in epilepsy, this study first examined the expression of 11β-HSD1 in the hippocampus of epilepsy mice. We found that 11β-HSD1 was mainly expressed in neurons rather than astrocytes, and that its expression was significantly upregulated in the hippocampus of PTZ-induced epilepsy mice, suggesting its involvement in epileptogenesis. To test this hypothesis, we developed a hippocampal 11β-HSD1 knockdown model and then assessed its impact on our epilepsy model. We found that knockdown of hippocampal 11β-HSD1 significantly decreased the severity of seizures and increased the latency to complete kindling induced by PTZ.

[Fig. 4 caption: 11β-HSD1 knockdown alleviated spatial learning and memory deficits in PTZ-induced epilepsy mice. A, B 11β-HSD1 knockdown significantly decreased the seizure score and prolonged the number of days (latent period) required to reach complete kindling (n = 8 for the epilepsy and Con-shRNA groups; n = 9 for the Control and 11β-HSD1-shRNA groups). C-F Escape latency, number of crossings of the target quadrant, swimming length and swimming velocity. *P < 0.05, epilepsy group vs control group, Con-shRNA group vs 11β-HSD1-shRNA group. Bars indicate the mean ± SD.]

11β-HSD1 affects glucocorticoid metabolism by catalysing the conversion of circulating inert glucocorticoid to active hormone, so increased expression of 11β-HSD1 would be anticipated to increase hippocampal steroid activity, whereas knockdown of 11β-HSD1 should inhibit glucocorticoid activity. As hyperactivation of glucocorticoids can reduce seizure threshold and increase neuronal excitability in TLE [7], and potentiate neuronal injury in the hippocampus of mouse epilepsy models [30-32], knockdown of 11β-HSD1 would be expected to decrease epileptic activity. These results are also consistent with previous studies of controlling epilepsy through intracranial glucocorticoid intervention [33,34] and support the involvement of 11β-HSD1 in epileptogenesis.
Cognitive impairment is one of the major comorbidities of epilepsy [3]. Studies have shown that elevated hippocampal and neocortical 11β-HSD1 is observed during aging and causes cognitive decline, and that experimental 11β-HSD1 deficiency prevents the emergence of cognitive defects associated with aging [35]. Since the expression of 11β-HSD1 was increased in the brain of epileptic mice, this increase may negatively impact their cognition. In the MWM test, we found that the escape latency of PTZ-kindled mice was longer compared with the control group. Also, after the hidden platform was removed, the number of crossings of the target quadrant was decreased in PTZ-kindled mice, indicating that PTZ kindling impaired spatial learning and memory, which is consistent with previous studies [36,37]. However, hippocampal knockdown of 11β-HSD1 was able to rescue these cognitive deficits. A previous study found that elevated 11β-HSD1 in adults was associated with an increased incidence of brain atrophy, leading to cognitive dysfunction [38]. Furthermore, in two randomised, double-blind, placebo-controlled crossover studies, short-term administration of a nonselective 11β-HSD1 inhibitor, carbenoxolone, improved verbal fluency and memory in a small cohort of adults with type 2 diabetes [15]. Our results align with these prior studies and suggest that inhibition of 11β-HSD1 is a promising target for treating both epilepsy and the associated cognitive impairment.

Regarding the pathological mechanisms, apoptosis plays a critical role in the pathogenesis of epilepsy. Repetitive epileptic seizures lead to neuronal apoptosis [37], and neuronal apoptosis aggravates seizures [39]. In addition, hippocampal neuronal apoptosis has been shown to contribute to impaired hippocampus-dependent cognitive function [40]. Therefore, researchers have attempted to use pharmacotherapy targeting neuronal apoptosis to improve cognitive impairment, for example that caused by vascular ischemia or epilepsy [37,41]. Prior studies have shown that 11β-HSD1 overexpression causes apoptosis in insulinoma cells [20], and that inhibiting the activity of 11β-HSD1 can alleviate apoptosis in spleen cells [42]. Thus, we proposed that inhibition of 11β-HSD1 activity might protect against the hippocampal neuronal damage of PTZ-kindled epileptic mice by reducing apoptosis. To test this hypothesis, we examined the neuronal damage and the expression of apoptosis-related proteins. We found that knockdown of 11β-HSD1 reduced neuronal injury in the hippocampal CA1 and DG areas induced by PTZ kindling, preventing the increased expression of the pro-apoptosis proteins Bax and cleaved caspase-3 and the reduced expression of the anti-apoptosis protein Bcl-2 seen in PTZ-kindled epilepsy mice. These data suggest that inhibition of 11β-HSD1 activity can reduce hippocampal apoptosis in epilepsy and thus exert a neuroprotective effect against neuronal damage induced by epileptic seizures, which may subsequently contribute to the improvement of both epilepsy and the associated cognitive impairment.
Conclusions
In this study, we demonstrated that 11β-HSD1 was expressed in hippocampal neurons and upregulated in the mouse epilepsy model induced by PTZ kindling. Plasmid-induced overexpression of 11β-HSD1 increased the excitability of primary cultured hippocampal neurons, and local knockdown of hippocampal 11β-HSD1 alleviated seizures and the associated cognitive impairment caused by PTZ, attenuated hippocampal neuronal damage and inhibited apoptotic cell death. Our results indicate that inhibition of 11β-HSD1 may be a promising strategy to treat both epilepsy and concomitant cognitive impairment. | 2022-09-06T13:29:27.103Z | 2022-09-05T00:00:00.000 | {
"year": 2022,
"sha1": "32e079010cf28b6137b18fd0651606f9be2ead85",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1d28f620aed59eb4c3db82f73ea1d00a0fb7bd94",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18663911 | pes2o/s2orc | v3-fos-license | Estimated physical activity in Bavaria, Germany, and its implications for obesity risk: Results from the BVS-II Study
Background
Adequate physical activity (PA) is considered a key factor in the fight against the obesity epidemic. A detailed description of actual PA and its components in the population is therefore necessary. Additionally, this study aims to investigate the association between PA and obesity risk in a representative population sample in Bavaria, Germany.
Methods
Data from 893 participants (age 13-80 years) of the Bavarian Food Consumption Survey II (BVS II) were used. For each participant, three computer-based 24-hour recalls were conducted by telephone, assessing type and duration of PA in the domains occupation, sports, other strenuous leisure time activities (of mostly moderate intensity), as well as TV/PC use in leisure time and duration of sleeping. After assigning metabolic equivalents (METs) to each activity, estimates of energy expenditure (MET*h) and total daily PA level (PALest.) were calculated. In a subgroup of adults (n = 568) with anthropometric measurements, logistic regression models were used to quantify the impact of PA on obesity risk.
Results
Estimated average PA in women and men was 38.5 ± 5.0 and 40.6 ± 9.3 MET*h/d, respectively, corresponding to PALest. values of 1.66 ± 0.22 and 1.75 ± 0.40. Obese subjects showed lower energy expenditure in the categories sports, occupation, and sleeping, while their time spent with TV/PC during leisure time was highest. This is confirmed in logistic regression analyses revealing a statistically significant association between obesity and TV/PC use during leisure time, while sports activity was inversely related to obesity risk. Overall, less than one-third of the study participants reached the recommended PAL of ≥ 1.75. Subjects within the recommended range of PA had an about 60% (odds ratio = 0.43; 95% CI: 0.21-0.85) reduced risk of obesity as compared to inactive subjects with a PALest. < 1.5.
Conclusion
Based on the results of short-term PA patterns, a major part of the Bavarian adult population does not reach the recommendations (PAL > 1.75; moderate PA of > 30 min/d). Despite the limitations of the study design, the existing associations between sports activity, TV/PC use and obesity risk in this population give further support to the recommendation of increasing sports activity and reducing sedentary behaviour in order to prevent rising rates of obesity.
Background
Globally, there are more than 1 billion overweight adults, at least 300 million of them obese. These alarming facts published by the World Health Organisation (WHO) [1] demonstrate that obesity has reached epidemic dimensions in developed as well as in developing countries. Consequences for health range from several non-fatal but debilitating disorders that reduce quality of life to an increased risk of premature death because of serious chronic diseases. Besides genetic factors and food consumption patterns exceeding the individual energy need, a sedentary lifestyle with lack of physical activity (PA) is one of the key causes [2]. The relationship between obesity, PA and chronic diseases is close, and several epidemiological studies have shown that regular PA can protect against obesity and related chronic diseases, such as type-2 diabetes, cardiovascular disease, hypertension, stroke, cancers of different sites and osteoporosis, and can contribute to maintaining mental health [1,3]. Thus, PA promotes health and well-being and also has enormous economic benefits considering the health care costs that can be attributed to obesity. However, the question of the adequate dose of exercise is still a matter of debate [4-6].
In order to provide a solid basis for obesity prevention strategies, detailed knowledge of PA patterns in the target population is necessary. Therefore, we assessed short-term PA and sedentary behaviour of the Bavarian population by means of three unannounced 24-h recalls. The different activity domains contributing to total daily energy expenditure are described, and their impact on obesity risk is quantified. Additionally, PA estimates in the Bavarian population are compared with current recommendations to prevent obesity and promote well-being and health.
Study Design
The Bavarian Nutrition Survey II (BVS II) is designed as a representative study of the Bavarian population to investigate dietary habits and PA. From September 2002 until June 2003, 1050 subjects aged 13-80 years were recruited by a three-stage random route sampling procedure from the German-speaking Bavarian population. This recruitment procedure included the selection of 42 communities as so-called sampling points (stratified by county and community characteristics), a random walk (every third household) with a given start address, and a random selection of one household member who met the selection criteria. At baseline, subjects' characteristics, lifestyle, socio-economic and health status were assessed by means of a computerized face-to-face interview. Within the following two weeks, participants were contacted by telephone on two workdays and one weekend day for recalling their dietary intake as well as PA on the day before. Within six weeks after recruitment, all adult study subjects (≥ 18 years) were invited to their nearest health office for blood sampling and standardized anthropometric measurements.
Participation rate in the whole study was 71% (n = 1050). All adults who completed at least one 24-h dietary recall (n = 879) were invited to the health offices; blood samples and anthropometric measurements could be obtained from 65% (n = 568) of those approached. For the present evaluation, 893 subjects who completed at least two 24-h activity recalls were included. Within this group, standardized anthropometric measurements were available from 552 subjects (61.8%). All participants gave their written informed consent. The study was approved by the local ethical committee.
Assessment of Physical Activity
According to a method described and validated by Matthews et al. [14], information on the short-term PA of each subject was collected by means of three unannounced computer-assisted telephone interviews. Trained interviewers asked the study participants to recall the exact type of, and time spent in, activities of the following five categories during the last 24 hours: occupation, sports, other strenuous leisure time activities (LTPAstrenuous), TV or PC use in leisure time, and sleeping. In the categories sports and LTPAstrenuous, the interviewers used a list of common activities displayed on the screen in order to give examples to the participants and to speed up the interview process. Different types of walking (including walking for pleasure) were attributed to the category 'sports' since this type of activity is very important at older age; the category LTPAstrenuous included mainly leisure time PAs of moderate and vigorous intensity, such as different types of gardening, homemaking and household activities, or child care. Although the wording of the question ('strenuous') may imply vigorous activities only, we actually assessed mainly activities of moderate intensity by means of this question (see results).
Based on the results of their validation study, Matthews et al. [14] concluded that a series of three unannounced 24-h PA recalls provides an assessment of PA comparable to other short-term PA assessments that utilize activity monitors (Actillume monitoring) or the Baecke questionnaire. Deattenuated Pearson correlation coefficients between results from the 24-h recalls and the Baecke questionnaire ranged from 0.34 to 0.68 (p < 0.01). A correlation coefficient of 0.64 (p < 0.01) was reported for the association between 24-h recall results (total MET*h/d) and the Actillume measures (counts·min⁻¹·d⁻¹). They assessed four intensities of activity (light, moderate, vigorous, and very vigorous) in each of three activity domains (household, occupational, leisure-time) as well as sleeping time, and assigned 1.5 MET for light, 4.0 MET for moderate, and 6.0 MET for vigorous activities [14]. In our study, we assessed the time and type of activity spent in the different PA categories more precisely and assigned individual MET values; however, except for TV/PC use, we did not actively assess the time spent with light activities during leisure time.
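For reference, "deattenuated" conventionally denotes Spearman's correction for attenuation; the formula below is not given in the paper and is stated here only as the standard definition:

r_deattenuated = r_xy / sqrt(r_xx · r_yy)

where r_xy is the observed correlation between the two instruments and r_xx and r_yy are their respective reliabilities (e.g., test-retest correlations).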
As described in the compendium of physical activities by Ainsworth et al. [7,8], multiples of the metabolic equivalent (MET) were used to estimate the relative intensity of each reported activity, with one MET equal to the standard resting energy expenditure (roughly 3.5 ml of oxygen consumed per kilogram of body weight per minute) for the average adult. According to the assigned MET values, all self-reported activities were classified as light (< 3 METs), moderate (3-6 METs) or vigorous (> 6 METs) [4,8].
The MET values of occupational activities were determined by a combination of self-reported work intensity (ranging from mainly sitting to laborious physical workload, or actually not working) and the respective job title. When a description of activities was missing or the provided information was unclear, standardized mean MET values were assigned. In particular, if job activities of students and retired persons were reported that could not be classified, a MET value of 1.85, representing light work, was assumed to be applicable. The type and intensity of the activity of homemakers was also difficult to evaluate; only for this group, all reported strenuous activities belonging to the area of household activities were considered as being included in occupational household work and, therefore, were not attributed to LTPAstrenuous. To acknowledge homemakers' activities as a full occupation, we filled up the reported working time to at least 8 hours of work per weekday for all homemakers under 65 years. An intensity level of 2.5 METs, representing "multiple household tasks all at once, light effort" [8], was assigned.
Energy expenditure estimates (MET*h) independent of body weight were calculated by multiplying the reported duration of each activity (h) by the respective intensity (MET) [7,8]. By summing up all activities, participants' daily MET*h were obtained for the different activity domains, e.g. sports-MET*h per day. In order to estimate a total daily PA score, it was necessary to introduce a new activity domain, called non-reported PA during leisure time (LTPAnon-reported), according to a method described by Norman et al. [9,10]. The difference between 24 hours per day and the total duration of self-reported activity/inactivity was considered as LTPAnon-reported. These unknown activities were multiplied by an estimated MET value of 1.75, which is between the suggested values of 1.5 MET [14] and 2.0 MET [9,10]. This intensity factor corresponds to the mean of sitting (1.5 MET) and light home and self-care activities (2.0 MET) [7,8]. Since our study participants also mentioned several light activities under the category LTPAstrenuous – which were multiplied with the most exact MET value given by Ainsworth et al. – we tried not to overestimate the remaining non-reported time.
The single recalls were weighted for weekday or weekend day to calculate a subject's total daily short-term PA and its components. We also estimated each participant's short-term PA level (PALest.) by dividing the individual total daily PA score (MET*h/d, equivalent to kcal/(kg body weight*d), since 1 MET ≈ 1 kcal/(kg b.w.*h) [7,8]) by the minimum score of 23.2 MET*h/d (assuming 8 hours of sleep × 0.9 MET and 16 h being awake but resting × 1.0 MET) [11]. Since 23.2 MET*h should reflect resting metabolic rate (RMR) expressed in units of MET*h, the resulting ratio gives the multiple of RMR [11], similar to the PAL value. However, it has to be emphasized that the calculated PALest. values are of limited precision as compared to PAL values derived mainly by means of the doubly labelled water method.
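The scoring just described is straightforward to compute. The following is a minimal Python sketch (not the authors' code) of the per-day MET*h score with the LTPAnon-reported filler and the PALest. ratio; the weekday/weekend weights of 5/7 and 2/7 and the example MET values are illustrative assumptions, while the 23.2 MET*h denominator and the 1.75 MET filler come from the text above.

```python
# Minimal sketch (assumptions flagged above): daily MET*h score and PALest.
RMR_MET_H = 23.2        # 8 h sleep x 0.9 MET + 16 h awake resting x 1.0 MET
NONREPORTED_MET = 1.75  # intensity assigned to non-reported leisure time

def daily_score(activities):
    """activities: list of (duration_h, met) tuples reported for one day."""
    reported_h = sum(d for d, _ in activities)
    met_h = sum(d * m for d, m in activities)
    met_h += (24.0 - reported_h) * NONREPORTED_MET   # LTPA_non-reported
    return met_h

def pal_est(weekday_scores, weekend_scores):
    mean_wd = sum(weekday_scores) / len(weekday_scores)
    mean_we = sum(weekend_scores) / len(weekend_scores)
    total = (5 * mean_wd + 2 * mean_we) / 7.0        # assumed 5/7-2/7 weighting
    return total / RMR_MET_H

# Example day: 8 h sleep (0.9 MET), 8 h light work (1.85 MET),
# 0.5 h walking (3.5 MET); the remaining 7.5 h count as non-reported time.
day = [(8, 0.9), (8, 1.85), (0.5, 3.5)]
# daily_score(day) -> 36.875 MET*h; pal_est([36.875]*2, [36.875]) -> ~1.59
```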
Case definition
To assess the prevalence of overweight and obesity, the subjects' body mass index (BMI) was calculated as measured weight divided by the square of measured height (kg/m²). Self-reported figures were used for subjects who did not undergo anthropometric measurements. Following the WHO guidelines [12], participants were classified into six categories: underweight (< 18.5 kg/m²), normal weight (18.5 to < 25 kg/m²), overweight (25 to < 30 kg/m²), obese grade I (30 to < 35 kg/m²), obese grade II (35 to < 40 kg/m²) and obese grade III (≥ 40 kg/m²). All obese subjects (n = 144) with BMI ≥ 30 kg/m² were considered as cases, and all other study participants served as controls in the logistic regression analyses.
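As a concrete illustration of the case definition, here is a small Python sketch (not from the paper) implementing the WHO cut-points exactly as listed above; the example weight and height are hypothetical.

```python
# Minimal sketch: WHO BMI classification used for the case definition.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def who_category(b):
    for upper, label in [(18.5, "underweight"), (25, "normal weight"),
                         (30, "overweight"), (35, "obese grade I"),
                         (40, "obese grade II")]:
        if b < upper:
            return label
    return "obese grade III"

def is_case(b):
    return b >= 30   # obese subjects served as cases

# who_category(bmi(95, 1.75)) -> 'obese grade I'  (BMI ~ 31.0)
```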
Statistical Analysis
The descriptive results given here were weighted to correct for the deviation of the study group from the distribution of gender, age, and living area in the underlying Bavarian population. Since the PA data were not normally distributed, medians and interquartile ranges are presented. Comparisons between gender and BMI groups were made by means of the Mann-Whitney U test. In order to examine the association between PA and obesity risk, logistic regression models were used. Risk calculations were conducted only for the subgroup with standardized measurements of weight and height. Additionally, subjects with an energy intake below 80% of the estimated basal metabolic rate (BMR, calculated by WHO equations [13]) were excluded from risk estimations because of an increased likelihood of misreporting of PA. Thus, risk evaluation was conducted in a subgroup of 507 subjects. The activity estimates (MET*h/d) for each activity domain as well as the total daily activity (MET*h/d and PALest., respectively) were divided into four groups according to the distribution in the entire study population or by predefined cut points. Odds ratios (OR) and corresponding 95% confidence intervals (CI) are given for models adjusted for sex, age (< 18 y, 18-<30 y, 30-<40 y, 40-<50 y, 50-<65 y, ≥ 65 y), energy intake (kcal/100/d), smoking (never, former, current) and socio-economic status (low, low-medium, medium, medium-high, high). Categorization of socio-economic status was based on three characteristics scored on a point scale: household net income, educational level of the interviewee, and occupational position of the principal earner. Tests for trend were calculated using the quartile-based PA scores as a continuous variable as well as using the continuous variables themselves (in MET*h/d). All statistical analyses were performed by means of the SPSS 11.0 software package (SPSS Inc., Chicago, USA).
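To make the modelling step concrete, the sketch below (Python/statsmodels, not the authors' SPSS analysis) fits an adjusted logistic model on quartiles of one activity domain and converts coefficients to ORs with 95% CIs. The data frame `df` and its column names are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data layout): quartile-based logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def quartile_or(df):
    df = df.copy()
    # quartiles of e.g. sports activity (MET*h/d); Q1 is the reference
    df["q"] = pd.qcut(df["sports_meth"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    model = smf.logit(
        "obese ~ C(q) + C(sex) + C(age_grp) + energy + C(smoking) + C(ses)",
        data=df,                      # 'obese' must be coded 0/1
    ).fit(disp=0)
    res = np.exp(model.conf_int())    # exponentiate CI bounds
    res.columns = ["CI 2.5%", "CI 97.5%"]
    res.insert(0, "OR", np.exp(model.params))
    return res
```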
Baseline characteristics and prevalence of obesity
Baseline characteristics of the study participants are summarized in Table 1. Significant gender differences existed for BMI groups, socioeconomic status, employment level, smoking habits and marital status; anthropometric measures as well as basal metabolic rate (BMR) and energy intake also differed by gender. The proportion of obese subjects in the whole sample (n = 893) was estimated at 17.1% in women and 16.1% in men. Excluding subjects with self-reported weight and height, the prevalence of obesity was even higher, at 19.6% in women and 20.4% in men (overall 20.0%).
Estimated Physical Activity
Estimates of PA by activity domain (MET*h/d) and intensity are given in Table 2, including also the corresponding durations of activities (h/d). Men showed significantly higher values than women in total scores of sports activity, TV/PC use and total daily activity, while women reported a significantly longer sleeping time per day. This is also reflected in the results by intensity subgroup, with men spending more time in PA of moderate or vigorous intensity. The most important intensity subgroup was occupational PA of light intensity, showing the highest mean energy expenditure for both men and women. Non-reported time of PA in the 24-hour recalls was higher in women than in men. Total daily PA was estimated at 37.35 (5.58) MET*h/d (median, interquartile range).

Table 3 shows the results for the estimated PA by type and intensity level in different BMI categories. Obese subjects reported less participation in occupational (women) and sports (men) activities but performed more LTPAstrenuous than non-obese women and men. In contrast, the time spent with TV/PC use during leisure time was highest in overweight and obese subjects. Sleeping time was shortest among obese women, while underweight subjects slept most. Total daily activity scores were lowest in obese and underweight subjects, although the difference between obese and non-obese subjects did not reach statistical significance.
Physical Activity and Risk of Obesity
Risk estimations in the subgroup with measured weight and height, and after exclusion of suspected misreporters, revealed a significant inverse association between obesity and sports activity. A recommendation of overall PA of at least 30 min/d of higher than light intensity may not work in this population; such recommendations should focus on sports activities only, a category that also includes walking.
Discussion
The results of our investigation revealed that higher PA in the category sports and less use of TV/PC during leisure time were strongly and significantly associated with a decreased risk of obesity. Figure 1 shows the mean BMI of subjects with respect to categories of sports activity and TV/PC use in leisure time. The mean BMI in the groups with higher sports activity and less time spent on TV/PC is distinctly lower than in subjects who were not active in sports and spent a long time watching TV or using a PC during leisure time. In general, sports are mostly of moderate or vigorous intensity and are often executed in one bout without long interruptions, especially endurance activities like walking, running or cycling. These sports activities demanding high energy costs were most popular among active subjects in the present study. Even people of older age (≥ 65 years) were still active in endurance sports by engaging in walking, although PA declined with rising age. In comparison, obese subjects are more likely to be engaged in activities of moderate intensity but hardly perform activities of high intensity, such as many sports [28]. This contrasts with TV/PC use, which is associated with a very low energy expenditure. With increasing sedentary behaviour, physical activity decreases [29]; moreover, television watching in particular is associated with snacking, leading to high caloric intakes [30].
Similar associations to those reported here were found in other studies. A European study [31] investigating PA patterns in samples from 15 member states found comparable results. Similar associations between sedentary lifestyles, mainly represented by TV watching, and PA have been shown by several previous studies [29,32-37].
In the present study, we could not find distinct associations between the risk of obesity and PA in activity domains other than sports and TV/PC use. In contrast to the results reported by King et al. [38], occupational PA was unrelated to obesity risk. For unemployed subjects, the lowest though not significant point estimate was found; this finding is possibly due to the fact that students and those who were retired or unemployed had more time left for sports or other recreational activities. In several [29,40] but not all studies [39], an inverse association between occupational activity and leisure time PA was observed. The question on strenuous activities in leisure time mainly assessed moderate physical activities, which contributed on average only about 3 to 4% of total daily energy expenditure. Risk estimates for obesity decreased with increasing activity in LTPAstrenuous but did not reach statistical significance. This result may be affected by recall bias, since obese subjects may have reported more activities in this category (Tab. 3) because they rated their activities as more demanding.
Two studies reported an inverse association between sleep duration and obesity [29,41]. In our study, risk estimates of obesity decreased with increasing time spent sleeping, except for the group with > 8 MET*h/d spent sleeping; however, the results were not statistically significant.
In the present study, non-reported activities were also not associated with obesity risk. Our questionnaire did not assess light-intensity activities of everyday life (e.g. eating, car driving, self-care, etc.). Consequently, the high proportion of time attributed to this PA domain – about half of the estimated total daily energy expenditure – was to be expected. On the other hand, this result supports the view that only a small part of daily energy expenditure is spent in demanding activities, which should be the best remembered [42].
The estimated level of total physical activity in terms of MET*h/d in the present study population was very similar to that reported in the NHAPS Study [43], one of the few studies assessing 24-h PA with computer-assisted telephone interviews. In line with other studies [10,31,32], total PA of the Bavarian subjects was found to be inversely associated with obesity. Subjects with a PALest. value ≥ 1.75 (Q3 + Q4) had a 57% reduced risk as compared to subjects with a PALest. value < 1.5. These findings fit with the WHO recommendation that a PAL of 1.75 or more is necessary to avoid excessive weight gain, a recommendation based on a review of 40 international studies [2]. Among normal-weight subjects, 35.1% met the recommendation, which is still low but clearly higher than the 22.8% of obese subjects (Table 5). Overall, this WHO goal was reached by only 31.4% of the study participants.
The public health recommendation from the Centers for Disease Control and Prevention (CDC) and the American College of Sports Medicine (ACSM) of at least 30 minutes of moderate PA per day [4] was met by a total of 55.9%. An identical rate was even achieved by obese subjects, which might be astonishing at first sight; however, if the recommendation is considered in terms of sports activities only, the percentage of sufficiently active obese subjects drops to only 20.7%. Taking into account that the recommendation of 30 minutes of moderate PA per day has minimum character in the context of weight management, and remembering the stricter guideline of 60 minutes stated by the Institute of Medicine (IOM) [5], the data would turn out even worse. Nevertheless, considering the diverging methods of assessment and PA recommendations, these results are quite comparable with other studies. Brown and Baumann [18] found that the percentage of subjects meeting the current CDC/ACSM recommendation in two Australian surveys ranged between 51.6% and 60.2%. Weyer et al. [44] observed that 61.5% of 109 obese Germans did not meet any recommendation. This is less than the 87% of 7124 adults who were not adequately active in the German General Health Survey in 1998 [45].
The obesity rate in this Bavarian sample is higher than in a recent survey published by the Federal Statistical Office of Germany [23] in 2004, but comparable to other German studies conducted since 1998. Bramlage et al. [24] reported on the prevalence of obesity, comparing rates from the German "Hypertension and Diabetes Risk Screening and Awareness" (HYDRA) study in 2001 (19.5% in men, 20.3% in women) with the German General Health Survey (GHS) 1998 data (18.8% in men, 21.7% in women). In comparison to the results of a former representative study in the Bavarian population in 1995 (BVS I), the prevalence of obesity has increased in recent years, as also found for other Western countries [25-27].
The information about the participants' short-term PA was collected by means of three 24-hour telephone recalls, a method validated by Matthews et al. [14] (see methods section). Other methods like behavioural observation, use of motion sensors, physiological markers (e.g. heart rate) and calorimetry are less subject to bias in the assessment of mainly long-term PA and energy expenditure; especially the doubly labelled water method is regarded as the 'gold standard' [15]. However, self-reported data obtained by means of diaries or recalls are most practical in large-scale population-based studies because of relatively low costs and low burden for the participants [16]. In the present study, the type and duration of PA were assessed, but not the corresponding intensities (except for occupational PA); instead, MET values were assigned to each specific activity. Consequently, some degree of error may have been introduced because of unclear descriptions, misunderstanding or misidentification. For occupational PA, consideration of both the self-reported job title and the self-rated work intensity at least reduced the great variability of subjects' individual performances within the same job title [17]. However, using mean MET values to express the intensity of a PA assumes that there are no individual differences in performing the same types of activities, an assumption which in practice does not hold true [7,8]. We further expressed PA in terms of MET*h/d and MET*h/24 h but avoided expressing PA in terms of 'kcal', because the latter would have been strongly affected by body weight [7,8], thus resulting in misclassification of individuals [18]. Potential bias must also be considered due to typical problems of self-report. First, the BMI variable might be affected by overestimation of height and underestimation of weight [19,20] or, in rare cases, by high muscle mass [21]; using anthropometric measurements, valid BMI data could be obtained from a substantial part of the study subjects. Second, self-reported PA may be overestimated in order to create a more ideal picture of oneself [22]. And third, the quality of the survey is highly dependent on the respondents' memory, a source of bias that should be minimized by the short recall period of 24 hours [14]; this is one of the major strengths of the current study, besides its representativeness and its relatively large sample size.
Conclusion
The overwhelming majority of the Bavarian population did not reach current PA recommendations, and subjects meeting the recommendations showed a significantly lower risk of obesity. Our results strengthen the case for promoting sports activity at the expense of TV/PC use in leisure time in order to counterbalance the rising prevalence of obesity in the Bavarian population. Other PA domains like occupation, LTPAstrenuous, sleeping and LTPAnon-reported showed weaker or no associations with obesity risk. However, due to the cross-sectional study design, no conclusion on causality can be drawn. Especially for the category sports activity, it remains unclear whether people are obese due to low PA or whether low PA is a consequence of their high body fat content. With respect to weight development over time, probably both views are correct.
"year": 2005,
"sha1": "a6f6c03a93e43c6c2ff645f287a0f1627e6265b2",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/1479-5868-2-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6f6c03a93e43c6c2ff645f287a0f1627e6265b2",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245414303 | pes2o/s2orc | v3-fos-license | Efficient Mixing of Microfluidic Chip with a Three-Dimensional Spiral Structure
In this paper, a helical three-dimensional (3D) passive micromixer is presented. A three-dimensional spiral passive micromixer is fabricated through 3D printing and polymer dissolution technologies. The main process is as follows: first, a high-impact polystyrene (HIPS) material was used to make a 3D spiral channel mold; second, the channel mold was dissolved in limonene solvent. Mixing experiments show that the single-helix structure can improve the mixing efficiency to 0.85, compared with a mixing efficiency of 0.78 in the traditional T-shaped two-dimensional (2D) planar channel. Different screw diameters, screw numbers, and flow rates were used to test the mixing effect. The optimal screw diameter is 5 mm, and the optimal flow rate is 2.0 mL/min. Finally, the mixing efficiency of the 3D helical micromixer can reach 0.948. The results show that the three-dimensional helical structure can effectively improve the mixing efficiency.
INTRODUCTION
Micromixers are widely used in micro/nanomanufacturing and are important components in the microfluidic field. 1 They play a key role in medical treatment, chemistry, and other fields. 2 The traditional micromixer is made as a planar single-layer structure. 3−5 Nowadays, with the improvement of 3D printing technology and the application of new materials, the production methods for micromixers have become more diverse. 6

At present, two-dimensional micromixers are still widely used, 7 such as X-, Y-, Z-, and T-shaped passive micromixers. 8 Their protruding structures are convenient to process, but the overall mixing efficiency is low. A nature-inspired mini-channel mixer with a bionic structure has been prepared to improve mixing efficiency; 9 however, its chips are too bulky, and the planar network structure is too complicated and very difficult to fabricate. An electroosmosis-pressurized combined micromixer has been made by 3D printing polylactic acid; 10 its mixing efficiency was improved by electroosmotic pressurization on the basis of a two-dimensional structure, but this requires extra resources. A lost-wax casting method was used to fabricate three-dimensional channels, 11 but preparing channels from molten wax requires high temperatures and easily leaves residue, so the resulting channel quality is poor. A three-dimensional micromixer has also been realized by ultrafast laser internal processing of glass, 12 but the preparation process requires treatment with potassium hydroxide, which is dangerous. A micromixer fabricated by fused deposition modeling (FDM) 3D printing 13 offers low production cost and fast production time with no further processing steps, but its actual use requires a matching rotating platform. A monolithic glass microfluidic system of a 3D micromixer with an impeller has been proposed, 14 but the impeller structure is complicated and the processing time is more than 24 h.

This paper presents a new micromixer with a three-dimensional helical structure. Combining 3D printing and polymer dissolution technologies, a micromixer with a 3D spiral channel can be manufactured directly, without a bonding process. To improve mixing performance, the mixing efficiency was investigated by simulating six kinds of structures. The effects of three screw pitches and three screw numbers on mixing efficiency were then verified by experiments, as was the effect of flow velocity on the mixing efficiency of the six structures.
MATERIALS AND METHODS
High-impact polystyrene (HIPS) and acrylonitrile butadiene styrene (ABS) were purchased from Flashforge, China. Sylgard 184 silicone elastomer and curing agent were purchased from Dongguan Sanbang New Material Technology, China. A 3D printer (Flame, Flashforge) was employed to print the pouring mold and the microchannel mold with HIPS and ABS (Flashforge, China). Limonene solution (Flashforge, China) was used to dissolve the mold when fabricating the microfluidic chip in poly(dimethylsiloxane) (PDMS). An injection pump (LSP02-2B, Longerpump) was utilized for the mixing experiments. The cross section of the microchannel was observed with an inverted fluorescence microscope (OLYMPUS IX73, Japan). The theoretical simulations in this paper used COMSOL 5.6; the specific modules were the low-Reynolds-number turbulence module and the dilute species transport module. Figure 1 shows the fabrication schematic of the 3D helical micromixer proposed in this paper. The specific production process and data, including the time required for each step, are shown in Table 1. It is worth noting that, owing to the small size of the HIPS mold, the printing time per mold is less than 1 min, so step 2 takes 20 min.
A comparison was made with three other materials, 15 as shown in Table 2. Among them, glass 16 and silicon 17 are adopted as chip materials with high pressure resistance and good thermal stability; however, their production requires photolithography and bonding technology, the manufacturing process is complex and costly, and the final chip has a complex structure and poor durability. Poly(methyl methacrylate) (PMMA) 18,19 has the advantages of a low melting point and good optical properties, but its production requires laser cutting and a bonding process, which cannot produce conventional three-dimensional channels. The PDMS material used in this paper is formed monolithically by curing, so the chip requires no bonding. Therefore, three-dimensional structures can be prepared, and the material offers some flexibility and wider applicability compared with the other three materials.
RESULTS AND DISCUSSION
3.1. Mixing Efficiency of Planar and Three-Dimensional Structures. Figure 2 shows the mixing efficiency comparison of a T-shaped channel 8 and a three-dimensional spiral channel; the measuring points were placed at the same channel lengths. In Figure 2(I), the average mixing index is taken (blue and red lines): the average mixing efficiency of the three-dimensional spiral channel is 0.809, and that of the planar T-shaped channel is 0.749. Figure 2 also shows the specific mixing efficiency values at the corresponding positions of the two mixers; the mixing efficiency of the three-dimensional channel is generally about 12% higher than that of the two-dimensional channel. The mixing efficiency is calculated by eq 1 20

M = 1 − [ (1/N) Σᵢ₌₁ᴺ ((Xᵢ − X̄)/(Xᵢ,unmix − X̄))² ]^(1/2)  (1)

where N is the number of selected points, Xᵢ is the grayscale of the ith point on the cross section, Xᵢ,unmix is the grayscale of each point in the case of complete unmixing, and X̄ is the average grayscale in the case of final mixing. From Figure 2(I), it can be seen that the mixing efficiency of the three-dimensional spiral micromixer is significantly improved compared with that of the T-channel micromixer, and a mixing efficiency of up to 85.6% can be achieved for the two liquids. Because the spiral structure has a more complex channel shape than the planar structure, the three-dimensional geometry drives more drastic changes in the liquid flow as the two liquids pass through the channel, enhances the vortex effect, and leads to a stronger mixing effect. The results show that the three-dimensional helical structure has stronger mixing performance than the planar T-shaped structure, an improvement of 7.1% over the same channel length.

3.2. Hybrid Chip Simulation. 3.2.1. Simulation of the Spiral Structure Channel. Figure 3 shows the mixing simulation and local experimental images of the two structures, where A, B, C, D and A1, B1, C1, D1 are the simulation and physical images of the two channel structures, respectively. Comparison of the experimental and simulation images shows that the main flow form in the planar T-shaped mixer is laminar, whereas the main flow form in the three-dimensional spiral mixer is turbulent. 21,22 The mixing efficiency can be compared at the four positions shown in Figure 2; the spiral structure gives the better mixing effect. The simulation of the T-channel structure involves the steady Navier−Stokes equation (eq 2) 23

ρ(u·∇)u = −∇p + μ∇²u  (2)

where ρ is the density (kg/m³), u is the velocity (m/s), p is the pressure, and μ is the viscosity (N·s/m²). By comparison, the helical structure with the final number of turns reaches a maximum mixing efficiency of 0.856. In this paper, the finite element method is used for the hydrodynamic analysis: the computational domain is divided into a grid, and a mesh is generated for the single-helix structure. The specific mesh data are shown in Table 3.
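To make eq 1 concrete, the following minimal sketch (in Python, with made-up grayscale values rather than measured data) computes the mixing index for one cross section:

# Illustrative mixing-index calculation following eq 1: grayscale values
# sampled across the outlet cross section, compared with the unmixed case.
import numpy as np

x = np.array([118, 122, 125, 121, 119, 124], dtype=float)     # mixed grayscales (placeholder)
x_unmix = np.array([40, 40, 40, 200, 200, 200], dtype=float)  # unmixed case (placeholder)
x_bar = x.mean()  # average grayscale of the fully mixed state

# M = 1 - sqrt( (1/N) * sum( ((x_i - x_bar)/(x_unmix_i - x_bar))^2 ) )
ratio = (x - x_bar) / (x_unmix - x_bar)
mixing_index = 1.0 - np.sqrt(np.mean(ratio**2))
print(f"mixing index: {mixing_index:.3f}")  # values close to 1 mean well mixed

In practice, the grayscale arrays would be sampled from microscope images of the channel cross section at each measuring position.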
There is a nonoverlapping control volume around each grid point; a set of discrete equations is obtained by integrating the governing differential equations over each control volume. Three-dimensional discrete equations are used in this paper. The low-Reynolds-number K−ε model is given by eqs 3 and 4, where μt is the turbulent viscosity; n is the wall-normal coordinate; u is the flow rate; C1ε, C2ε, and Cμ are empirical constants; and σk and σε are the Prandtl numbers corresponding to the turbulent kinetic energy k and the dissipation rate ε, respectively. The constant values are given in Table 4. The diffusion coefficient in the governing equations includes both a turbulent and a molecular diffusion coefficient, so the turbulent Reynolds number must be introduced: on the basis of eq 3, model (4) is obtained by introducing the coefficients f1, f2, f3, and fu. Gk is the turbulent kinetic energy generation term caused by the average velocity gradient; its calculation formula is given in eq 5.
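For reference, a textbook form of the low-Reynolds-number K−ε transport equations consistent with the symbols defined above is sketched below; the exact damping functions f1, f2, and fμ vary between specific low-Re models, so this should be read as an illustrative form rather than the authors' exact formulation:

$$\frac{\partial (\rho k)}{\partial t} + \nabla\cdot(\rho k\,\mathbf{u}) = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right] + G_k - \rho\varepsilon \tag{3}$$

$$\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla\cdot(\rho \varepsilon\,\mathbf{u}) = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla \varepsilon\right] + C_{1\varepsilon} f_1 \frac{\varepsilon}{k} G_k - C_{2\varepsilon} f_2\, \rho\,\frac{\varepsilon^2}{k} \tag{4}$$

$$\mu_t = \rho\, C_\mu f_\mu \frac{k^2}{\varepsilon}, \qquad G_k = \mu_t\!\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\!\frac{\partial u_i}{\partial x_j} \tag{5}$$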
The coefficients f1, f2, f3, and fu are the modification parameters of the standard K−ε equations; the present structure produces a low Reynolds number, and its calculation equation is shown in eq 6. The micromixer has a mixing channel width of 200 μm. The same design was adopted for the numerical modeling of the mixing process using COMSOL Multiphysics 5.6; the data of the developed experiments are shown in Table 5. Figure 4(VIII) shows the comparative simulated mixing efficiency trends of the three structures with different screw diameters: the simulated mixing efficiency increases with both the number of turns and the screw diameter. The Navier−Stokes equation (eq 2) is involved in the simulation.
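As a rough sanity check on the low-Reynolds-number claim, the channel Reynolds number can be estimated from the stated channel width and flow rate. The sketch below assumes water-like fluid properties and a square 200 μm × 200 μm cross section; the channel height is not stated in this section, so the result is only indicative.

# Illustrative Reynolds-number estimate for the mixing channel
# (assumes water at room temperature and a square cross section).
def reynolds_number(flow_rate_ml_min: float,
                    width_m: float = 200e-6,
                    height_m: float = 200e-6,   # assumed, not given in the paper
                    density: float = 1000.0,    # kg/m^3, water
                    viscosity: float = 1.0e-3): # Pa*s, water
    q = flow_rate_ml_min * 1e-6 / 60.0          # mL/min -> m^3/s
    area = width_m * height_m                   # cross-sectional area
    velocity = q / area                         # mean velocity, m/s
    d_h = 2 * width_m * height_m / (width_m + height_m)  # hydraulic diameter
    return density * velocity * d_h / viscosity

print(f"Re at 2.0 mL/min: {reynolds_number(2.0):.0f}")  # ~167, i.e., low-Re regime

Under these assumptions the flow is nominally laminar, which is consistent with the use of a low-Reynolds-number turbulence model rather than a fully turbulent one.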
The numerical definition parameters and boundary conditions involved in the simulation are shown in Tables 6 and 7. Table 6 shows the numerical simulation of boundary conditions. Table 7 shows the numerical simulation of variable parameters.
3.2.3. Velocity Simulation. Figure 5(I−III) shows the simulated mixing efficiency of the two-, three-, and four-turn helical structures at four different flow rates, and Figure 5(IV−VI) shows the simulated mixing efficiency of the three helical-diameter structures at the same four flow rates. The simulation velocities are listed in Table 8, and the final simulation results show that the mixing efficiency increases with velocity. When the screw diameter is 5 mm and the flow rate is 2.0 mL/min, a maximum mixing efficiency of 0.95 can be achieved. Figure 6 shows the mixing efficiency curves of the three screw diameters: Figure 6(I−III) gives the mixing efficiency values of helical structures with 3, 4, and 5 mm diameters, with the 3D insets showing the corresponding structures and the measuring positions for mixing efficiency; Figure 6(IV) compares the mixing efficiency trends of the three structures; and Figure 6(V) compares their final mixing efficiency values. The mixing efficiency increases with the winding number, and the 5 mm diameter spiral structure achieves a maximum mixing efficiency of 0.91. When the fluid passes through the channel and reaches the starting position of the spiral, turbulence occurs. As the screw diameter increases, the structure through which the fluid flows changes more, which helps disrupt the intermolecular forces; increasing the contact angle of the fluid as it moves through the spiral structure can fully disrupt the intermolecular forces inside the fluid. The 5 mm helical structure therefore gives the higher mixing efficiency, reaching 0.91. According to the kinetic-energy K−ε principle at a low Reynolds number (eqs 7−10), where ε is the turbulence dissipation rate, p is the pressure, μ is the viscosity, ∇ is the gradient operator, ρ is the density, μT is the eddy viscosity, l* is the radius coefficient, and lw is the radius of the spiral, it can be seen that as the radius increases, the radius coefficient increases, the eddy viscosity increases, and the kinetic-energy mixing efficiency finally improves. Figure 6(V) also shows that the turbulent kinetic energy increases with flow velocity: the accelerated flow creates an elongation effect that further mixes the flow layers and improves the mixing quality. Figure 7 shows the mixing efficiency curves of the three winding numbers. When the fluid moves in the helical structure and passes through more helical turns, the intermolecular forces within the fluid can be fully disrupted; the four-loop spiral structure therefore gives the higher mixing efficiency, reaching 0.90. Figure 8 shows the mixing efficiency curves of the three winding numbers at different flow rates, with Figure 8(I−IV) giving the values for the two-, three-, and four-turn structures at the four flow rates. Figure 9 shows the mixing efficiency curves of the three screw diameters at different flow rates: as can be seen from Figure 9(I−IV), as the screw diameter increases from 3 to 5 mm, the mixing efficiency increases from 0.90 to 0.95. Simultaneously, Figure 9(V) shows that the turbulent kinetic energy increases with flow velocity.
The accelerated flow creates an elongation effect that further mixes the flow layers and improves the mixing quality; the mixing efficiency ranges from 0.90 to 0.95. To sum up, the 5 mm helical structure reaches the maximum mixing efficiency of 0.95 at a flow rate of 2.0 mL/min. Figure 9(VI) shows the physical image under the microscope together with a size table, and the table in Figure 9(VII) lists the channel size, the mixed materials, and the corresponding final mixing efficiency at the four flow rates.
3.3. Comparison of Different Micromixers.
With the progress of micromixer preparation methods, 24,25 this paper compares four production methods, as shown in Table 9. The surface modification method can only be used to fabricate two-dimensional micromixers. 26 A lost-wax casting method has been used to fabricate three-dimensional channels, 11 but melting wax to prepare a 3D channel requires high temperatures, which affects channel formation. A uniform three-dimensional micromixer can be made by laser processing of glass, and the resulting channel quality is good; even so, this process takes more than 24 h. Low-cost FDM 3D printers have also been used to manufacture micromixers, 10 reducing the production time to 2 h; however, the production process requires special equipment and is not easy to scale up. In contrast, the 3D helical micromixer studied in this paper, made using 3D printing and polymer dissolution, has the advantages of a simple manufacturing method, a short preparation time, and the ability to produce complex 3D helical structures.
CONCLUDING REMARKS
A method for a three-dimensional helical micromixer with the advantages of high mixing efficiency and a highly complex structure is introduced in this paper. The conclusions are as follows:
(A) Compared with a T-shaped straight channel of the same length and cross-sectional area, the mixing efficiency of the three-dimensional single-helix channel increases from 0.78 to 0.85. Compared with the planar structure, the three-dimensional spiral structure therefore increases the mixing efficiency.
(B) With the same cross-sectional area, the mixing efficiency can be increased by adding spiral turns. Specifically, the mixing efficiencies of the two-loop, three-turn, and four-loop helical structures are 0.88, 0.89, and 0.91, respectively.
(C) With the same number of turns, the influence of the screw diameter on the mixing efficiency of the 3D helical micromixer is obtained: the mixing efficiencies are 0.88, 0.89, and 0.92 for the 3, 4, and 5 mm diameters, respectively. Increasing the diameter of the spiral structure therefore increases the mixing efficiency of the micromixer.
(D) According to the low-Reynolds-number K−ε principle, mixing efficiency experiments were carried out with different structures and different flow rates. The mixing efficiency of both structure families increased with flow rate, finally reaching 0.93 for the different turn numbers and 0.946 for the helical structures with different diameters.
In conclusion, compared with the conventional planar structure, the micromixer proposed in this paper can effectively improve the mixing efficiency. Finally, the micromixer has the best mixing performance when the screw diameter is 5 mm and the flow rate is 2.0 mL/min. | 2021-12-23T16:08:25.779Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "f8b378e5bfc0ebec966f134728726e43fe173a6f",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c06352",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d3183b07677e9494c7018bb6324e1d51d20fd45",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270841594 | pes2o/s2orc | v3-fos-license | Loss of Hormone Receptor Expression after Exposure to Fluid Shear Stress in Breast Cancer Cell Lines
Following metastatic spread, many hormone receptor positive (HR+) patients develop a more aggressive phenotype with an observed loss of the HRs estrogen receptor (ER) and progesterone receptor (PR). During metastasis, breast cancer cells are exposed to high magnitudes of fluid shear stress (FSS). However, the role of FSS in the regulation of HR expression and function during metastasis is not fully understood. This study was designed to elucidate the impact of FSS on HR+ breast cancer. Utilizing a microfluidic platform capable of exposing breast cancer cells to FSS that mimics in situ conditions, we demonstrate the impact of FSS exposure on representative HR+ breast cancer cell lines through protein and gene expression analysis. Proteomics results demonstrated that 540 total proteins and 1473 phospho-proteins significantly changed due to FSS exposure, and pathways of interest included the early and late estrogen responses. The impact of FSS on the response to 17β-estradiol (E2) was next evaluated, and gene expression analysis revealed repression of ER and of E2-mediated genes (PR and SDF1) following exposure to FSS. Western blot demonstrated enhanced phosphorylation of mTOR following exposure to FSS. Taken together, these studies provide initial insight into the effects of FSS on HR signaling in metastatic breast cancer.
Introduction
Metastatic cancer is associated with poor patient prognoses and drug resistance [1]. This is especially critical in the histologically identified hormone receptor positive (HR+) breast cancer subtype, which accounts for ~70% of breast cancer cases and relies on estrogen receptor alpha (ER) for proliferation and tumor growth [2]. Although primary HR+ tumors are known to be responsive to endocrine therapy, once metastasized, these tumors are more aggressive and less sensitive to standard of care endocrine therapies. The more aggressive behavior observed in HR+ metastatic breast cancer is hypothesized to occur through multiple mechanisms, including: (i) loss of the estrogen receptor (ER), (ii) acquisition of additional mutations, and/or (iii) alterations in estrogen and growth factor-mediated signaling cascades [3,4]. There is a need to better understand the mechanisms driving HR+ breast cancer growth and survival following metastasis, as metastatic breast cancer accounts for 90% of cancer-related deaths [1]. Development of macro-metastasis is an inefficient process, with only a minority of breast cancer cells successfully establishing at distal tissue sites [5]. Growth at secondary sites requires the acquisition of abilities that promote survival in a new, unfavorable microenvironment; current studies suggest roles for the regulation of growth and survival pathways as well as cytokine release [5]. To date, it is unclear if these pro-survival adaptations were present in the primary tumor or acquired during metastatic spread in the vasculature. Current in vitro methods to study cancer metastasis are designed to interrogate cells during the initial stages of the cascade (e.g., invasion of the basement membrane) [6], and there are limited approaches to study the later stages of the metastatic cascade in vitro. Specifically, there is a lack of pre-clinical models that allow for the interrogation of biophysical forces exerted on circulating cancer cells, such as fluid shear stress (FSS), the force per unit area acting on a cell in the vasculature, which can cause genotypic and phenotypic alterations in cancer. Physiological shear stress magnitudes vary greatly depending on the location of the measurement, from almost 0 dyn/cm² in the microcirculation to ~120 dyn/cm² in the great arteries of the heart. The average FSS magnitudes imposed upon circulating cells are 1−6 dyn/cm² and 15−20 dyn/cm² for venous and arterial flow, respectively. Most studies investigating how exposure to FSS changes cellular phenotypes involve flowing cells through micro-tubing using either a syringe pump or a peristaltic pump [7−10]. While effective, the large diameter of the tubing (~500 µm−10 mm) limits the ability to deliver uniform magnitudes of FSS to individual cells. Microfluidic devices are a superior alternative to overcome this limitation and provide a greater degree of control over the durations and magnitudes of FSS applied to single cells. Prior studies examined how FSS exposure affects endothelial cell elongation and proliferation [11,12], tumor-endothelial cell interaction [13], cellular deformation [14,15], drug toxicity [16], and FSS-mediated epithelial to mesenchymal transition (EMT) and cancer stem cell (CSC) biology [17−20]. Prior studies on breast cancer have utilized the triple negative breast cancer cell line MDA-MB-231 or the hormone receptor positive cell line MCF-7. Collectively, the influence of FSS on HR+ breast cancer is understudied. There is a lack of studies designed to determine the influence of FSS
on HR+ breast cancer both immediately after exposure to FSS and following extended time points in culture. End-point analyses in prior studies focused on FSS were primarily obtained directly after exposure to FSS, with one study prolonging growth to 1 week [17−19,21]. We previously employed a microfluidic approach to examine FSS-induced deformation of breast cancer cells and confirmed significant heterogeneity in the single cell response [15]. That study focused on biophysical changes in cells due to exposure to FSS and neglected both the role of FSS in intracellular signaling and its long-term impact. To address this limitation, we recently designed and utilized a modular microfluidic device that effectively and accurately recapitulates the fluid shear stress that metastasizing breast cancer cells experience while circulating through the human vasculature [22]. The modular system consists of two separate microfluidic devices: a shearing device containing a single fluidic channel capable of delivering uniform magnitudes of FSS to cells that mimic those in the human vasculature, and a microwell trapping array capable of isolating and studying single cancer cells. A unique feature of this device was the ability to perform either on-chip single cell immunostaining (using the two devices) or off-chip bulk analysis with PCR or Western blotting (using only the shearing device). The study performed here expands on our initial work to better define how FSS alters intracellular signaling in HR+ breast cancer.
Exposure to Fluid Shear Stress Enhances Phospho-Proteins Associated with Cell Death and DNA Damage Response in HR + Breast Cancer Cells
We have previously utilized the modular microfluidic platform to perform single cell analysis of FSS-induced changes in markers of proliferation and protein phosphorylation [22]. We demonstrated that exposure to FSS elevated phospho-proteins in the AKT/mTOR signaling pathways immediately after exposure to FSS [22]. To better understand the full scope of intracellular signaling cascades activated by FSS, we expanded on this work: we exposed the HR+ breast cancer cell line MCF-7 to 10 dyn/cm² FSS using only the shearing device and performed total and phospho-proteomics immediately after exposure to FSS. Comparisons were made to non-shear control MCF-7 cells that were maintained in suspension but not exposed to FSS. Results demonstrated that 540 total proteins and 1473 phospho-proteins significantly changed following exposure to FSS (Figure 1a,b). Pathway analysis was performed in Enrichr [23−25] to determine trends in pathways altered by FSS exposure for both phospho- and total proteins. Shared pathways of interest for both phospho- and total proteins included mitotic spindle, G2-M checkpoint, and TGFβ signaling (Figure 1c,d). Unique pathways of interest for total protein changes demonstrated alterations to the p53 pathway, mTORC1 signaling, and oxidative phosphorylation. Overall, there was a trend for total protein changes to be associated with pathways commonly altered with cell death and DNA-damage response. An in-depth evaluation of total proteins enhanced following exposure to FSS demonstrated increased total protein expression linked to these processes, such as BAK1, CBX1, DNTTIP2, FOS, H2AW, H2AX, MACROH2A1, POLG2, and TOP2A (Figure 1d). Significantly altered phospho-proteins demonstrated a similar trend favoring pathways commonly altered with cell death and DNA-damage response, with observed changes in proteins associated with the UV response and apoptosis pathways (Figure 1e). Similarly to the total protein changes, phosphorylation of proteins that directly regulate DNA (p-ATRX, p-DDX21, p-H1-5, p-TP53BP1) was also observed; however, the phospho-proteomics also demonstrated enrichment of proteins commonly associated with growth factor and extracellular signaling responses (p-AKT1S1, p-ATP2B1, p-EIF4B, p-ERBB2, p-IL10RB, p-IL13RA1, p-LMNA, p-MKI67, p-PSEN1, p-RET, p-RPTOR) (Figure 1f). Notably, there was an enrichment in phospho-proteins associated with the mTOR signaling cascade and an increase in phospho-proteins associated with the early and late estrogen responses, which was not observed in the total protein changes (Figure 1e).
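For readers who want to reproduce this style of analysis, the sketch below shows one way to query Enrichr's Hallmark gene sets from Python via the gseapy package; the gene list is a placeholder, not the study's actual hit list.

# Illustrative Enrichr query with gseapy; the gene list below is a
# placeholder, not the proteins identified in this study.
import gseapy as gp

hits = ["BAK1", "FOS", "H2AX", "TOP2A", "RPTOR", "ERBB2"]  # example genes
enr = gp.enrichr(
    gene_list=hits,
    gene_sets=["MSigDB_Hallmark_2020"],  # Hallmark pathways, as used here
    organism="human",
    outdir=None,  # keep results in memory instead of writing files
)
# Report pathways passing an adjusted p-value cutoff of 0.05
print(enr.results.loc[enr.results["Adjusted P-value"] < 0.05,
                      ["Term", "Adjusted P-value", "Genes"]])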
Exposure to Fluid Shear Stress Regulates HR Expression in MCF-7 Breast Cancer Cells
Proteomics data revealed alterations to proteins associated with the late and early estrogen responses; however, there were no observed changes to total ER protein levels immediately after exposure to FSS (Figure 2a), and phospho-ER was not detected at any known phosphosite. To gain an increased understanding of the effects of FSS exposure on HR expression and function, MCF-7 cells were exposed to FSS as described above and then either collected immediately for gene expression analysis or seeded on tissue culture plastic (TCP) and maintained in culture. Non-shear control MCF-7 cells were maintained in suspension but not exposed to FSS; similarly to FSS-exposed cells, they were collected immediately or seeded on TCP and maintained in culture. Shear- and non-shear-exposed cells were collected at 24 h post exposure to FSS. Gene expression analysis was performed for ER and the ER-mediated gene PR. Results demonstrated no change in gene expression immediately after exposure to FSS or at 24 h post exposure compared to non-FSS-exposed control cells (Figure 2b,c). To determine if FSS altered estrogen-mediated, ER-regulated genes in HR positive cells, we next exposed MCF-7 cells to FSS and then seeded the cells in 5% dextran-stripped FBS media. After 24 h of culture, cells were treated with 17β-estradiol (E2) for 24 h to determine alterations in E2-mediated gene expression. After E2 treatment, control non-shear-exposed MCF-7 cells demonstrated the expected significant increase in expression of the ER-mediated genes PR and SDF1 (Figure 2d), with increases in expression by factors of 3.48 ± 0.11 and 2.69 ± 0.27, respectively. In contrast, MCF-7 cells exposed to FSS did not demonstrate a significant change in gene expression for either PR or SDF1 (Figure 2e). Further, while pre-treatment with tamoxifen and ICI significantly repressed ER expression in the non-shear-exposed MCF-7 cells by factors of 0.69 ± 0.01 and 0.59 ± 0.06, respectively, there was no change in ER expression with endocrine treatment in FSS-exposed cells (Figure 2d,e). Both non-shear and FSS-exposed MCF-7 cells demonstrated significant repression of ER-mediated gene expression for PR and SDF1 with treatment with endocrine inhibitors (Figure 2d,e). We next sought to determine if FSS altered proliferation of HR+ breast cancer cells. MCF-7 cells were grown in media free of exogenous estrogens and, after 24 h, cells were exposed to FSS and then immediately plated in a 96 well plate. Cells were given 24 h to adhere and then treated with E2, tamoxifen, or ICI for 3 days. Despite the observed changes in ER-mediated genes post exposure to FSS, MCF-7 cells demonstrated no change in basal proliferation with endocrine treatment (Supplementary Figure S1).
Due to the observed alterations in endocrine-mediated HR expression after exposure to FSS, the long-term impact of FSS on HR expression in MCF-7 cells was evaluated next. To achieve this, MCF-7 cells were exposed to 10 dyn/cm² of FSS and then seeded in culture on TCP in normal growth media. Non-shear control MCF-7 cells were maintained in suspension but not exposed to FSS; similarly to FSS-exposed cells, they were seeded on TCP and maintained in culture. Following a period of growth, shear-exposed and non-shear-exposed cells were collected at intervals of 48 h and 1, 2, and 3 weeks and analyzed for gene expression changes. At 48 h, gene expression for ER (0.62 ± 0.05), PR (0.58 ± 0.09), and SDF1 (0.52 ± 0.10) was significantly down-regulated in MCF-7 cells exposed to FSS compared to non-sheared control cells (Figure 3a). Repression of ER (0.65 ± 0.05) and PR (0.57 ± 0.02), but not SDF1 (0.86 ± 0.37), was sustained at 1 week post exposure to FSS (Figure 3a). When evaluated at 2 and 3 weeks post exposure to FSS, ER, PR, and SDF1 demonstrated no significant change in gene expression compared to non-sheared MCF-7 control cells (Figure 3a). Analysis of the non-genomic ER receptor, GPR30, demonstrated no significant change in gene expression after exposure to FSS at 48 h or after longer periods in culture, suggesting that the impact of FSS exposure may be specific to ER. To further demonstrate that the observed repression of HR gene expression is mediated by the forces exerted by FSS, and not due to loss of adhesion while in suspension, ER and PR gene expression for non-shear suspended and shear-exposed MCF-7 cells was compared to that of MCF-7 cells maintained on TCP without a period of suspension. Results demonstrated that non-shear-exposed MCF-7 cells had gene expression levels similar to those of adherent MCF-7 cells, while FSS-exposed cells had a loss of ER and PR, with changes in gene expression by factors of 0.44 ± 0.05 and 0.46 ± 0.05, respectively. We next evaluated HR and SDF1 gene expression in two additional cell lines, ZR-75 and MCF-7-Y537S, an MCF-7 variant with CRISPR-Cas9 genome editing to insert a constitutively active ER mutation [26]. Results from the ZR-75 cell line demonstrated a significant repression of ER (0.68 ± 0.06) at 48 h; HR expression and SDF1 expression were not significantly different at any other time point (Figure 3c). The MCF-7-Y537S cell line demonstrated no change in HR expression at any time point, while SDF1 gene expression was significantly enhanced at 2 days post exposure to FSS (Figure 3d). Taken together, this suggests that exposure to FSS mediates ER expression in cell lines with wild type (WT) ER but not in ER mutant lines.
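Fold changes of this kind are conventionally obtained from qPCR data via the 2^(−ΔΔCt) method; whether that exact scheme was used here is not stated in this section, so the sketch below is illustrative, with made-up Ct values and an assumed reference gene.

# Illustrative 2^(-ddCt) relative-expression calculation with placeholder
# Ct values; the actual reference gene and Ct data are not shown here.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare treated vs control
    return 2 ** (-dd_ct)                                # fold change vs control

# Example: a ddCt of +1 corresponds to a fold change of 0.5 (repression)
fold = relative_expression(ct_target_treated=25.0, ct_ref_treated=18.0,
                           ct_target_control=24.0, ct_ref_control=18.0)
print(f"fold change: {fold:.2f}")  # 0.50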
Exposure to Fluid Shear Stress Induces Activation of mTOR Signaling in HR + Breast Cancer Cells
Our phospho-proteomics data demonstrated increased phosphorylation of growth factor-mediated extracellular receptors such as p-RET and p-ERBB2 and of the downstream signaling kinases p-AKT1S1, p-RPTOR, and p-EIF4B (Figure 1f), proteins with a known association with altered ER signaling through AKT/mTOR crosstalk [27,28]. Previous work using the modular microfluidic device demonstrated elevated p-AKT (Ser473) and p-mTOR (Ser2448), but not p-ER (Ser167), directly after exposure to FSS [22]. Due to this, and the observed repression of HR expression following exposure to FSS, we next sought to determine if FSS induced long-term activation of the AKT/mTOR signaling axis. Western blot analysis was performed for p-AKT (Ser473), p-AKT1S1 (Thr246), and p-mTOR (Ser2448) in the HR+ breast cancer cell lines MCF-7, ZR-75, and MCF-7-Y537S following exposure to 10 dyn/cm² of FSS using the microfluidic device, followed by seeding in culture and growth on TCP. Non-shear control cells were maintained in suspension but not exposed to FSS; similarly to FSS-exposed cells, they were seeded on TCP and maintained in culture. Following a period of growth, shear-exposed and non-shear-exposed cells were collected at intervals of 1, 2, and 3 weeks. MAPK activation was also evaluated by expression of p-ERK1/2 (Thr202/Tyr204), as it was previously demonstrated to be repressed by exposure to FSS [22]. Following 1 week on TCP post-exposure to FSS, p-mTOR (Ser2448) was significantly elevated by a factor of 3.89 ± 0.23 in the MCF-7 cell line compared to the non-sheared control cells (Figure 4a). p-mTOR levels were normalized to those of the non-sheared MCF-7 cells at 2 weeks post exposure to FSS. There was no observed increase in either p-AKT (Ser473) or p-AKT1S1 (Thr246) in MCF-7 cells exposed to FSS after 1 week in culture; furthermore, p-AKT (Ser473) and p-AKT1S1 (Thr246) were significantly repressed by factors of 0.56 ± 0.09 and 0.79 ± 0.001, respectively, at 3 weeks compared to non-sheared control cells (Figure 4a). There was no observed change in p-ERK1/2 (Thr202/Tyr204) at any time point in MCF-7 cells exposed to FSS. The ZR-75 cell line demonstrated no change in p-AKT (Ser473), p-AKT1S1 (Thr246), p-mTOR (Ser2448), or p-ERK1/2 (Thr202/Tyr204) following exposure to FSS and growth on TCP for 1, 2, and 3 weeks.
p-mTOR (Ser2448) demonstrated elevated phosphorylation at 1 week; however, the factor increase was variable for each replicate and started to decrease at 2 and 3 weeks post exposure to FSS. The MCF-7-Y537S cell line demonstrated no change in phospho-proteins at any time point (Figure 4c). p-AKT (Ser473) and p-ERK1/2 (Thr202/Tyr204) were elevated at 1 week; however, the magnitude of enhanced expression was variable and therefore not significant. This further suggests that the effects of FSS exposure on HR+ cells may be specific to WT ER cell lines. To determine if activation of p-mTOR occurred in breast cancer subtypes without ER, we next evaluated protein phosphorylation post exposure to FSS in the triple negative breast cancer (TNBC) cell line MDA-MB-231. Results demonstrated elevated p-mTOR (Ser2448) at 1 and 2 weeks post exposure to FSS; however, the factor increase was variable for each replicate and not significant (Figure 4d). The TNBC cell line demonstrated no change in phosphorylation of p-AKT (Ser473), p-AKT1S1 (Thr246), or p-ERK1/2 (Thr202/Tyr204). Taken together, the phospho-protein Western blot data demonstrate an increase in p-mTOR but no change in p-AKT (Ser473) or p-ERK (Thr202/Tyr204) signaling. While our prior studies demonstrated an increase in p-AKT (Ser473) and repression of p-ERK (Thr202/Tyr204) immediately after exposure to FSS, this was not observed following culture of cells for 1 week and beyond. We next evaluated gene expression for genes associated with both the mTOR and ERK1/2 signaling pathways following growth on TCP for 1, 2, and 3 weeks post exposure to FSS. Results demonstrated that the mTORC2-associated gene RICTOR was significantly repressed by a factor of 0.68 ± 0.03 in MCF-7 cells following 1 week of growth after exposure to FSS compared to non-FSS-exposed control cells (Figure 5a). This trend was not observed in the ZR-75 or MCF-7-Y537S cell lines (Figure 5b,c). Exposure to FSS significantly repressed the MAPK effectors JUN, c-FOS, and FRA2 by factors of 0.34 ± 0.14, 0.30 ± 0.11, and 0.67 ± 0.07, respectively, in MCF-7 cells at 1 week post exposure to FSS compared to non-FSS-exposed control cells (Figure 5d). There were no changes to MAPK effectors in the ZR-75 or MCF-7-Y537S cell lines (Figure 5e,f).
Discussion
Metastatic HR+ breast cancer has a response rate of 30% to endocrine therapy [29], suggesting the need to better understand signaling cascades activated in HR+ breast cancer in the metastatic setting. To better define signaling cascades activated during cancer metastasis, we performed proteomics on HR+ breast cancer cells exposed to FSS. Our proteomics data demonstrated enhanced phosphorylation of proteins associated with growth factor signaling cascades, with observed increases in phosphorylated HER2, AKT1S1, RET, and RPTOR (Figure 1). Further, Western blot confirmed the activation of p-mTOR in HR+ breast cancer cell lines with WT ER but not in the mutant ER cell line. The increased phosphorylation of mTOR signaling is in accordance with prior studies demonstrating that metastatic HR+ breast cancer carries amplifications of and mutations in the AKT pathways [29]. Further, activation of the AKT/mTOR pathway is observed in metastatic tumors [30−32], and increased phosphorylation of AKT, mTOR, and HER2 correlates with poor disease-free survival [33]. While additional tests are required, the data presented here suggest that exposure to FSS induces mTOR signaling in the metastatic setting in HR+ cancer cells with WT ER. Evaluation of prior published work on the MCF-7-Y537S mutant vs. WT MCF-7 cells demonstrates that the ER mutant cell line has elevated pathways associated with mTORC1 signaling [4,34]. Further, our phospho-proteomics studies demonstrated elevated levels of proteins in pathways associated with the estrogen response and the p53 pathway. These pathways are also observed to differ between the MCF-7-Y537S and MCF-7 WT ER cell lines. While additional studies are required, the work performed here draws initial insight into FSS survival pathways that may be enhanced in ER mutant cell lines.
In addition to mTOR, the increased phosphorylation of HER2 signaling is also in accordance with prior studies in which metastatic HR+ breast cancer had enhanced HER2 signaling [29]. The observed increase in p-HER2 (S807) in the proteomics data is not currently well described or documented in tumor data. The evaluation of HER2 phosphorylation in HER2-low tumors warrants further investigation to enhance targeted therapy options in the metastatic setting of HR+ breast cancer. Prior clinical trials evaluating treatment of breast cancer in the metastatic setting demonstrated that patients with low HER2 expression responded well to trastuzumab deruxtecan, with increased overall patient survival [35]. While not evaluated in this study, an additional phospho-protein target of interest from the proteomic data was p-RET (S699). Prior RNA sequencing studies of primary and matched brain metastases have identified RET as a highly upregulated kinase in breast cancer [27]. Further, RET expression is observed to correlate with ER expression and to activate ER phosphorylation [36]. A direct link between RET and HR expression has not yet been made; however, RET is observed to be elevated in ER+ luminal B breast cancers, which historically show loss of PR [27]. The link between RET and ER+ breast cancer suggests that RET expression can modulate breast cancer cell motility and metastasis [36,37]. Further studies are needed to evaluate the connection between RET phosphorylation and patient outcome [38].
While not yet in clinical trials for breast cancer, RET has recently been proposed as a novel target for ER fusion and mutant breast cancers [27,39]. RET activity was not further evaluated in this study; however, future studies should aim to determine the role of RET in HR expression (Figure 6). In addition to defining phospho-proteins altered with exposure to FSS, this study demonstrates loss of HR expression in WT ER cell lines following exposure to FSS (Figure 3). Further, the ER mutant cell line that displays constitutively active ER did not demonstrate a loss of ER or PR expression. Loss of both PR and ER is more commonly observed in metastatic tumors compared to matched primary tumors. In a meta-analysis of 39 studies in the metastatic setting, 22.5% of tumors converted from ER+ to ER− between the primary and secondary site [40]. Others found that 30.63% of ER+ and 33.97% of PR+ patients' primary tumors converted from positive to negative after metastasis [41]. The positive-to-negative switch of both ER and PR was also associated with worse survival when compared to persistent positivity [41]. The conversion of HR+ primary breast tumors to HR− at the secondary site has been documented by many clinical studies [40,41]. To date, it is undetermined what causes this cellular loss of HR. One mechanism may lie in the increased phosphorylation of p-mTOR in HR+ breast cancer cells following exposure to FSS; HER2 and mTOR are known mediators of PR expression, and these signaling pathways regulate ER function [42,43]. The initial results of this study suggest FSS activation of growth factor signaling and loss of HR expression; however, one limitation of this study was the collection and evaluation of cells through bulk analysis. Our prior work utilizing single cell evaluation of cancer cells exposed to FSS demonstrated that the levels of p-AKT and p-mTOR activation vary at the single cell level [22]. Additional studies interrogating protein activation and HR expression in select cell populations would enhance the understanding of the impact of FSS on HR expression in the metastatic setting. Further, the observed variability from the bulk analysis and the loss of protein activation over time may be dampened through additional single cell analysis. The study presented here provides initial groundwork for uncovering mechanisms of hormone receptor conversion and suggests that exposure to FSS induces the activation of growth factor signaling; specifically, the data suggest that AKT/mTORC1 signaling is activated by FSS. Currently, both mTOR inhibitors and inhibitors of upstream mTOR mediators, such as RET, are candidate therapies for metastatic HR+ breast cancer [27,39] (Figure 6).
Exposing Cells to Fluid Shear Stress Using the Microfluidic Device
The design and fabrication of the silicon master wafer used for the devices is described in our prior work [22]. The shearing microfluidic device consisted of a single fluidic channel (1 m long, 70 µm wide, and 100 µm tall) fabricated by polydimethylsiloxane (PDMS) replication from the silicon wafer. The PDMS replicas (Sylgard 184, Ellsworth Adhesives, Germantown, WI, USA) were made by mixing a 10:1 ratio of base to curing agent, followed by degassing in a vacuum chamber to remove any bubbles. The PDMS mixture was poured onto the silicon master and cured for 12 h at 65 °C. Once completed, the PDMS replicas were cut to size with an X-Acto knife and removed from the silicon master. The inlet and the outlet were made using a blunted 18-gauge needle. Finally, the PDMS replica was bonded to a 25 mm × 75 mm glass slide using an O₂ Harrick Plasma PDC-32G basic plasma cleaner (Harrick Plasma, Ithaca, NY, USA) for 2 min and 30 s and then exposed to plasma for 15 s. The devices were left for at least 24 h to ensure proper bonding between PDMS and glass. A 15-cm-long section of Tygon tubing (0.022" inner diameter × 0.042" outer diameter, Cole-Parmer, Vernon Hills, IL, USA) per device was cut and used to connect the inlet of the device to a 23-gauge needle connected to a 1 mL syringe. A 14-cm length of tubing was used to connect the outlet of the device to a microcentrifuge tube to collect the cells post shearing. To prevent clumping and help maintain a single-cell suspension inside the syringe, all cell suspensions were diluted to 500,000 cells per 1 mL syringe and supplemented with 0.5% Pluronic™ F-68 Non-ionic Surfactant (100X) (Thermofisher, #24040032). A 10-syringe syringe pump (KDS 220CE, KD Scientific, Holliston, MA, USA) was used for all FSS exposure experiments to allow an increased number of shearing devices per experiment and to obtain sufficient cellular yields for all proteomics, gene expression, and Western blot studies.
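For context, the wall shear stress delivered by a rectangular channel of these dimensions can be approximated from the volumetric flow rate. The sketch below uses the common parallel-plate approximation τ = 6μQ/(wh²), with h the smaller channel dimension; the flow rate shown and the water-like viscosity are illustrative assumptions, not values from this paper.

# Illustrative estimate of wall shear stress in a rectangular microchannel
# using the parallel-plate approximation tau = 6*mu*Q/(w*h^2), where h is
# the smaller channel dimension. Viscosity is assumed water-like.
def wall_shear_stress_pa(flow_rate_ul_min: float,
                         w: float = 100e-6,      # larger dimension (height), m
                         h: float = 70e-6,       # smaller dimension (width), m
                         mu: float = 1.0e-3):    # Pa*s, assumed
    q = flow_rate_ul_min * 1e-9 / 60.0           # uL/min -> m^3/s
    return 6 * mu * q / (w * h * h)

tau = wall_shear_stress_pa(4.9)                  # example flow rate, uL/min
print(f"{tau:.2f} Pa = {tau * 10:.1f} dyn/cm^2") # ~1 Pa = ~10 dyn/cm^2

Under these assumptions, a flow rate on the order of a few microliters per minute reproduces the 10 dyn/cm² magnitude used in the experiments.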
Proteomics
Proteomics was performed through core services provided by the IDeA National Resource for Quantitative Proteomics.The samples used for proteomic/phospho-proteomic analysis were MCF-7 cells stripped for 24 h prior to the shearing event.The cells were loaded into the 1 mL syringe at a concentration of 500,000 cells/mL of stripped media, sheared, and collected immediately (on ice) post-shear.
In brief, the methods as provided by the core (CME bHPLC phosphoTMT methods, Orbitrap Eclipse) were as follows. Total protein from each sample was reduced, alkylated, and purified by chloroform/methanol extraction prior to digestion with sequencing grade modified trypsin/LysC (Promega, Madison, WI, USA). Tryptic peptides were labeled using a tandem mass tag (TMT) 10-plex isobaric label reagent set (Thermo) and enriched using High-Select TiO2 and Fe-NTA phospho-peptide enrichment kits (Thermo) following the manufacturer's instructions. Both enriched and unenriched labeled peptides were separated into 46 fractions on a 100 × 1.0 mm Acquity BEH C18 column (Waters, Milford, MA, USA) using an UltiMate 3000 UHPLC system (Thermo) with a 50 min gradient from a 99:1 to a 60:40 buffer A:B ratio under basic pH conditions, and then consolidated into 18 super-fractions. Each super-fraction was then further separated by reverse phase XSelect CSH C18 2.5 µm resin (Waters) on an in-line 150 × 0.075 mm column using an UltiMate 3000 RSLCnano system (Thermo). Peptides were eluted using a 75 min gradient from a 97:3 to a 60:40 buffer A:B ratio. Eluted peptides were ionized by electrospray (2.4 kV), followed by mass spectrometric analysis on an Orbitrap Eclipse Tribrid mass spectrometer (Thermo) using multi-notch MS3 parameters. MS data were acquired using the FTMS analyzer in top-speed profile mode at a resolution of 120,000 over a range of 375 to 1500 m/z. Following CID activation with a normalized collision energy of 31.0, MS/MS data were acquired using the ion trap analyzer in centroid mode and normal mass range. Using synchronous precursor selection, up to 10 MS/MS precursors were selected for HCD activation with a normalized collision energy of 55.0, followed by acquisition of MS3 reporter ion data using the FTMS analyzer in profile mode at a resolution of 50,000 over a range of 100−500 m/z.
• Buffer A = 0.1% formic acid, 0.5% acetonitrile
• Buffer B = 0.1% formic acid, 99.9% acetonitrile
• Both buffers adjusted to pH 10 with ammonium hydroxide for offline separation

Data analysis (phosphoTMT): Proteins were identified and reporter ions quantified by searching the UniprotKB database restricted to Homo sapiens (June 2021) using MaxQuant (Max Planck Institute, version 2.0.3.0) with a parent ion tolerance of 3 ppm, a fragment ion tolerance of 0.5 Da, a reporter ion tolerance of 0.001 Da, trypsin/P enzyme with 2 missed cleavages, variable modifications including oxidation on M, acetylation on the protein N-terminus, and phosphorylation on STY, and a fixed modification of carbamidomethyl on C. Protein identifications were accepted if they could be established with less than 1.0% false discovery. Proteins identified only by modified peptides were removed. Protein probabilities were assigned by the Protein Prophet algorithm [44]. TMT MS3 reporter ion intensity values were used to analyze changes in total protein in the unenriched lysate samples, while phospho (STY) modifications were identified using the samples enriched for phosphorylated peptides. The enriched and unenriched samples were multiplexed using two TMT10-plex batches, one for the enriched and one for the unenriched samples.
Following data acquisition and database searching, the MS3 reporter ion intensities were normalized using ProteiNorm [45]. The data were normalized using cyclic loess [46] and analyzed using proteoDA to perform statistical analysis with Linear Models for Microarray Data (limma) with empirical Bayes (eBayes) smoothing of the standard errors [46,47]. A similar approach was used for differential analysis of the phospho-peptides, with the addition of a few steps: the phospho-sites were filtered to retain only peptides with a localization probability >75%, peptides with zero values were removed, and intensities were log2 transformed. Limma was also used for the differential analysis. Proteins and phospho-peptides with an FDR-adjusted p-value < 0.05 and an absolute factor change >2 were considered significant.
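For illustration, a minimal sketch of the final significance filter is shown below, applying a Benjamini−Hochberg FDR correction and the stated cutoffs to a table of hypothetical per-protein results; the protein names and values are placeholders, and the actual analysis was performed with limma/proteoDA as described above.

# Illustrative significance filter: FDR-adjusted p < 0.05 and absolute
# factor change > 2. Protein names and values are placeholders.
import pandas as pd
from statsmodels.stats.multitest import multipletests

df = pd.DataFrame({
    "protein": ["BAK1", "FOS", "H2AX", "TOP2A"],
    "log2_fc": [1.4, 2.1, -0.3, 1.8],   # log2 factor change vs control
    "p_value": [0.001, 0.0004, 0.3, 0.02],
})
# Benjamini-Hochberg FDR adjustment across all tested proteins
df["fdr"] = multipletests(df["p_value"], method="fdr_bh")[1]
# |log2 FC| > 1 corresponds to an absolute factor change > 2
significant = df[(df["fdr"] < 0.05) & (df["log2_fc"].abs() > 1)]
print(significant)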
Western Blotting
A 10-syringe syringe pump (KDS 220CE, KD Scientific) was used for FSS exposure, allowing shearing events in multiple microfluidic devices at one time. After passing through the shearing device (sheared) or being held in suspension (non-sheared), cells were seeded on tissue culture plastic and grown in culture. Cells exposed to FSS were pooled from multiple microfluidic devices to achieve appropriate cell numbers for growth in culture. Cells were lysed using 150 µL Mammalian Protein Extraction Reagent (M-PER) (Thermofisher 78501) with 1X protease (Thermofisher 1862209) and phosphatase inhibitors (Thermofisher 1862495). The lysates were then centrifuged at 10,000 rpm at 4 °C for 10 min. A standardized amount of total protein, ~20 µg per well, was added to a new 1.5 mL microcentrifuge tube. Reducing agent (Life Technologies, Carlsbad, CA, USA, B0009) and NuPAGE LDS sample buffer (Life Technologies B0007) were added to the samples per the manufacturer's protocol, and the proteins were then heat denatured at 100 °C for 10 min. The samples were then run on a Bis-Tris NuPAGE gel (Invitrogen, Grand Island, NY, USA) in an Invitrogen mini gel tank (A25977) at 100 V for 1 h. Using iBlot and iBlot transfer stacks per the manufacturer's protocol (Invitrogen, Grand Island, NY, USA), protein was transferred from the gels to nitrocellulose. The blots were blocked by incubation in 3% milk. After blocking, the membrane was incubated with primary antibody overnight at room temperature for p-AKT (Ser473) (Cell Signaling Technologies, Danvers, MA, USA, #4060), p-PRAS40 (Thr246) (Cell Signaling Technologies, #2997), p-mTOR (Ser2448) (Cell Signaling Technologies, #5536), and p-ERK1/2 (Thr202/Tyr204) (Cell Signaling Technologies, #9101) (diluted 1:1000 in 3% milk). After incubation with primary antibody, the membrane was washed in 1X TBS-T three times for ten minutes each and incubated for 1 h in IRDye® 800CW secondary antibody (LI-COR Biosciences, Lincoln, NE, USA) at room temperature (1:10,000 dilution in 3% milk). Next, the nitrocellulose membranes were washed three times for ten minutes each in 1X TBS-T. Band density was determined using a LI-COR Odyssey imager. Rho GDI-α (Santa Cruz Biotechnology, Santa Cruz, CA, USA, sc-373724) served as the loading control.
Figure 1. Fluid shear stress activates cell death and DNA damage response in hormone receptor positive breast cancer cells. Volcano plot of significantly altered total- (a) and phospho- (b) proteins changed in MCF-7 cells following exposure to FSS. Significantly altered pathways as identified by Enrichr Hallmark pathways for total (c) and phospho- (e) proteins changed. Select total (d) and phospho- (f) proteins of interest observed to be upregulated following exposure to FSS. Comparison was to non-FSS-exposed MCF-7 cells maintained in suspension. N = 5 biological replicates; * p < 0.05 and ** p < 0.01.
Figure 2. Exposure to fluid shear stress alters the transcriptional response to endocrine treatment in hormone receptor positive breast cancer. (a) Protein expression of ER immediately after exposure to FSS. (b,c) Gene expression of ER and PR immediately after (b) or 24 h after (c) exposure to 10 dyn/cm² FSS. (d,e) Gene expression for ER, PR, and SDF1 in MCF-7 cells exposed or not exposed to FSS followed by 24 h treatment with vehicle control (DMSO), 17-β-estradiol (E2), or pre-treatment with tamoxifen or fulvestrant (ICI) prior to stimulation with E2. Error bars represent SEM; * p < 0.05, ** p < 0.01, and *** p < 0.001. NS = MCF-7 cells maintained in suspension but not exposed to FSS. S = FSS-exposed MCF-7 cells.
Figure 3. Exposure to fluid shear stress represses estrogen receptor expression in hormone receptor positive breast cancer. (a-d) Gene expression for ER, PR, SDF1, and GPR30 in HR positive breast cancer cell lines MCF-7 (a,b), ZR-75 (c), and MCF-7-Y537S (d) after exposure to FSS followed by growth in culture on TCP for 2, 7, 14 and 21 days. Normalization was to non-sheared MCF-7 cells in suspension (a,c,d) or non-sheared and adherent MCF-7 cells (b). Error bars represent SEM; * p < 0.05 and ** p < 0.01.
Figure 4. Exposure to fluid shear stress induces activation of mTOR signaling in hormone receptor positive breast cancer cells. Western blot analysis for MCF-7 (a), ZR-75 (b), MCF-7-Y537S (c), and MDA-MB-231 (d) cell lines for p-AKT, p-AKT1S1, p-ERK1/2, and p-mTOR after exposure to FSS followed by growth in culture on TCP for 1, 2, and 3 weeks. While in culture, cells were not exposed to FSS. Normalization was to loading control (RhoGDI) and non-FSS-exposed MCF-7 control cells. Non-FSS control cells were maintained in suspension but not exposed to FSS prior to seeding on TCP. Error bars represent SEM; * p < 0.05 and ** p < 0.01.
Figure 5. Exposure to fluid shear stress does not alter transcription of mTOR- and MAPK-associated genes in hormone receptor positive breast cancer. (a-f) Gene expression in HR positive breast cancer cell lines MCF-7 (a,d), ZR-75 (b,e), and MCF-7-Y537S (c,f). Normalization was to non-sheared MCF-7 cells maintained in suspension followed by seeding in culture on TCP for 7 and 14 days. While in culture, cells were not exposed to FSS. Error bars represent SEM; * p < 0.05 and ** p < 0.01.
Figure 6. Overview of FSS impact on HR+ breast cancer. (a,b) Proposed pathway activity and HR expression in WT ER (a) and mutant ER (b) breast cancer cell lines before and after exposure to FSS. Made with BioRender. (b) Proposed protein targets of FSS and the known association with HR expression, metastatic expression, and potential therapeutic targets. (c) Known associations of ER with RET and mTOR in primary and metastatic HR+ breast cancer.
| 2024-06-30T15:16:17.842Z | 2024-06-28T00:00:00.000 | {
"year": 2024,
"sha1": "b8381f7f483912fbb11e40664ae7613b210c5e63",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms25137119",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24b6e60146845c29399cebbe25ea1ea45f65f8b5",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219410395 | pes2o/s2orc | v3-fos-license | Retroperitoneal Neurofibroma and a Malignant Peripheral Nerve Sheath Tumor with Neurofibromatosis Type 1: A Report of Two Cases
Malignant peripheral nerve sheath tumors (MPNSTs) are highly aggressive sarcomas, with an incidence of 0.001% in the general population compared with 2%-5% in patients with neurofibromatosis type 1 (NF1) 1). In patients with NF1, the lifetime risk of MPNST is 8%-13%, and MPNSTs frequently cause death 2). Approximately 40% of MPNSTs occur at deep locations 2); differentiating MPNSTs from neurofibromas in deep lesions of patients with NF1 is therefore important. We report two cases of huge retroperitoneal tumors with NF1. Although the background and tumor progression of the cases were similar, the diagnoses were MPNST and neurofibroma, respectively. This report compares images of both cases and discusses the magnetic resonance imaging (MRI) findings suggesting MPNSTs.
Case 1 involved a 44-year-old man with NF1 who presented with left lower limb pain for a month. Physical examination revealed no muscle weakness or sensory disturbance. MRI showed a 5 x 7 x 11 cm lesion extending from the left L4 nerve root to the retroperitoneum (Fig. 1). T1-weighted imaging (T1WI) showed a low signal, and T2-weighted imaging (T2WI) showed heterogeneous low intensity with a high-intensity signal area involving a cyst. Gadolinium-enhanced T1WI showed an inhomogeneous enhancement pattern. Computed tomography (CT) showed no calcification. Tumor resection and L4-5 posterior fusion were performed (Fig. 2a-d). The tumor was excised in one piece under the capsule (Fig. 2e). The pain disappeared post-surgery. The tumor presented as a solid lobulated mass; an inner white area with hemorrhage and necrosis and an outer yellowish-white area formed an intranodal nodule (Fig. 2f).
Pathological findings showed the inside lesion to be an MPNST and the outside area to be a neurofibroma; thus, an MPNST arising from a neurofibroma was diagnosed (Fig. 2g-i).
Case 2 involved a 44-year-old man with NF1 who presented with right back pain for 3 months. Physical examination revealed no muscle weakness or sensory disturbance. MRI showed a 9 x 8 x 6 cm dumbbell-shaped lesion originating from the L1 nerve root extending from the intraspinal canal to the L1 vertebral body and retroperitoneum (Fig. 3a-f). The lesion displayed low intensity on T1WI, heterogeneous high intensity on T2WI, and a central faint enhancement pattern with gadolinium. CT revealed an osteolytic lesion with marginal sclerosis in the L1 vertebral body (Fig. 3g-i). Tumor resection and T11-L3 posterior and T12-L2 anterior fusion were performed (Fig. 4a-d). The tumor was excised in one piece under the capsule (Fig. 4e). Pain improved immediately post-surgery. The tumor presented as a uniform yellow-white solid mass without necrosis, diagnosed as a neurofibroma (Fig. 4f-h).
Some reports mention that MRI is useful for differentiating MPNSTs from neurofibromas. Wasa reported that two or more of four tumoral MRI features (intratumoral cystic lesion, largest dimension >10 cm, peripheral enhancement pattern, and perilesional edema-like zone) indicated MPNSTs with a sensitivity and specificity of 61% and 90%, respectively 3). In case 1, the tumor measured 11 cm, exhibited an intratumoral cyst reflecting hemorrhage and necrosis, and was indeed diagnosed as an MPNST. The lack of a peripheral enhancement pattern and an edema-like zone may be due to the MPNST being surrounded by neurofibromas. In case 2, the tumor did not exhibit any of the four features and was indeed diagnosed as a neurofibroma. Irregular tumor shape, unclear margin, and intratumoral lobulation have also been reported as findings suggesting MPNSTs 4). Of these, intratumoral lobulation was only seen in case 1.
MPNSTs surrounded by neurofibromas, as in case 1, may not manifest typical findings suggesting MPNSTs, such as a peripheral enhancement pattern, perilesional edema-like zone, irregular tumor shape, and unclear margin. MRI is helpful for differentiating MPNSTs from neurofibromas, but MPNSTs arising from neurofibromas may require special attention.
Conflicts of Interest:
The authors declare that there are no relevant conflicts of interest.
Author Contributions: Kumiko Yotsuya wrote and prepared the manuscript, and all of the authors participated in the study design. All authors read, reviewed, and approved the article.
Informed Consent: Informed consent was obtained from all participants in this study. | 2020-11-17T14:05:49.831Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "1c943f8acaafa81bb88117d745444efa361bb631",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/ssrr/4/4/4_2020-0056/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "87be1834ef7a6d9d3ac22570aa71df45e5b441f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
86587463 | pes2o/s2orc | v3-fos-license | Three-Dimensional-Printed Polyether Ether Ketone Implants for Orthopedics
Three-Dimensional-Printed Polyether Ether Ketone Implants for Orthopedics
Sir, Manufacturing of personalized implants is the desired goal in the field of Orthopedics. Three-dimensional (3D) printing technologies have the capability to fabricate patient-specific implants, devices, and instruments for the different fields of Medicine, including Orthopedics. The applications of 3D printing technologies are rapidly growing in the healthcare sector for surgical planning, manufacturing of patient-specific implants, and developing anatomical models. 1 Polyether ether ketone (PEEK) is an organic compound material now being used in 3D printing for manufacturing complex design geometries and patient-specific implants for Orthopedics. PEEK, as a material, was initially introduced in the 1980s, and it is now a top-notch organic thermoplastic polymer, which is colorless; models developed from PEEK material show suitable quality for various application areas such as medical, automotive, aerospace, and other associated fields. 2 In the orthopedic field, it has a significant impact on the manufacturing of load-bearing implants, as it has properties somewhat similar to those of human bone and also has lower wear resistance. 3 Moreover, the human body readily accepts PEEK material.
To manufacture orthopedic implants, PEEK is an advanced biomaterial and also suits catheter devices well. Until now, only subtractive manufacturing methods such as computer numerical control machines were used to manufacture customized PEEK implants. However, this technique is time-consuming, expensive, and also wastes material. Second, it is also difficult to produce the exact contours or required shape of the implant. 3D printing technologies readily meet these challenges and have various advantages compared to traditional manufacturing technologies. 4 With better technological developments, PEEK materials are now successfully used to manufacture customized orthopedic implants with the help of 3D printers. 5 These PEEK 3D-printed implants are primarily indicated and used for spine surgery, prosthetics, fixation of an osteotomy [Figure 1] and fractures [Figure 2], and reconstruction of complex calvarial and maxillofacial defects. Therefore, it is a suitable biomaterial which has columnar stiffness and is useful in reconstructive and orthopedic surgeries. 6 PEEK 3D printing technologies provide greater design freedom, less waste, and reduced weight of implants, which enhances the performance of implants and provides satisfaction to the patient. 7 It has improved the durability of 3D-printed implants, tools, and devices used in orthopedics. It is used safely and has reduced the failure rate. Orthopedic surgeons are now using PEEK material to improve the biocompatibility of implants, which are more bone-friendly. These materials are used in a wide range of implant applications and have become a new standard biomaterial. 8 PEEK materials are similar to human hard tissue and match with human body fluids. They have outstanding properties in orthopedics, such as biocompatibility, osteoconductivity, nontoxicity, and a noninflammatory nature, and hence have found a variety of applications in bone tissue engineering, restoration of periodontal defects, post teeth bleaching, and dental surgery. PEEK materials are also used as biomaterials in orthopedic surgery, viz., trauma, osteotomy fixation, joint replacement, and spinal implants. These materials create an attractive platform for developing novel bioactive materials and dentistry. 9 At high temperatures, PEEK materials have excellent chemical and mechanical properties, with a tensile strength of about 90-100 MPa and a Young's modulus of 3.6 GPa, and a useful operating temperature of 250°C. They have properties such as high stiffness, high hardness, flexibility, excellent sliding friction, excellent electrical properties, very minimal abrasion, good processability, excellent hydrolytic stability, and chemical resistance, and they do not tend to stress-crack. 10 By the application of PEEK materials, 3D-printed orthopedic implants provide several advantages [Table 1] and can easily be fabricated with greater strength. In upcoming years, these materials will have a higher impact on different fields such as engineering, medicine, dentistry, and associated areas. 11 The only drawback of these PEEK implants, at present, is their higher cost as compared to conventionally used implants made up of stainless steel or titanium. However, PEEK possesses deficient osteogenic properties, and its bio-inertness limits its fields of application. Johansson et al. have tried to limit these drawbacks by coating the surface of PEEK with nanoscaled hydroxyapatite minerals. 12
PEEK is reliable for the fabrication of patient-specific implants with complex geometry, which are difficult to make by the traditional implant manufacturing process. In Orthopedics, it is revolutionary, as one size does not fit all situations, and PEEK 3D printing technologies easily fulfill this requirement. Patient data are easily obtained from CT/MRI and converted into 3D computer-aided design data. This technology can easily print these data with a layer thickness of 0.3 mm. These PEEK 3D-printed implants are then tested to check whether they would provide a long-term result and perform satisfactorily for the patient. 6 In recent years, PEEK and its composites such as carbon fiber reinforced PEEK (CFR-PEEK) plates are increasingly being used. In a comparative study of 42 patients with proximal humeral fractures, CFR-PEEK plates were compared with titanium plates, with a mean follow-up of 30.7 and 52.7 months, respectively. The shoulder mobility, clinical, and pain scores were reported to be similar in both patient groups. 13 In a systematic review 14 of five published studies of lumbar spine fusion using PEEK rods, the authors reported no statistically significant difference in the fusion success rate, pain, and functional improvement when compared with titanium rods at an average follow-up time of 24.1 ± 11.3 months. PEEK implants for high tibial osteotomy were compared with traditional metal implants in a cohort study of 41 cases, with a minimum 2-year follow-up. 15 No significant differences were found in the patient-reported outcomes, and the complications and reoperations were also similar for the PEEK and control groups.
The main limitation of this technology is the requirement of support structures, which adds extra cost. The accuracy of the implant, which depends on printing speed and the properties of the PEEK material, is essential.
With this, in the future, surgeons will be able to manufacture 3D-printed PEEK patient-specific implants in their own clinics and hospitals, enabling creative, patient-specific innovation for their patients.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.

Table 1: Advantages of PEEK for 3D-printed orthopedic implants
- Biocompatible, nontoxic, and noninflammatory: PEEK implants are well suited for orthopedic, spinal, and trauma applications due to their biocompatible, nontoxic, and noninflammatory characteristics. This helps to explore new modifications for implant applications.
- Osteoconductive: PEEK materials are now adopted for making spinal implants, and they can be an excellent material to solve various problems in orthopedics.
- Lightweight: PEEK materials are low molecular weight polymers. These are used mainly in orthopedics in fracture and osteotomy fixation, spinal fusions, ligament reconstructions, etc. The applications of PEEK material are likely to find many more indications in the future.
- Excellent strength: PEEK materials are biocompatible materials that achieve the high strength needed to bear the load of the human body. They provide better mechanical properties as compared to other conventionally used materials such as titanium.
- Radiolucent on radiographs: PEEK materials are transparent to radiation and almost entirely invisible in X-ray photographs. Hence, they help in assessing the fracture reduction and its healing.
- Customization using 3D printing is possible: PEEK materials are printable using 3D printing technologies. Customized implants are now easier to manufacture because 3D printing is quite successful in customized production, and every patient and their problems are different.
- Compatible with CT and MRI: PEEK materials are compatible with CT and MRI technologies, and thus these implants do not interfere with these imaging techniques.
PEEK=Polyether ether ketone, CT=Computerized tomography, MRI=Magnetic resonance imaging | 2019-03-28T13:33:47.625Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "c5e2d501c2528ac7b9b23ebec44b037b5a343341",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc6415569",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c52a483abe5e102ea8ee85ef769034e355b69279",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211574327 | pes2o/s2orc | v3-fos-license | unarXive: a large scholarly data set with publications’ full-text, annotated in-text citations, and links to metadata
In recent years, scholarly data sets have been used for various purposes, such as paper recommendation, citation recommendation, citation context analysis, and citation context-based document summarization. The evaluation of approaches to such tasks and their applicability in real-world scenarios heavily depend on the used data set. However, existing scholarly data sets are limited in several regards. In this paper, we propose a new data set based on all publications from all scientific disciplines available on arXiv.org. Apart from providing the papers' plain text, in-text citations were annotated via global identifiers. Furthermore, citing and cited publications were linked to the Microsoft Academic Graph, providing access to rich metadata. Our data set consists of over one million documents and 29.2 million citation contexts. The data set, which is made freely available for research purposes, not only can enhance the future evaluation of research paper-based and citation context-based approaches, but also serve as a basis for new ways to analyze in-text citations, as we show prototypically in this article.
Introduction
A variety of tasks use scientific paper collections to help researchers in their work. For instance, research paper recommender systems have been developed (Beel et al. 2016). Related are systems that operate on a more fine-grained level within the full text, such as the textual contexts in which citations appear (i.e., citation contexts). Based on citation contexts, things like the citation function (Teufel et al. 2006a, b; Moravcsik and Murugesan 1975), the citation polarity (Ghosh et al. 2016; Abu-Jbara et al. 2013), and the citation importance (Valenzuela et al. 2015; Chakraborty and Narayanam 2016) can be determined. Furthermore, citation contexts are necessary for context-aware citation recommendation (He et al. 2010; Ebesu and Fang 2017), as well as for citation-based document summarization tasks (Chandrasekaran et al. 2019), such as citation-based automated survey generation (Mohammad et al. 2009) and automated related work section generation (Chen and Zhuge 2019).
The evaluation of approaches developed for all these tasks as well as the actual applicability and usefulness of developed systems in real-world scenarios heavily depend on the used data set. Such a data set is typically a collection of papers provided in full text, or a set of already extracted citation contexts, consisting of, for instance, 1-3 sentences each. Existing data sets, however, do not fulfill all of the following criteria (see section "Existing data sets" for more details):
1. Size. The data set can be comparatively small (below 100,000 documents), which makes it difficult to use it for training and testing machine learning approaches;
2. Cleanliness. The papers' full texts or citation contexts are often very noisy due to the conversion from PDF to plain text and due to encoding issues;
3. Global citation annotations. No links from the citations in the text to the structured representations of the cited publications across documents are provided;
4. Data set interlinkage. Data sets often do not provide identifiers of the citing and cited documents from widely used bibliographic databases, such as DBLP or the Microsoft Academic Graph (MAG);
5. Cross-domain coverage. Often, only a single scientific discipline is available for evaluating or applying an approach to a paper or citation-based task.
In this paper we propose a new scholarly data set, which we call unarXive. The data set is built for tasks based on papers' full texts, in-text citations, and metadata. It is freely available at http://doi.org/10.5281/zenodo.3385851 and the implementation for creating it at https://github.com/IllDepence/unarXive.
Table 1 gives an overview of the proposed data set. Note that throughout this article, we refer to links between publications on the document level as "references" (corresponding to entries in a "bibliography" or "references" section near the end of a document), whereas on the text level we speak of "citations" (indicated by markers within the text associated with a reference). The proposed data set consists of over one million full text documents (about 269 million sentences) and links to 2.7 million unique publications via 15.9 million unique references and 29.2 million citations. Thus, we argue that it is considerably large, fulfilling criterion (1). By using publications' LaTeX source files and developing a highly accurate transformation method that converts LaTeX to plain text, we can resolve issue (2). Besides the pure papers' content, in-text citations are annotated directly in the text via global identifiers, thereby covering aspect (3). As far as possible, (citing and cited) documents are linked to the Microsoft Academic Graph (Sinha et al. 2015) (cf. item (4)). This enables us to use the arXiv paper content in combination with the metadata in the MAG, which, as of February 2019, contains data on 213 million publications along with metadata about researchers, venues, and fields of study. Our data set also fulfills constraint (5), as all disciplines covered in arXiv are included. This enables researchers to analyze papers from several disciplines and to compare approaches using scholarly data across disciplines.
Considering the application of our data set, we argue that it not only can be used as a new large data set for evaluating paper-based and citation-based approaches with unlimited citation context lengths (since the publications' full texts are available), but can also be a basis for novel ways of paper analytics within bibliometrics and scientometrics. For instance, based on the citation contexts and the citing and cited papers' metadata in the MAG, analyses of biases in the writing and citing behavior of researchers, e.g., related to authors' affiliation (Reingewertz and Lutmar 2018) or documents' language (Liang et al. 2013; Liu et al. 2018), can be performed. Furthermore, (sophisticated) deep learning approaches, as they have also been widely used in the digital library domain recently (Ebesu and Fang 2017), require huge amounts of training data. Our data set allows us to overcome this hurdle and investigate how far deep learning approaches can lead us. Overall, we argue that with our data set we can significantly bring the state of the art of big scholarly data one step forward.
We make the following contributions in this paper:
1. We propose a large, interlinked scholarly data set with papers' full texts, annotated in-text citations, and links to rich metadata. We describe its creation process in detail and provide both the data as well as the creation process implementation to the public.
2. We manually evaluate the validity of our reference links on a sample of 300 references, thereby providing insight into our citation network's quality.
3. We calculate statistical key figures and analyze the data set with respect to its contained references and citations.
4. We compare our reference links to those in the MAG, and manually evaluate the validity of links only appearing in either of the data sets. In doing so, we identify a large number of documents where the MAG lacks coverage.
5. We analyze the likelihood with which in-text citations in our data set refer to specific parts of a cited document depending on the discipline of the citing and cited document. Such an analysis is only possible with word level precision citation marker positions annotated in full text and metadata on citing as well as cited documents. The analysis therefore can showcase the practicability of our data set.
The paper is structured as follows: After outlining related data sets in section "Existing data sets", we describe in section "Data set creation" how we created our data set. This is followed by statistics and key figures in section "Statistics and key figures". In section "Evaluation of citation data validity and coverage", we evaluate the validity and coverage of our reference links. Section "Analysis of citation flow and citation contexts" is dedicated to the analysis of the citation flow and the contexts within our data set. We conclude in section "Conclusion" with a summary and an outlook.
Existing data sets
Table 2 gives an overview of related data sets. CiteSeerX can be regarded as the most frequently used evaluation data set for citation-based tasks. For our investigation, we use the snapshot of the entire CiteSeerX data set as of October 2013, published by Huang et al. (2015). This data set consists of 1,017,457 papers, together with 10,760,318 automatically extracted citation contexts. This data set has the following drawbacks (Roy et al. 2016; Färber et al. 2018): The provided meta-information about cited publications is often not accurate. Citing and cited documents are not interlinked to other data sets. Moreover, the citation contexts can contain noise from non-ASCII characters, formulas, section titles, missed references and/or other "unrelated" references, and do not begin with a complete word.
The PubMed Central Open Access Subset is another large data set that has been used for citation-based tasks (Gipp et al. 2015; Duma et al. 2016; Galke et al. 2018). Contained publications are already processed and available in the JATS (Huh 2014) XML format. While the data set overall is comparatively clean, heterogeneous annotation of citations within the text and mixed usage of identifiers of cited documents (PubMed, MEDLINE, DOI, etc.) make it difficult to retrieve high quality citation interlinkings of documents from the data set (Gipp et al. 2015).
Beside the abovementioned, there are other collections of scientific publications. Among them are the ACL Anthology corpus (Bird et al. 2008) and Scholarly Dataset 2 (Sugiyama and Kan 2015). Note that these data sets only contain the publications themselves, typically in PDF format. Therefore, using such data sets for paper-based or citation-based approaches is troublesome, since one must preprocess the data (i.e., (1) extract the content without introducing too much noise, (2) specify global identifiers for cited papers, and (3) annotate citations with those identifiers). Furthermore, there are data sets for evaluating paper recommendation tasks, such as CiteULike or Mendeley. These, however, only provide metadata about publications or are not freely available for research purposes.
Prior to publishing the data set described in this paper, we already published a data set with annotated arXiv papers' content in the past (Färber et al. 2018). In comparison, our new data set is superior to this initial version in the following regards:
1. The new data set is considerably larger (1 M instead of 90 k documents).
2. The new data set provides a similar level of cleanliness to the old data set regarding the papers' full texts and citation contexts.
3. A new method for resolving references to consistent global identifiers has been developed. Contrary to the old method, the new method has been evaluated and performs very well (see section "Citation data validity").
4. While the old data set links documents solely to DBLP, which covers computer science papers, the new data set links documents to the Microsoft Academic Graph, which covers all scientific disciplines and which has been used frequently in the digital library domain in recent years (Mohapatra et al. 2019).
5. While the old data set is restricted to computer science, the new data set covers all domains of arXiv (see section "Statistics and key figures" and Fig. 7).
Lastly, compared to the initial publication of our new data set (Saier and Färber 2019), this journal article provides significantly more details and insights into the data set's creation process (see section "Data set creation") and its resulting characteristics (see sections "Evaluation of citation data validity and coverage" and "Analysis of citation flow and citation contexts"). Moreover, the data set has been further improved. Most notably, while in the initial version only citing papers were associated with arXiv identifiers and only cited papers had been linked to the MAG, we now provide both types of IDs for both sides. This means that for nearly all documents, MAG metadata is easily accessible, and full text is not only available for all citing papers but now also for over a quarter of the cited papers.
Data set creation
Scientific publications are usually distributed in formats targeted at human consumption (e.g., PDF) or, in cases like arXiv, also as source files for generating the aforementioned (e.g., LaTeX sources for generating PDFs). Citation-based tasks, such as context-aware citation recommendation, in contrast, require automated processing of the publications' textual contents as well as the documents' interlinking through in-text citations. The creation of a data set for such tasks therefore encompasses two main steps: extraction of plain text and resolution of references. In the following, we will describe how we approached these two steps using arXiv publications' LaTeX sources and the Microsoft Academic Graph.
[Displaced rows of Table 2: (Färber et al. 2018): 90 k docs, 1-sentence contexts, CS, full texts: yes, interlinked with DBLP. ACL-ARC (Bird et al. 2008): 11 k docs, contexts extractable*, CS/CL, full texts: yes, not interlinked. ACL-AAN (Radev et al. 2013): 18 k docs, contexts extractable*, CS/CL, full texts: yes, not interlinked.]
Used data sources
The following two resources are the basis of the data set creation process.arXiv hosts over 1.5 million documents from August 1991 onward. 7They are available not only as PDF, but (in most cases) also as L A T E X source files.The discipline most promi- nently represented is physics, followed by mathematics, with computer science seeing a continued increase in percentage of submissions ranking third (see Fig. 7).The availability of L A T E X sources makes arXiv documents particularly well suited for extracting high qual- ity plain text and accurate citation information.So much so, that it has been used to generate ground truths for the evaluation of PDF-to-text conversion tools (Bast and Korzen 2017).
Microsoft Academic Graph is a very large, automatically generated data set on 213 million publications, related entities (authors, venues, etc.), and their interconnections through 1.4 billion references. It has been widely used as a repository of all publications in academia in the fields of bibliometrics and scientometrics (Mohapatra et al. 2019). While pre-extracted citing sentences are available, these do not contain annotated citation marker positions. Full text documents are also not available. The size of the MAG makes it a good target for matching reference strings against it, especially given that arXiv spans several disciplines.
Pipeline overview
To create the data set, we start out with arXiv sources (see Fig. 1). From these we generate, per publication, a plain text file with the document's textual contents and a set of database entries reflecting the document's reference section. Associations between reference strings and in-text citation locations are preserved by placing citation markers in the text. In a second step, we then iterate through all reference strings in the database and match them against paper metadata records in the MAG. This gives us full text arXiv papers with (word level precision) citation links to MAG paper IDs. As a final step, we enrich the data with MAG IDs on the citing paper side (in addition to the already present arXiv IDs) and arXiv IDs on the cited paper side (in addition to the already present MAG IDs). This is a straightforward process, because the paper metadata in the MAG includes source URLs, meaning papers found on arXiv have an arXiv.org source URL associated with them, such that a mapping from arXiv IDs to MAG IDs can be created.
Listing 2 shows what our data set looks like. In the following, we describe the main steps of the data set creation process in more detail.
LaTeX parsing
In the following, we will describe the tools considered for parsing LaTeX, the challenges we faced in general and with regard to arXiv sources in particular, and our resulting approach.
Tools
We took several tools for a direct conversion from LaTeX to plain text or to intermediate formats into consideration and evaluated them. Table 3 gives an overview of our results. Half of the tools failed to produce any output for a large number of arXiv documents we used as test input and were therefore deemed not robust enough. GrabCite (Färber et al. 2018) is able to parse 78.5% of arXiv CS documents but integrates resolving references (see section "Resulting approach") against DBLP into the parsing process and therefore would require significant modification to fit our new system architecture. LaTeXML and Tralics are both robust and can be used as LaTeX conversion tools as is. Based on subsequent tests, we observed that LaTeXML needs on average 7.7 s (3.3 s if formula environments are heuristically removed beforehand) to parse an arXiv paper, while Tralics needs 0.09 s. Because the quality of their output seemed comparable, we chose to use Tralics.
Challenges
Apart from the general difficulty of parsing LaTeX due to its feature richness and people's free-spirited use of it, we especially note difficulty in dealing with extra packages not included in documents' sources. While Tralics, for example, is supposed to deal with natbib citations, normalization of such citations reduces the share of citation markers that cannot be matched to an entry in the document's reference section from 30 to 5% in a sample of 565,613 citations we tested.
Resulting approach
Our LaTeX parsing solution consists of three steps: flattening, parsing, and output generation. First, we flatten each arXiv document's sources to a single LaTeX file using latexpand and normalize citation commands (e.g., natbib citation commands) to prevent parsing problems later on. In the second step, we then generate an XML representation of the LaTeX document using Tralics. Lastly, we go through the generated XML structure and produce two types of output: (i) an annotated plain text file with the document's textual contents and (ii) database entries reflecting the document's reference section. For (i), we replace XML nodes that represent formulas, figures, tables, as well as intra-document references with replacement tokens and turn XML nodes originating from citation markers in the LaTeX source (i.e., \cite commands) into plain text citation annotation markers. For (ii), each entry in the document's reference section is assigned a unique identifier, its text is stored in a database, and the identifier is put into the corresponding annotation in the plain text (cf. Listing 2).
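A minimal sketch of this three-step stage, assuming `latexpand` and `tralics` are installed locally; the XML tag names (`formula`, `cit`), the citation marker format, and the output file naming are illustrative assumptions, not verified against the actual Tralics output schema:

```python
import subprocess
import uuid
from pathlib import Path
from lxml import etree

def latex_to_plain_text(main_tex: Path):
    # Step 1: flatten all \input/\include files into a single .tex file.
    flat = main_tex.with_suffix(".flat.tex")
    with flat.open("w") as out:
        subprocess.run(["latexpand", str(main_tex)], stdout=out, check=True)
    # Step 2: let Tralics produce an XML representation of the document
    # (assumed to be written next to the input with an .xml extension).
    subprocess.run(["tralics", str(flat)], check=True)
    root = etree.parse(str(flat.with_suffix(".xml"))).getroot()
    # Step 3: walk the XML, replacing non-text nodes with tokens and
    # citation nodes with annotation markers tied to reference entries.
    references = {}
    for node in root.iter():
        if node.tag == "formula":                 # assumed tag name
            node.text, node[:] = "FORMULA", []
        elif node.tag == "cit":                   # assumed tag name
            ref_id = references.setdefault(node.get("rid"), str(uuid.uuid4()))
            node.text, node[:] = "{{cite:%s}}" % ref_id, []
    return " ".join(root.itertext()), references
```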
Reference resolution
Resolving references to globally consistent identifiers (e.g., detecting that the reference strings (1), (2), and (3) in Listing 1 all reference the same document) is a challenging and still unsolved task (Nasar et al. 2018). Given that it is the most distinctive singular part of a publication, we base our reference resolution on the title of the cited work and use other pieces of information (e.g., the authors' names) only in secondary steps. In the following, we will describe the challenges we faced matching arXiv documents' reference strings against MAG paper records, and how we approached the task.
Challenges
Reference resolution can be challenging when reference strings contain only minimal amounts of information, when formulas or other special notation are used in titles, or when they refer to non-publications (e.g., Listing 1, (4)-(6)). Another problem we encountered was noise in the MAG. One such case are the MAG papers with IDs 2167727518 and 2763160969. Both are identically titled "Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC" and dated to the year 2012. But while the former is cited 17k times and cites 112 papers within the MAG, the latter neither is cited by nor cites any other papers. Taking the number of citations into account when matching references reduced the number of mismatches in this particular case from 2,918 to 0 and improved the overall quality of matches in general.
Resulting approach
Our reference resolution procedure can be broken down into two steps: title identification and matching. If contained in the reference string, title identification is performed based on an arXiv ID or DOI (where we retrieve the title from an arXiv metadata dump or via crossref.org); otherwise, we use Neural ParsCit (Prasad et al. 2018). The identified title is then matched against the normalized titles of all publications in the MAG. Resulting candidates are considered if at least one of the authors' names (as given in the MAG) is present in the reference string. If multiple candidates remain, we judge by the citation count given in the MAG; this particularly helps mitigate matches to rogue almost-duplicate entries in the MAG, which often have few to no citations, like paper 2763160969 mentioned in the previous section.
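An illustrative sketch of this matching heuristic; the `mag` lookup object, its record fields, and the `normalize` helper are hypothetical stand-ins for whatever the real implementation uses to query a local MAG dump:

```python
def resolve_reference(ref_string, title, mag, normalize):
    """Return the best-matching MAG paper ID for a reference string, or None.
    `mag` is a hypothetical lookup object over MAG paper records."""
    # 1. Candidate lookup: exact match on the normalized title.
    candidates = mag.papers_by_normalized_title(normalize(title))
    # 2. Author check: at least one MAG author name of the candidate
    #    must literally appear in the raw reference string.
    candidates = [c for c in candidates
                  if any(a.lower() in ref_string.lower() for a in c.authors)]
    if not candidates:
        return None
    # 3. Tie-break by citation count; this sidesteps rogue near-duplicate
    #    MAG entries, which tend to have few to no citations.
    return max(candidates, key=lambda c: c.citation_count).paper_id
```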
Result format
Listing 2 shows some example content from the data set. In addition to the paper plain text files and the references database, we also provide the citation contexts of all successfully resolved references extracted to a CSV file, as well as a script to create custom exports. For the provided CSV export, we set the citation context length to 3 sentences (the sentence containing the citation as well as the one before and after), as used by Tang et al. (2014) and Huang et al. (2015).
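A minimal sketch of such an export step, assuming citation markers of the form `{{cite:<uuid>}}` in the plain text; the marker syntax and the naive sentence splitter are assumptions made for illustration, the provided export script defines the actual format:

```python
import re

MARKER = re.compile(r"\{\{cite:([0-9a-f-]+)\}\}")

def citation_contexts(plain_text, window=1):
    """Yield (reference_id, context) pairs; window=1 gives the citing
    sentence plus one sentence before and one after (3 sentences)."""
    # Naive sentence split for illustration only.
    sentences = re.split(r"(?<=[.!?])\s+", plain_text)
    for i, sentence in enumerate(sentences):
        for m in MARKER.finditer(sentence):
            lo = max(0, i - window)
            hi = min(len(sentences), i + window + 1)
            yield m.group(1), " ".join(sentences[lo:hi])
```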
Statistics and key figures
In this section we present the data set and its creation process in terms of numbers. Furthermore, insight into the distribution of references and citation contexts is given.
Listing 2 Excerpts from (top to bottom) a paper's plain text, corresponding entries in the references database, entries in the MAG, and extracted citation context CSV
Creation process
We used an arXiv source dump containing all documents up until the end of 2018 (1,492,923 documents). 114,827 of these were only available in PDF format, leaving 1,378,096 sources. Our pipeline output 1,283,584 (93.1%) plain text files, 1,139,790 (82.7%) of which contained citation markers. The number of reference strings identified is 39,694,083, for which 63,633,427 citation markers were placed within the plain text files. This first part of the process took 67 h to run, unparallelized, on an 8-core Intel Core i7-7700 3.60 GHz machine with 64 GB of memory.
Of the 39,694,083 reference strings, we were able to match 16,926,159 (42.64%) to MAG paper records. For 31.32% of the reference strings we could neither find an arXiv ID or DOI, nor was Neural ParsCit able to identify a title. (To assess whether or not this large percentage of reference strings without an identified title is due to Neural ParsCit missing a lot of them, we manually checked its output for a random sample of 100 papers (4,027 reference strings). We found that 99% of the cases with no identified title actually do not contain a title, like, for example, items (1), (2) and (4) in Listing 1. These kinds of references seem to be most common in physics papers. The 1% where a title was missed were largely references to non-English titles and books. We therefore conclude that the observed numbers largely reflect the actual state of reference strings rather than problems with the approach taken.) For the remaining 26.04%, a title was identified but could not be matched to the MAG. Of the matched 16.9 million items' titles, 52.60% were identified via Neural ParsCit, 28.31% by DOI, and 19.09% by arXiv ID. Of the identified DOIs, 32.9% were found as is, while 67.1% were heuristically determined. This was possible because the DOIs of articles in journals of the American Physical Society follow predictable patterns. The matching process took 119 h, run in 10 parallel processes on a 64-core Intel Xeon Gold 6130 2.10 GHz machine with 500 GB of memory.
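As an illustration of such heuristic DOI determination: APS DOIs follow the pattern 10.1103/&lt;Journal&gt;.&lt;volume&gt;.&lt;article&gt;, so a DOI can often be constructed directly from the reference string. The regular expression below is an assumption about how such strings are typically written, not the paper's actual rule set:

```python
import re

# Matches fragments like "Phys. Rev. D 98, 030001" or
# "Phys. Rev. Lett. 116, 061102" (illustrative pattern).
APS_REF = re.compile(
    r"Phys\.?\s*Rev\.?\s*(?P<series>Lett|[A-EX])\.?\s*"
    r"(?P<vol>\d+)\s*,\s*(?P<art>\d+)"
)

def guess_aps_doi(ref_string):
    m = APS_REF.search(ref_string)
    if m is None:
        return None
    journal = "PhysRevLett" if m["series"] == "Lett" else "PhysRev" + m["series"]
    return f"10.1103/{journal}.{m['vol']}.{m['art']}"
```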
Comparing the performance of our approach using all papers (1991-2018) to using only the papers from 2018 (i.e., recent content), we note that the percentage of successfully extracted plain texts goes up from 93.1 to 95.9% (82.7 to 87.8% only counting plain text files containing citation markers) and the percentage of successfully resolved references increases from 42.64 to 59.39%. A possible explanation for the latter would be that there is more and higher quality metadata coverage (MAG, crossref.org, etc.) of more recent publications.
Resulting data set
Our data set consists of 2,746,288 cited papers, 1,043,126 citing papers, 15,954,664 references and 29,203,190 citation contexts. (References that were successfully matched to a MAG record but have no associated citation markers, due to parsing errors (cf. section "Challenges"), are not counted here.) Figure 2 shows the number of citing documents for all cited documents. There is one cited document with over 10,000 citing documents, another 8 with more than 5,000 and another 14 with more than 3,000. 1,485,074 (54.07%) of the cited documents are cited at least two times, 646,509 (23.54%) at least five times. The mean number of citing documents per cited document is 5.81 (SD 28.51). Figure 3 shows the number of citation contexts per entry in a document's reference section. 10,537,235 (66.04%) entries have only one citation context, the maximum is 278, the mean 1.83 (SD 2.00).
Because not all documents referenced by arXiv papers are hosted on arXiv itself, we additionally visualize the citation flow with respect to the MAG in Fig. 4. 95% of our citing documents are contained in the MAG. Of the cited documents, 26% are contained in arXiv and therefore included as full text, while 74% are only included as MAG IDs. On the level of references, this distribution shifts to 43/57. The high percentages of citation links contained within the data set can be explained by the fact that in physics and mathematics, which make up a large part of the data set, it is common to self-archive papers on arXiv.
Citation data validity
To evaluate the validity of our reference resolution results, we take a random sample of 300 matched reference strings and manually check for each of them whether the correct record in the MAG was identified. This is done by viewing the reference string next to the matched MAG record and verifying that the former actually refers to the latter. Given the 300 items, we observed 3 errors, giving us an accuracy estimate of 96% at the worst, as shown in Table 4. Table 5 shows the three incorrectly identified documents. In all three cases the misidentified document's title is contained in the correct document's title, and there is a large or complete author overlap between the correct and the actual match. This shows that authors sometimes title follow-up work very similarly, which leads to hard-to-distinguish cases.
Citation data coverage
For the 95% of our data set where citing as well as cited document have a MAG ID, we are able to compare our citation data directly to the MAG. The composition of reference section coverage (i.e., how many of the references are reflected in each of the data sets) of all 994,351 citing documents can be seen in Fig. 5. Of the combined 26,205,834 reference links, 9,829,797 are contained in both data sets (orange), 5,918,128 are in unarXive only (blue), and 10,457,909 are in the MAG only (green). On the document level we observe that for 401,046 documents unarXive contains more references than the MAG, and for 545,048 it is the other way around. The striking difference between reference and document level suggests that the MAG has better coverage of large reference sections. This is supported by the fact that citing papers where the MAG contains more references cite on average 34.28 documents, while the same average for citing papers where unarXive contains more references is 17.46. Investigating further, in Fig. 6 we look at the number of citing documents in terms of reference section size (x-axis) and exclusive coverage in unarXive and MAG (y-axis). As we can see (and as the almost exclusively blue area on the right hand side of Fig. 5 suggests), there is a large number of papers, citing ≤ 50 documents, where ≥ 80% of the reference section is only contained in unarXive. Put differently, there is a large portion of documents where the reference section is covered to some degree by unarXive but has close to no coverage in the MAG. The number of citing documents where the MAG contains 0 references whereas unarXive has ≥ 1 is 215,291; these have an average of 15.1 references in unarXive. (Manually looking into a sample of 100 of these documents, we find the most salient commonality to be irregularities with respect to the reference section headline. 58 of the papers (55 physics, 2 quantitative biology, 1 CS) have no reference section headline, 2 have a double reference section headline, and a further 2 have the headline directly followed by a page break. The reason for the large number of MAG documents with no references might therefore be that the PDF parser used cannot yet deal with such cases.) The number of citing documents (within the 994,351 at hand) where unarXive contains 0 references whereas the MAG has ≥ 1 is 0.
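A sketch of how such a coverage comparison can be computed, assuming each data set has been exported as a set of (citing MAG ID, cited MAG ID) pairs; this is a simplification of the actual evaluation setup:

```python
from collections import defaultdict

def compare_coverage(unarxive_links, mag_links):
    """unarxive_links, mag_links: sets of (citing_id, cited_id) pairs."""
    both = unarxive_links & mag_links
    only_ua = unarxive_links - mag_links
    only_mag = mag_links - unarxive_links
    # Per citing document: how many references are exclusive to each side.
    per_doc = defaultdict(lambda: {"only_unarxive": 0, "only_mag": 0})
    for citing, _ in only_ua:
        per_doc[citing]["only_unarxive"] += 1
    for citing, _ in only_mag:
        per_doc[citing]["only_mag"] += 1
    return len(both), len(only_ua), len(only_mag), per_doc
```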
Needless to say, additional references are only of value if they are valid. From both the citation links only found in unarXive, as well as those only found in the MAG, we therefore take a sample of 150 citing-paper/cited-paper pairs and manually verify whether the former actually references the latter. This is done by inspecting the citing paper's PDF and checking the entries in the reference section against the cited paper's MAG record. On the unarXive side, we observe 4 invalid links, all of which are cases similar to those showcased in Table 5. On the MAG side, we observe 8 invalid links. Some of them seem to originate from the same challenges as the ones we face, e.g., similarly titled publications by the same authors, leading to misidentified cited papers. Other error sources are, for instance, an invalid source for a citing paper being used and its reference section parsed (e.g., paper ID 1504647293, where one of the PDF sources is the third author's Ph.D. thesis instead of the described paper). Given that the citation links exclusive to unarXive appear to be half as noisy as those exclusive to the MAG, we argue that the 5,918,128 links only found in unarXive could be useful for citation and paper based tasks using MAG data. This would especially be the case for the field of physics, as it makes up a significant portion of our data set.
Analysis of citation flow and citation contexts
Because the documents in unarXive span multiple scientific disciplines, interdisciplinary analyses, such as the calculation of the flow of citations between disciplines, can be performed. Furthermore, the fact that documents are included as full text and citation markers within the text are linked to their respective cited documents makes varied and fine grained study of citation contexts possible. To give further insight into our data set, we therefore conduct several such analyses in the following. Note that, for interdisciplinary investigations, disciplines other than physics, mathematics, and computer science are combined into other for space and legibility reasons, as they are only represented by a small number of publications. On the citing documents' side, these span the fields of economics, electrical engineering and systems science, quantitative biology, quantitative finance, and statistics. Combined on the cited documents' side are chemistry, biology, engineering, materials science, economics, geology, psychology, medicine, business, geography, sociology, political science, philosophy, environmental science, and art.
Citation flow
Figure 7 depicts the flow of citations by discipline for all 15.9 million matched references. As one would expect, publications in each field are cited the most from within the field itself. Notably, the incoming citations in mathematics are the most varied (physics and computer science combined make up 35% of the citations). As citation contexts are useful descriptive surrogates of the documents they refer to (Elkiss et al. 2008), a composition as varied as that of mathematics in Fig. 7 raises the question of whether a distinction by discipline could be worth considering when using citation contexts as descriptions of cited documents. That is, computer scientists and physicists might refer to math papers in a different way than mathematicians do. Borders between disciplines are, however, not necessarily clear-cut, meaning that such a distinction might not be as straightforward as the color coding in Fig. 7 suggests.
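A discipline-level citation flow matrix of the kind shown in Fig. 7 can be tallied from pairs of discipline labels. The sketch below is illustrative only; the pair format and example values are assumptions, not the authors' pipeline:

from collections import Counter

# Sketch: tally a discipline-to-discipline citation flow matrix from
# (citing_discipline, cited_discipline) pairs.
def citation_flow(pairs):
    flow = Counter(pairs)                      # flow[("cs", "math")] = link count
    incoming = Counter()                       # total citations received per field
    for (_citing, cited), n in flow.items():
        incoming[cited] += n
    # share of each cited field's incoming citations per citing field
    shares = {(citing, cited): n / incoming[cited]
              for (citing, cited), n in flow.items()}
    return flow, shares

flow, shares = citation_flow([("cs", "math"), ("math", "math"), ("physics", "math")])
# here each of cs, math, and physics contributes 1/3 of math's incoming citations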
Availability of citation contexts
Another aspect that becomes relevant when using citation contexts to describe cited documents is the number of citation contexts available per cited publication. Figure 8 shows that the distribution of the number of citation contexts per cited document is similar across disciplines. In each discipline, around half of the cited documents are mentioned just once across all citing documents, 17.5% exactly twice, and so on. The tail of the distribution drops a bit more slowly for physics and mathematics. The mean values of citation contexts per cited document are 9.5 (SD 50.3) in physics, 7.0 (SD 28.8) in mathematics, 5.1 (SD 31.1) in computer science, and 3.5 (SD 11.0) for the combined other fields. This leads to two conclusions. First, it suggests that a representation relying solely on citation contexts may only be viable for a small fraction of publications. Second, the high dispersion in the number of available citation contexts shows that means might not be very informative when it comes to citation counts aggregated over specific sets of documents.

Fig. 7 Citation flow by discipline for 15.9 million references. The number of citing and cited documents per discipline are plotted on the sides
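The per-document context counts discussed above can be tallied in a few lines; this is a minimal sketch under an assumed input format (one cited-document ID per citation context), not the authors' implementation:

from collections import Counter
from statistics import mean, stdev

# Sketch of the per-cited-document context counts discussed above.
# cited_ids holds one entry per citation context: the cited document's ID.
def context_distribution(cited_ids):
    per_doc = Counter(cited_ids)          # contexts available per cited document
    counts = list(per_doc.values())       # needs >= 2 documents for stdev
    hist = Counter(counts)                # how many documents have exactly k contexts
    share_single = hist[1] / len(counts)  # ~0.5 in each discipline above
    return mean(counts), stdev(counts), share_single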
Characteristics of citation contexts
For our analysis of the contents of citation contexts, we focus on three aspects: whether or not citations are (1) integral, (2) syntactic, and (3) target section specific. These aspects were chosen because they give particular insights into the citing behavior of researchers, as explained alongside the following definition of terms.
"Integral", "syntactic" and "target section specific" citations
We first discuss the terms "integral" and "syntactic", which are both established in the existing literature. An integral citation is one where the name of the cited document's author appears within the citing sentence and has a grammatical role (Swales 1990; Hyland 1999) (e.g., "Swales [73] has argued that ..."). Similarly, a citation is syntactic if the citation marker has a grammatical role within the citing sentence (Whidby et al. 2011; Abu-Jbara and Radev 2012) (e.g., "According to [73] it is ..."). Integral citations are seen as an indication of emphasis towards the cited author (where the opposite direction would be towards the cited work) (Swales 1990; Hyland 1999). Syntactic citations are of interest when determining how a citation relates to different parts of the citing sentence (Whidby et al. 2011; Abu-Jbara and Radev 2012). Both qualities are relevant when studying the role of citations (Färber and Sampath 2019).
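Crude surface heuristics can approximate these two notions; a real classifier (e.g., the one by Lamers et al. 2018) uses proper parsing, and the keyword lists below are illustrative assumptions rather than established rules:

import re

CITE = re.compile(r"\[\d+\]")

def is_integral(sentence, cited_author_surnames):
    # integral: a cited author's name appears in the citing sentence itself
    # (surface check only; the definition also requires a grammatical role)
    return any(name in sentence for name in cited_author_surnames)

def is_syntactic(sentence):
    # syntactic: the marker plays a grammatical role, e.g. as the object of
    # a preposition ("according to [73]") or as a clause subject ("[73] shows")
    for m in CITE.finditer(sentence):
        before = sentence[:m.start()].rstrip().lower()
        after = sentence[m.end():].lstrip()
        if before.endswith((" in", " of", " by", " to", " see")):
            return True
        if re.match(r"(shows|argues|proposes|is|was|has|gives)\b", after):
            return True
    return False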
Table 6 gives a more detailed account of both terms' use in the literature. Note that Lamers et al. (2018) provide a classification algorithm for integral and non-integral citations that slightly differs from Swales' original definition depending on the interpretation of a citation marker's scope, but also gives a clear classification in an edge case where Swales' definition is unclear. Note furthermore that the two ways of distinguishing syntactic and non-syntactic citations found in the literature are not identical. This is in part because the method given by Abu-Jbara and Radev (2012) is kept rather simple. For the purposes of our analysis, we follow the definitions of Lamers et al. and Whidby et al. for "integral" and "syntactic", respectively. As a third aspect for analysis, we define "target section specific" citations as those citations where a specific section within the citation's target (i.e., the cited document) is referred to. Examples are given in Table 7. Target section specific citations are of interest for two reasons. First, in a similar fashion to integral citations, they are a particular form of citing behavior that might be used to infer characteristics of the relationship between citing author and cited document (e.g., a focus on the document rather than the authors, or in-depth engagement or familiarity with the cited document's contents). Second, when using citation contexts as descriptions of cited documents, such as in citation context-based document summarization, target section specific citations might benefit from special handling, as their contexts only describe a (sometimes very narrow) part of the cited document.
In the following we will analyze all three aspects (integral, syntactic, target section specific) with respect to the different scientific disciplines covered by our data set.
Manual analysis of citation contexts
For each of the disciplines computer science, mathematics, physics, and other, we take a random sample of 300 citation contexts and manually label them with respect to being integral, syntactic, and target section specific. The result of this analysis is shown in Table 8. Each of the assigned labels is most prevalent in mathematics papers, which is furthermore true for the co-occurrence of the labels integral and syntactic. Mathematics is also the only discipline in which citations are more likely to be syntactic than not. The difference in frequency of integral and syntactic citations might be due to variations in writing culture between the different disciplines. We think that the comparatively high frequency of target section specific citations in mathematics could be due to the fact that, in mathematics, intermediate results like corollaries and lemmata are immediately reusable in related work. We further investigate target section specific citations in the following section.
Automated analysis of target section specific citations
Sentences including a target section specific citation often follow distinct and predictable patterns. For example, a capitalized noun (e.g., "Corollary", "Lemma", "Theorem") is followed by a number and a preposition (e.g., "in", "of"), and then followed by the citation marker (e.g., "Corollary 3 in [73]"). Another pattern is the citation marker followed by a capitalized noun and a number (e.g., "[73] Lemma 7"). This lexical regularity allows us to identify target section specific citations in an automated fashion. Specifically, we search the entirety of our 29 M citation contexts for word sequences that match either of the part-of-speech tag patterns < > and < >. Doing this, we find 365,299 matches (1.25% of all contexts). This is less than the 2.31% one would expect from the manual analysis and suggests that the above two patterns are not exhaustive. Nevertheless, we can use the identified contexts to further analyze their distribution across disciplines.
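The exact part-of-speech tag patterns are not reproduced above, so the sketch below reconstructs the matching purely from the two example forms as surface regexes; the section-noun list is an illustrative assumption and certainly not exhaustive:

import re

# Surface regexes rebuilt from the two example forms described above:
# "Corollary 3 in [73]" and "[73] Lemma 7". The noun list is an assumption.
SECTION = r"(?:Theorem|Lemma|Corollary|Proposition|Definition|Section|Table|Figure|Eq\.?)"
PAT_NOUN_FIRST = re.compile(rf"{SECTION}\s+\d+(?:\.\d+)*\s+(?:in|of|from)\s+\[\d+\]")
PAT_MARKER_FIRST = re.compile(rf"\[\d+\]\s*,?\s*{SECTION}\s+\d+(?:\.\d+)*")

def is_target_section_specific(context):
    return bool(PAT_NOUN_FIRST.search(context) or PAT_MARKER_FIRST.search(context))

assert is_target_section_specific("This follows from Corollary 3 in [73].")
assert is_target_section_specific("see [73] Lemma 7 for the proof")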
Table 9 shows the results of this subsequent analysis. Because our data set does not contain equal numbers of citations from each discipline (cf. Fig. 7), we normalize the absolute numbers of pattern occurrences. Rows are then sorted by normalized ratio in decreasing order. Looking at the citing documents (those in which the pattern was found), we see a similar picture to the one in our manual analysis (shown in Table 8): namely, mathematics with the highest count of target section specific citations by far, and a similar count for computer science and physics, where the latter is slightly lower. Counting by the cited documents (the document in which a specific part is being referenced), the differences decrease slightly, but mathematics still occurs most frequently by far.
An interesting pattern emerges when taking an even more detailed look and breaking these citations down by the disciplines on both sides of the citation relation. We can then observe the following.
- The most determining factor for target section specific citations seems to be that a mathematician is writing the document.† As with integral and syntactic citations, the writing culture of the field might play a role here.
- The second most determining factor then appears to be that a mathematical paper is being cited.‡ Mathematics documents might lend themselves to being cited in this way.
- The third most determining factor is an intra-discipline citation (i.e., the citing document is from the same discipline as the cited one). This supports the interpretation of target section specific citations as an indicator of familiarity with the cited document.

Math → Math pairs, where all three of the above factors come into play simultaneously, consequently show the highest occurrence of target section specific citations by far.
To summarize the results of our analysis of citation flow and citation contexts, we note the following points.
- Publications in mathematics are cited from "outside the field" (e.g., by computer science or physics papers) to a comparatively high degree. Distinguishing citation contexts referring to mathematics publications by discipline might therefore be beneficial in certain applications (e.g., citation-based automated survey generation).
- For most publications, only one or a few citation contexts are available.
- Integral citations appear to be about twice as common in computer science as they are in physics, and again twice as common in mathematics as they are in computer science. Going with Swales' interpretation of the phenomenon, this would mean the focus put on authors is higher in mathematics than in computer science, and higher in computer science than in physics.
- In mathematics, syntactic citations seem to be more common than non-syntactic citations. This is beneficial for reference scope identification (Abu-Jbara and Radev 2012) and any sophisticated approaches based on citation contexts (like context-aware citation recommendation), as citation markers in syntactic citations stand in a grammatical relation to their surrounding words.
- We define target section specific citations as those citations where a specific section within the cited document is referred to. This type of citation is most common in mathematics (comparing mathematics, computer science, and physics). Through a subsequent analysis of 365k target section specific citations, we find that they are more common in intra-discipline citations than in inter-discipline citations. This supports our assumption that they are an indicator of familiarity with the cited document.
The five criteria outlined in the beginning, namely size, cleanliness, global citation annotations, data set interlinkage, and cross-domain coverage, ultimately made it possible to reach the above results. Without sufficient size, our results would be less informative. If our documents contained too much noise, the quality of reference resolution would have deteriorated. Global citation annotations, especially because of their word-level precision, make fine-grained lexical analyses of citation contexts, like the one in the section "Automated analysis of target section specific citations", possible. Without interlinking our data set to the MAG, available metadata would have been scarce. While we mainly focused on the scientific discipline information in the MAG, there is much more (authors, venues, etc.) that can be worked with in future analyses. Lastly, had our data set covered only a single scientific discipline, an analysis of citation flow, as well as interdisciplinary comparisons of citation context criteria, would not have been possible.
Conclusion
Evaluating and applying approaches to research paper-based and citation-based tasks typically requires large, high-quality, citation-annotated, interlinked data sets. In this paper, we proposed a new data set with over one million papers' full texts, 29.2 million annotated citations, and 29.2 million extracted citation contexts (of three sentences each), ready to be used by researchers and practitioners. We provide the data set, as well as the implementation for creating it from arXiv source files, online for further usage.
For the future, we plan to use the data set for a variety of tasks. Among others, we will develop a citation recommendation system based on all arXiv papers. Furthermore, we plan to perform additional analyses of citations and citation contexts across scientific disciplines, and to use the differences in citing behavior for enhanced citation recommendation.
Fig. 1
Fig. 1 Schematic representation of the data set generation process
Fig. 5
Fig. 5 Composition of reference section coverage for all citing documents (cut off at 100 cited documents)
Table 1
Overview of the proposed data set
Table 2
Overview of existing data sets
Table 3
Comparison of tools for parsing LaTeX. See https://www-sop.inria.fr/marelle/tralics/.
Table 4
Confidence intervals for a sample size of 300 with 297 positive results as given by the Wilson score interval and the Jeffreys interval (Brown et al.
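For reference, the Wilson score interval named in this caption can be computed directly; a minimal sketch for the quoted sample (297 positives out of 300, at an approximately 95% level):

from math import sqrt

# Wilson score interval for k successes in n trials (z = 1.96 for ~95%).
def wilson(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

print(wilson(297, 300))  # approximately (0.971, 0.997)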
Table 9
Occurrence of target section specific citations by discipline (pairs annotated as follows: †: Mathematics citing document; ‡: Mathematics cited document; X → X: citing and cited document are from the same discipline) | 2020-03-02T15:13:23.930Z | 2020-03-02T00:00:00.000 | {
"year": 2020,
"sha1": "e91658908a322ea02b70f342562c2d74792165fa",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11192-020-03382-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e91658908a322ea02b70f342562c2d74792165fa",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
267972021 | pes2o/s2orc | v3-fos-license | Development and Texture of Pulmonary Trunk in Children
The structure and dimensions of the pulmonary trunk were studied on 55 cadavers of children (from birth to 12 years of age). It has been established that the development of the pulmonary trunk is continuous and asynchronous: rapid development occurs at an early age; at first the diameter of the pulmonary trunk increases, and later it grows in length. The wall of the pulmonary trunk thickens on average 2.1 times, but this does not depend on the increase in its size. Pillow-shaped formations protruding from the wall of the pulmonary trunk contribute to faster blood flow into the lungs.
We studied the texture and dimensions of the pulmonary trunk in children (from birth to 12 years of life). The study was carried out on 55 children's cadavers. The following results were obtained: the pulmonary trunk develops continuously and asynchronously; rapid growth occurs at an early age, the diameter of the pulmonary trunk increases from the first days of life, and the trunk subsequently grows in length. The wall of the pulmonary trunk becomes thicker 2.1 times on average, but this growth does not depend on the increase in its size. Pillow-shaped formations protruding from the wall of the pulmonary trunk contribute to more rapid blood flow into the lungs.
In the early postnatal period, the internal organs of humans and mammals develop and take shape rapidly [1,2,3,4]. This genetically determined process unfolds under the specific conditions of the developing organism. Although many scientific studies have been carried out in this area, the structure of the pulmonary trunk and its branch arteries has not been fully studied from the cardiopulmonological standpoint, so we decided to study the development and structure of the pulmonary trunk.
The tasks of our research were as follows: 1. To determine the morphometric indicators of the pulmonary trunk and its left and right arterial branches in the period from birth to 12 years.
2. To study the dynamics of the structure of these arteries (up to 12 years of age).
The research was conducted on the cadavers of children (up to 12 years old) who died as a result of accidents.
The materials taken from the cadavers were divided into 5 age groups (in accordance with the recommendations of the VII International Scientific Conference on Morphology, Physiology and Biochemistry and PARD): the newborn period, infancy, early childhood, and the first and second periods of childhood. In each group, the topography, location, course, length, diameter, and wall thickness of at least 10-12 vessels were analyzed. The diameter and length of the pulmonary trunk were measured using a vernier caliper.
For microscopic examination, the pulmonary trunk and the tissue of its arteries were fixed in 10% formalin. The tissues were then dehydrated in alcohol and embedded in celloidin. Transverse and longitudinal sections 4-6 μm thick were stained with hematoxylin-eosin and by the van Gieson and Mallory methods.
The thickness of the vessel wall layers was measured using a MOV×15 ocular micrometer. The obtained morphometric results were analyzed using statistical methods. The pulmonary trunk undergoes a number of morphological and morphometric changes from birth to 12 years of age: over this time, the trunk lengthens by an average of 2.4 mm, and its diameter increases on average 3-fold. A sharp change in the size of the pulmonary trunk corresponds mainly to the period of early childhood (table). According to the results of macroscopic examinations, the length of the trunk increases by 1.90 mm and the diameter by 1.70 mm.
According to the results of the macroscopic examination, the length of the pulmonary trunk increases by 1.90 mm and the diameter by 1.10 mm during infancy and childhood.
By early childhood (1-3 years of age), the length of the pulmonary trunk had increased by 61% and its diameter by 72% (table). This is the most drastic change in the pulmonary trunk; no such enlargement was observed in any other period.

Table. Dynamics of the dimensions of the pulmonary trunk in childhood (M±m, mm): age period, length, diameter.
In the first period of childhood (4-7 years of age), the length of the pulmonary trunk increased by an average of 4 mm (19.5%) and its diameter by 2.5 mm (20%); that is, both dimensions changed synchronously.
In the second period of childhood (8-12 years of age), the length of the pulmonary trunk increases by 7.5% and the diameter by 17%.
Across childhood as a whole (up to 12 years), the wall thickness increases simultaneously with the increase in the length and diameter of the pulmonary trunk. In infants it is equal to 10 μm, and at the age of 12 it is 12 μm; that is, the thickness increases by 22.1%. Comparing the wall thickness of the pulmonary trunk across the age groups, it changes significantly mainly in infancy (20.5%) and in the second period of childhood (40.5%).
The wall of the pulmonary trunk consists of inner, middle, and outer layers, and their thickness changes between birth and 12 years of age. In infancy, the middle layer accounted for 50.2% of the wall thickness on average, the outer layer for 48.2%, and the inner layer for 1.5%. The inner layer thickens on average by 29 microns by the age of 12 and reaches 443 microns; accordingly, its relative share rises from 1.5% to 3%. The middle layer thickens from 430 μm to 980 μm, that is, by 230%, and the outer layer thickens almost 2.1 times by the age of 12 (from 410 μm to 850 μm).
Comparing the thickening of these layers over the whole of childhood, we can say that the layers of the wall thicken asynchronously and continuously.
In infants, the inner layer of the pulmonary trunk consists of flat endothelial cells, and the subendothelial layer is underdeveloped and thin. During infancy, the subendothelial layer thickens slightly, and fine fiber-like structures can be seen in it.
In early childhood, the internal elastic membrane between the inner and middle layers of the pulmonary trunk wall is underdeveloped, the elastic fibers in the subendothelium are thin, and fibroblasts and other connective tissue cells are oriented in the transverse and longitudinal directions. During the second period of childhood, the subendothelial layer thickens, and its fibers run along the wall of the pulmonary trunk.
The muscle tissue is composed of smooth muscle cells and is mainly found between the circular elastic bundles. Collagen fibers are few and are oriented parallel to the elastic fibers.
During infancy, no significant changes are observed in the wall of the pulmonary trunk. During early childhood, elastic fibers and the membranes they form appear in the middle layer. The number of fibers reaches 35-40; they are condensed at the border of the outer layer, and the outer elastic membrane is not yet formed. The collagen fibers also thicken.
"year": 2023,
"sha1": "cda57891c922ff4f8844d1e4fb1ba98bf13ef707",
"oa_license": "CCBYNC",
"oa_url": "https://zienjournals.com/index.php/tjms/article/download/4135/3431",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e6b797f5e0925a1086cfabf15b8d5eb31045a173",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247842624 | pes2o/s2orc | v3-fos-license | Advancing gender equity in the academy
Implementation science offers a rigorous set of tools to help mitigate long-standing and worsening gender disparities in academia.
The power of implementation science
Implementation science offers an intentional and untapped approach to mitigate the gender inequities that are rife in universities, as it aims to reduce the gaps between what we know and what we do (2). It also offers insights into continuous learning about what works, under what conditions, how adaptations are made, and how those adaptations may affect outcomes. Applied to the challenge of gender disparities in academia, implementation science offers (i) research designs that are rigorous and maximize the knowledge gained, (ii) frameworks to guide intervention deployment with an eye toward context, and (iii) evaluation of equity-focused outcomes.
Evaluating implementation and intervention outcomes
While individual-level interventions (e.g., leadership training) for gender equity have strong evidence, there has been little study of structural approaches (e.g., flexible work arrangements; see Fig. 1) (3,4). To assess these approaches, rigorous implementation studies, designed along a "hybrid" continuum, can emphasize different implementation and intervention outcomes depending on how well established the evidence is for a given intervention (5). For example, a university attempting to evaluate the success of a well-established leadership program for women faculty may tilt its evaluation toward how well it implemented the program rather than the number of women who move into leadership roles. When developing new approaches where less data exists, universities should consider using a hybrid design that emphasizes intervention outcomes more heavily.
Understanding the interplay of interventions and context
Implementation science frameworks provide guidance on how interventions interact with the environment in which they are deployed: both the environment within the organization and the broader environmental context in which it operates (6). Attention to context is necessary for a new intervention to have maximum impact (7). Different types of schools (e.g., liberal arts and professional schools) within a university and across universities (e.g., public versus private) may experience varying challenges given their contexts. The social environment in which the university operates (e.g., urban or rural) can also influence intervention implementation. These multilevel forces must be at the forefront during program implementation and are well suited to guide the use of both individual- and structural-level interventions.
Focus on equity
Blending implementation science and equity-oriented approaches will enhance our learning about the efforts that are successful, why they are beneficial, and who gains from them. Particular attention must be given to the intersections of women's roles, especially for women of color, given that the experiences of people at multiple marginalized intersections typically reflect social-structural systems of power, privilege, and inequity (8). For example, an important implementation outcome is who is exposed to an intervention (i.e., reach). It will be important to focus on equitable reach throughout any change effort, considering who benefits from an intervention and who does not (9). Some approaches may benefit faculty from one disciplinary area or rank, or only those who have the time and flexibility for individual-focused programs added to their ongoing responsibilities.
Recommendations for universities and funders
In light of the urgent need for action and the untapped potential of implementation science to support work on gender equity in the academy, we recommend the following.
Build the science for structural interventions
Structural and organizational problems demand structural and organizational solutions. The relative dearth of studies on structural interventions is notable, and rigorous research is needed to build the evidence base. Effective structural approaches can benefit all faculty, regardless of their characteristics.
Evaluate interventions using an implementation science lens
Evaluations using methods and frameworks from implementation science are paramount to understand the success of interventions to improve gender equity. Implementation science study designs will provide efficient approaches to both test and evaluate interventions and their implementation within different contexts.
Challenges to achieving this vision
The path forward is not without challenges. Measuring the success of these approaches requires exploration of how women with multiple intersecting identities, such as women of color, LGBTQ women, and nonbinary individuals, benefit. There are challenges with anonymity in collecting these types of data, especially when just a few people identify with a given category or when environments lack the safety for faculty to answer openly. Given the new NIH UNITE initiative to end structural racism in STEM, and given that many universities are engaging in cluster hires for faculty from minority communities, an opportunity exists to gather these data to ensure that efforts to reduce inequities do not inadvertently disadvantage some women.
We must also consider the potential for unintended consequences that can occur when new initiatives are implemented. Evaluations must be open to exploring experiences around such programs. Unintended effects can include feeling singled out (e.g., the perception that women need leadership training but men do not), and taking time away for individualized development could detract from research or teaching efforts.
Raising diversity's voice in academia
In the wake of COVID-19, we are at risk of losing much of the gender equity progress gained in academia. We feel this keenly in our daily conversations with women trainees, faculty colleagues, and institutional leaders. Using approaches from implementation science can accelerate progress by providing an evaluation framework, understanding the interplay between interventions and context, and staying laser-focused on equity to lead to programs that are effective, sustainable, and equitable. New structural approaches need to be developed, refined, and tested to restart progress toward gender equity. When implementing new programs, universities must not settle for the status quo. A change in mindset is needed. Rather than putting the weight of change on the shoulders of individual faculty, universities must remove structural barriers to the advancement of faculty. This will improve outcomes and career satisfaction for all faculty, not just women. By applying rigorous methods from implementation science, we can further develop the knowledge base on gender equity practices and strengthen the availability of these interventions to universities. Our community must take the best of what we know to be fair and equitable and put it into practice to support all women in the university workforce. This will also increase the diversity of voices in universities, which will only serve to enhance the entire enterprise.
Evidence-based interventions that support the careers of women faculty have already been developed and shown to work (3,4). Most of these interventions target individuals and include intensive and advanced leadership training. Yet, few universities offer these programs. The time is ripe to implement these effective interventions broadly and systematically, recognizing that individually focused interventions may be necessary but not sufficient for lasting and meaningful change.
"year": 2022,
"sha1": "fed39ea1bbe4935b596e55a7e55884c2995ad532",
"oa_license": "CCBYNC",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.abq0430?download=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75b1258f3739494ae2014ccdaf78eff48da756cd",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222157664 | pes2o/s2orc | v3-fos-license | Biological Functions of Prokaryotic Amyloids in Interspecies Interactions: Facts and Assumptions
Amyloids are fibrillar protein aggregates with an ordered spatial structure called "cross-β". While some amyloids are associated with the development of approximately 50 incurable diseases of humans and animals, others perform various crucial physiological functions. The greatest diversity of amyloid functions has been identified within prokaryotic species, where amyloids, being components of the biofilm matrix, function as adhesins, regulate the activity of toxins and virulence factors, and compose extracellular protein layers. The amyloid state is widely used by different pathogenic bacterial species in their interactions with eukaryotic organisms. These amyloids, while functional for the bacteria that produce them, are associated with various bacterial infections in humans and animals. Thus, the repertoire of disease-associated amyloids includes not only dozens of pathological amyloids of mammalian origin but also numerous microbial amyloids. Although the ability of symbiotic microorganisms to produce amyloids has recently been demonstrated, the functional roles of prokaryotic amyloids in host–symbiont interactions, as well as in interspecies interactions within prokaryotic communities, remain poorly studied. Here, we summarize the current findings in the field of prokaryotic amyloids, classify the different interspecies interactions in which these amyloids are involved, and hypothesize about their real occurrence in nature as well as their roles in pathogenesis and symbiosis.
Introduction
The term "amyloid" dates back to the 19th century. In 1838, Matthias Schleiden introduced "amyloid" (from the Latin "amylum", starch) to describe a starch material in plant cells [1]. In 1854, Rudolf Virchow first used "amyloid" to characterize cerebral inclusions that stained blue in a reaction with iodine [2]. Based on the iodine test reaction, Virchow hypothesized a polysaccharide nature for the pathological inclusions in so-called "waxy" human organs that had undergone the irreversible changes called amyloidosis [1]. Five years later, in 1859, August Kekulé and Nikolaus Friedreich showed that the inclusions in "waxy" spleen were enriched in nitrogen and were proteinaceous rather than starchy [3].
Currently, the term "amyloid" refers to highly ordered protein aggregates formed by unbranched fibrils in which the protein monomers are stacked via intermolecular β-sheets [4] composed of β-strands running perpendicular to the fibril axis and connected via hydrogen bonds [5]. The spatial organization of amyloid fibrils determines the "cross-β" diffraction pattern, characterized by two scattering diffraction signals: ~4.7 Å, corresponding to the interstrand distance, and ~10 Å, corresponding to the distance between β-sheets.

Table 1 footnotes. * CR, Congo red; ThT, Thioflavin T; ThS, Thioflavin S; C-DAG, Curli-dependent amyloid generator; CD, Circular dichroism; FTIR, Fourier-transform infrared spectroscopy; XDR, X-ray diffraction; NMR, Nuclear magnetic resonance; SDS, Sodium dodecyl sulfate; SEM, Scanning electron microscopy; TEM, Transmission electron microscopy. ** Type of proven or hypothetical interspecies interactions: Type I, host-pathogen interactions; Type II, interactions between different microbial species in communities; Type III, host-symbiont interactions; N/a, not applicable. *** Hypothetical interaction based on the structural protein function. **** This protein also possesses infectious prion properties [60].
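For orientation, the two cross-β spacings quoted above map to concrete scattering angles via Bragg's law (nλ = 2d sin θ); the sketch below assumes Cu Kα radiation (λ = 1.5406 Å), which is an assumption for illustration, as the text does not specify a wavelength:

from math import asin, degrees

# Bragg's law, n*lambda = 2*d*sin(theta), links the cross-beta spacings
# to scattering angles (n = 1 assumed).
wavelength = 1.5406
for d in (4.7, 10.0):  # inter-strand and inter-sheet spacings, Angstrom
    two_theta = 2 * degrees(asin(wavelength / (2 * d)))
    print(f"d = {d:4.1f} A -> 2*theta = {two_theta:.1f} deg")
# d = 4.7 A gives ~18.9 deg; d = 10.0 A gives ~8.8 deg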
Amyloids of Biofilms and Their Involvement in Host-Pathogen Interactions and Interspecies Interactions within Prokaryotic Communities
Most of the identified functional amyloids of prokaryotes are biofilm components. A biofilm is a community of microorganisms encapsulated in hydrated extracellular polymeric substances (EPS) [73]. EPS account for almost 90% of the dry weight of a biofilm and include polysaccharides, eDNA, lipids, and proteins [73]. The biofilm proteins include extracellular enzymes, which carry out degradation and remodeling of the EPS, and structural proteins, which provide stability and integrity to the biofilm [74]. The stability of amyloid fibrils, originating from their spatial structure, makes them ideal structural proteins of the biofilm EPS. Thus, amyloids stabilize biofilms, serve as scaffolding proteins, and play a role in surface and intercellular adhesion [75]. At the same time, biofilm formation is linked with the development of 65% of all bacterial infections and 80% of chronic bacterial infections [76], such as periodontitis, chronic rhinosinusitis, chronic otitis media, chronic urinary tract infections, and cystic fibrosis pneumonia [77]. Biofilm formation creates a local microenvironment (such as anaerobic conditions or zones with lowered pH) that protects the microbial community from antibiotic treatment, host defenses, and environmental stresses [78] and contributes to the formation of a so-called "persister" microbial sub-population of dormant, multi-drug-resistant cells [79]. Thus, amyloids identified within pathogenic bacteria as part and parcel of the biofilm matrix can act as virulence and pathogenesis factors [80].
The curli are the main structural proteins of the EPS of Escherichia coli biofilms [29,31,81], adhering to both biotic and abiotic surfaces [82][83][84]. In 2002, amyloid properties were demonstrated for the E. coli curlin CsgA [29] and, in 2007, for the Salmonella enterica curlin AgfA [31]. Curli amyloid formation involves the type VIII secretion system and is controlled by the expression of two operons, csgABC and csgDEFG (curli-specific genes), in E. coli [29]. CsgA is the main structural protein, while CsgB nucleates CsgA polymerization on the cell surface [85]. CsgC, encoded by the third gene of the csgABC operon, is a periplasmic chaperone that prevents premature CsgA polymerization [86]. The lipoprotein CsgG forms a pore in the outer membrane of bacterial cells and mediates the transport of the curli subunits to the cell surface [87]. The CsgE and CsgF proteins facilitate CsgA and CsgB transport through the CsgG pore [88]. CsgE interacts directly with the pore and the secreted proteins and acts as a secretion adaptor [88]. The precise function of CsgF remains unclear, but it is required for the normal functioning of the CsgB nucleator [88,89].
Although curli fimbriae were initially characterized in clinical isolates, the precise role of amyloid formation by these proteins in infection remained unclear [90,91]. Indeed, curli operons are present not only in the genomes of pathogenic strains of Proteobacteria but are also widespread within non-pathogenic strains [92]; curli homologs have also been found within the Firmicutes, Thermodesulfobacteria, and Bacteroidetes phyla [92], including Porphyromonas gingivalis [93].
The curli fimbriae apparently take part in bacterial adhesion to host cells [94], interact with host proteins [95,96], and trigger the host immune response [97] during an infection. Curli-producing E. coli and Salmonella spp. strains are highly adhesive to a variety of cell lines. Thus, curli-producing K-12 E. coli has demonstrated a higher level of adherence to human uroepithelial cells in comparison to curli-deficient strains [94]. Similarly, higher levels of curli production in S. typhimurium SR-11 are linked to adherence to the murine small intestinal epithelium [98]. Nevertheless, the ∆csgA strain of enteroaggregative E. coli (EAEC) has not shown any decrease in adherence to mammalian cells, suggesting that the E. coli system of adhesion to host cells includes not only the curli fimbriae but a broad repertoire of molecular factors [99]. Moreover, curli expression levels are significantly lowered in enterohemorrhagic E. coli [100,101] and invasive Salmonella spp. strains [102].
The curli interact with host proteins including fibronectin, laminin, and plasminogen [90,103,104]. They also interact with Toll-like receptors, which leads to activation of the innate immune system [105,106]. Conversely, the curli can protect bacterial cells from immune reactions by sequestering antimicrobial peptides [107] and inhibiting the classical pathway of complement cascade activation [108].
The Gram-negative bacterium Pseudomonas aeruginosa is a cause of nosocomial and chronic infections associated with biofilm formation, for example, cystic fibrosis pneumonia [109].
The biofilm matrix of Pseudomonas species includes amyloid fibrils formed by the Fap proteins [33]. Amyloid fibril formation in Pseudomonas is controlled by the fapABCDEF operon, which is evolutionarily distant from the curli system of E. coli [33]. Unlike the curli system, the fap genes are unique to Proteobacteria species [110]. FapC is the main structural component of the amyloid fibrils, whereas FapB, similar to CsgB of the curli system, acts as a nucleator of fibril polymerization [34]. Transport of the FapB and FapC subunits to the cell surface is facilitated by the FapF protein, which forms trimeric pores in the outer membrane of bacteria [111].
Fap amyloid fibrils increase the biofilm hydrophobicity, facilitate mechanical stiffness [112], and reversibly bind quorum sensing molecules, supporting their role as a reservoir for signal molecules that can modulate the reaction of the microbial community to turbulent environmental conditions [35]. Similar to curli, Fap proteins contribute to bacterial adhesion to a substrate. Thus, Pseudomonas strains overexpressing the fap operon have a highly adhesive phenotype and an enhanced ability to form biofilms [33,34]. However, overexpression of the fap operon notably changes the complete proteomic landscape, so a direct connection between Fap amyloidogenesis and the altered phenotype cannot be assumed [113]. The role of Fap proteins in Pseudomonas virulence has been demonstrated using a P. aeruginosa mutant strain with a fapC deletion: strains with the fapC deletion had lowered virulence toward Caenorhabditis elegans [114]. In murine models of acute and chronic infections, fap operon transcription in P. aeruginosa was also significantly elevated [115].
The Gram-positive bacterium Bacillus subtilis forms biofilms on the surface of solid agar plates and floating biofilms, or pellicles, at the air-liquid interface [116]. The TasA protein, the main component of the Bacillus biofilm EPS [117], can form amyloids both in vitro and in vivo [45][46][47]. While B. subtilis is a soil-dwelling non-pathogenic bacterium, Bacillus cereus is a soil bacterium responsible for the development of food-borne disease; however, the role of biofilm formation and TasA amyloid formation in the development of a particular disease is unclear. At the same time, TasA amyloids of Bacillus apparently contribute to interspecies interactions in complex biofilm communities, as TasA amyloid fibrils adhere to Streptococcus mutans exopolysaccharides during the initial steps of multispecies biofilm formation [118].
Biofilms are the main form of existence for Streptococcus mutans, a Gram-positive bacterium involved in the formation of dental plaques and cavities [119,120]. Among the proteins of the S. mutans biofilm EPS, amyloid formation has been demonstrated for adhesin P1, WapA, and the Smu_63c protein [56,57]. Adhesin P1 and the WapA protein are substrates of sortase, an enzyme that cleaves the C-terminal signal motif of proteins and attaches them to the cell wall through a transpeptidase reaction [121]. As a result of the cleavage of adhesin P1 and WapA, the amyloid-forming fragments C123 and AgA, respectively, are generated [57]. Smu_63c is a secreted protein that forms amyloids under acidic conditions. These amyloids act as negative regulators of genetic competence and biofilm cell density [57]. Deletion of any single gene encoding an amyloid-forming protein was shown not to affect the ability of S. mutans to form biofilms, whereas double (lacking adhesin P1 and WapA) or triple deletions lead to decreased biofilm formation [57]. Mutants lacking the adhesin P1 gene have lowered virulence in murine cavity models, but the precise role of adhesin P1 amyloidogenesis in virulence is still unclear [122]. Similar to P. aeruginosa, the Staphylococcus species S. aureus and S. epidermidis are leading causes of nosocomial infections [123]. At the same time, S. aureus as well as S. epidermidis can act not only as pathogens but also as part of the normal skin microbiome. Staphylococcus biofilm formation promotes adhesion and substrate colonization, including colonization of the tissues of multicellular hosts, and contributes to protection against antibiotic agents and elements of the immune system [124]. Thus, the biochemical content of Staphylococcus biofilms is a target of extensive research. The extracellular polymeric substances of staphylococcal biofilms include a variety of amyloid proteins, but their role in host-pathogen interactions has not yet been elucidated.
Sbp and Aap are amyloid-forming proteins of Staphylococcus epidermidis [54,55]. Sbp is a small (18 kDa) extracellular protein that forms the biofilm scaffold [125]. The amyloid properties of Sbp have been demonstrated in vitro and in E. coli cells [55]. Aap is a multidomain protein associated with the bacterial cell wall. Aap includes an N-terminal region of tandem A-repeats, an L-type lectin domain, a region of tandem B-repeats, a proline/glycine-enriched domain, and a C-terminal sortase recognition motif [126]. The ability to form amyloids was demonstrated in vitro for the B-repeats domain. Amyloid formation by the B-repeats domain of Aap is Zn2+-dependent and requires metal ions for assembly. Peptides identified as the B-repeats and lectin domains of the Aap protein were also present in detergent-resistant aggregates from S. epidermidis biofilms [54]. These data are consistent with research suggesting that the Aap protein takes part in biofilm formation in a processed form lacking the N-terminal domain [127,128]. Colocalization of Sbp and Aap in biofilms has been demonstrated [125], whereas a physical interaction in vitro has not [55].
A variety of identified amyloid-forming proteins compose the extracellular biofilm matrix of Staphylococcus aureus. In 2012, phenol-soluble modulins (PSMs) were identified as a part of the fibrils in the biofilm matrix of S. aureus; PSMs also form amyloid fibrils in vitro [48]. In the amyloid state, PSMs stabilize biofilms [48], whereas monomeric PSMs facilitate biofilm detachment [129]. Extracellular DNA (eDNA) is required for PSM polymerization, so eDNA can act as a nucleator in amyloid formation [130]. Amyloid properties have also been demonstrated for the N-terminal leader peptide of the ArgD propeptide (N-ArgD). N-ArgD is a naturally occurring cleavage product of ArgD, appearing during AIP (autoinducing peptide) maturation [50], and was identified as a part of the fibrils composing the biofilm matrix of S. aureus. The SuhB protein of S. aureus forms amyloids upon overexpression in E. coli cells [49]. The precise function of SuhB remains unknown, but the suhB mutant strain is impaired in biofilm formation [131]. Another S. aureus protein that can form amyloids extracellularly is Bap (biofilm-associated protein) [132]. Bap is a multidomain protein anchored to the bacterial cell wall. The N-terminal domain of Bap is cleaved as a result of Bap processing [133]. The cleaved fragment forms amyloid fibrils in the extracellular space under acidic conditions and low Ca2+ concentration; an increase in Ca2+ concentration causes the N-terminal domain of Bap to acquire a stable globular conformation [51]. Thus, the N-terminal domain can act not only as a scaffold protein of the biofilm but also as a sensor [75]. Local acidosis, a decrease in pH, appears in vivo during staphylococcal infection due to glucose utilization by these microorganisms and is accompanied by the host's inflammatory response [132]. Among S. aureus strains, the bap gene has been identified in bovine mastitis isolates [133] but not in human clinical isolates. Deletion of the bap gene leads to a lowered capacity to adhere to bovine epithelial cells, and the cell titer of the S. aureus ∆bap strain is also significantly lower at 10 days post-infection [51]. Notably, Esp, the Bap ortholog of Enterococcus faecalis, a commensal bacterium capable of inducing nosocomial infection, forms amyloids, supporting the idea of the prevalence of amyloid formation by Bap-like proteins in the biofilm matrix [53].
Pathogenic bacteria can also adhere to host tissues in a biofilm-independent way. In particular, Mycobacterium tuberculosis possesses adhesive structures called pili. MTP (Mycobacterium tuberculosis pili) are structurally similar to E. coli curli and are able to form amyloid fibrils [69]. The mtp gene has been identified only within pathogenic strains of M. tuberculosis, supporting a key role of MTP in mycobacterial virulence [134]. MTP bind laminin in vitro, while the ∆mtp strain is unable to bind it [69]. Moreover, the mutants show a lowered ability to adhere to and invade macrophages and alveolar epithelial cells [135].
Overall, amyloids are widespread structural components of prokaryotic biofilms. Interestingly, not only bacteria but also archaea can contain amyloids in their EPS: in 2014, the extracellular matrix of Haloferax volcanii biofilms was demonstrated to bind the ThT and CR dyes with the specific fluorescence [70]. In bacterial biofilms, the amyloids form a scaffold and provide stiffness and integrity. Amyloids may also contribute to intercellular and surface adhesion, which makes them one of the key virulence factors of pathogenic bacteria. The crucial role of biofilm amyloids in adhesion is apparently widespread across various prokaryotes, allowing us to suppose that numerous still unknown biofilm-associated amyloids underlie the pathogenesis and development of infectious diseases. Considering that the number of human pathogenic bacterial species alone is about 1500 [136] and that 65% of them form biofilms in disease-associated processes [55], the real number of prokaryotic amyloids involved in pathogenesis in humans and animals could reach hundreds or even thousands. The interactions between bacteria in microbial communities represent another type of interspecies interaction in which bacterial biofilm amyloids are involved, by providing cell adhesion to heterogeneous exopolysaccharides, and here too the number of yet unidentified prokaryotic amyloids could be remarkably high.
Amyloids of the Outer Membrane Proteins and Their Probable Roles in Host-Pathogen and Host-Symbiont Interactions
Fibril formation exhibiting several properties of amyloids has been demonstrated for the outer membrane proteins of Proteobacteria and associated with their virulence. In particular, the full-length OmpA of E. coli and its N-terminal domain form ThT-binding fibrils in vitro [37]. The role of the OmpA protein in the pathogenicity of E. coli and other bacteria has been thoroughly studied. OmpA is believed to contribute to bacterial adhesion and invasion, as well as to antimicrobial peptide resistance [137]. Meningitic E. coli strains deficient in ompA are less virulent and invasive in chick embryo and rat models [138]. In uropathogenic E. coli, OmpA contributes to the persistence of infection: although the ∆ompA strain adheres to and invades the bladder epithelium, the number of E. coli colonies formed is lower in comparison to the wild-type strain. Moreover, ompA expression is increased between 16 and 20 h after infection [139]. OmpA is also notably overexpressed during biofilm formation [140]. The facts listed above allow us to suggest that OmpA amyloids could be involved in biofilm formation and associated with the virulence of E. coli.
OmpC, another amyloid-forming outer membrane protein of E. coli [38], forms fibrillar aggregates in vitro that are resistant to proteinase K treatment. OmpC fibrils stained with ThT demonstrate a specific peak of fluorescence emission, and OmpC fibrils stained with Congo red show green birefringence under polarized light [38]. OmpC has been detected in the brains of murine models, pointing to its possible role in neurodegeneration and the formation of a spongiform encephalopathy [38]. Similar to the OmpA protein, OmpC takes part in the adhesion and invasion of pathogenic strains of E. coli. In avian pathogenic E. coli strains, deletion of ompC led to a drop in adherence and in the ability to invade murine brain microvascular endothelial cells. The ompC deletion also led to decreased colonization and invasion capacity in duckling and murine models [141].
The OmpP2-like protein of Mannheimia haemolytica, an outer membrane porin of Pasteurellaceae [142], forms extracellular fibrils in vivo [39]. After incubation, purified OmpP2-like protein forms polymeric aggregates that bind CR. The OmpP2-like protein apparently forms part of the biofilm matrix, and its role in the adherence of M. haemolytica to host cells was demonstrated using adenocarcinomic human alveolar basal epithelial cells [39].
Outer membrane proteins of symbiotic bacteria can also form amyloid fibrils. RopA and RopB are outer membrane proteins of the plant symbiotic bacterium Rhizobium leguminosarum that are predicted to form pores in the bacterial outer membrane [143,144]. The levels of production of the RopA and RopB proteins rise at the initial steps of nodulation [145]. These data suggest that the RopA and RopB proteins are required at the early stages of plant-bacteria symbiosis. In vitro, RopA and RopB form fibrillar aggregates exhibiting typical physicochemical properties of amyloids, including green birefringence in polarized light upon CR staining, binding of ThT, and resistance to treatment with proteases and ionic detergents. In vivo, RopA and RopB form extracellular amyloid fibrils after prolonged incubation of R. leguminosarum cells on culture media [40]. Moreover, the amount and size of the aggregates formed by the RopA protein after induction of the nodulation process in free-living culture are increased after flavonoid treatment [40]. Based on these observations, RopA and RopB hypothetically act as adhesins and represent a part of the EPS of biofilms formed by R. leguminosarum on different surfaces, including plant roots, thus participating in the colonization of plants by bacterial cells. Considering the increased amount of RopA amyloids after flavonoid stimulation, a more specific role for amyloids of this protein at the initial stages of plant-microbe symbiosis can also be proposed.
The proteins discussed in this section can potentially act as outer membrane porins possessing a β-barrel channel, facilitating molecular transport through the outer membrane, and can perform additional functions such as membrane stabilization and intercellular adhesion [146]. The mechanisms of in vivo amyloid formation by these proteins are not fully understood and could be realized either through a β-barrel-to-amyloid transition or via alternative folding pathways yielding either β-barrels or ordered amyloid aggregates [147]. The amyloids formed by the outer membrane porins of Proteobacteria apparently contribute to both host-pathogen and host-symbiont interactions, thus drawing a line of similarity between the virulence mechanisms of symbiotic and pathogenic bacteria.
Amyloids of Bacterial Toxins and Their Contribution to Pathogenesis
The amyloid state is used by prokaryotes to regulate the activity of several toxins by inactivating or storing them. Microcin E492 (Mcc) is a bacteriocin of Klebsiella pneumoniae. In its soluble form, microcin E492 forms pores in the membranes of Enterobacter species; pore formation leads to a drop in membrane potential and, as a result, cell death [148]. K. pneumoniae produces the active, soluble form of microcin E492 during the exponential growth phase; at the stationary growth phase, the toxin becomes inactivated in the amyloid form [41,149]. Listeriolysin O (LLO) of Listeria monocytogenes is another bacterial toxin that is inactivated in the amyloid state [64]. L. monocytogenes is an intracellular human pathogen. Upon invading a cell, L. monocytogenes ends up inside a phagolysosome, and its release into the cytoplasm is driven by listeriolysin O activity. LLO is active under the acidic conditions of the phagolysosome and forms pores in its membrane, which leads to the release of L. monocytogenes [150]. The increase in pH in the cytoplasm promotes the polymerization of LLO into amyloid fibrils and the inactivation of the toxin [64,151].
Amyloid formation can result not only in inhibiting but also in activating a toxin's function. Harpins are proteins of Gram-negative plant-pathogenic bacteria. Harpins are transported via the type III secretion system to the extracellular space, where these proteins of Xanthomonas axonopodis, Pseudomonas syringae, and Erwinia amylovora form amyloid fibrils [43]. The precise function of harpins and their amyloid fibrils remains unclear. Harpin secretion triggers a hypersensitive response in plants [152,153]. The hypersensitive response is a protective mechanism preventing the spread of pathogens across plant tissues through rapid cell death in a localized region [154]. How harpins cause the hypersensitive response is unknown, but there is evidence that harpins can form pores in cell membranes [155] and promote their depolarization [156].
Thus, amyloid formation may orchestrate the activity of prokaryotic toxins in two ways: activating them, as in the case of the amyloid harpins, or inactivating them, as with microcin E492 and listeriolysin O. This contributes both to the host-pathogen interactions of bacteria with multicellular hosts and to antagonistic interactions within prokaryotic communities.
Amyloids of Extracellular Protein Layers
Amyloid formation within the protein layer surrounding the cell is widespread across prokaryotes. In particular, amyloids can form additional extracellular layers that change cell surface properties. Actinobacteria species have a complex life cycle that includes the formation of hyphae and spores. Growth of hyphae and the transition to the next stage of the life cycle require the formation of additional protein layers modulating cell surface properties, including an increase in hydrophobicity [157]. Chaplins are proteins of Streptomyces coelicolor that form amyloids on the surface of spores and aerial hyphae [158]. Secretion of chaplins onto the surface of the developing hyphae increases their hydrophobicity and lowers the water surface tension [65,158]. Rodlin RdlA, another amyloid-forming protein of S. coelicolor, is secreted at later stages of hyphae development [67]. Together with chaplins, rodlins form the protein coat of spores, increasing their stiffness and hydrophobicity [157].
In contrast to rodlins and chaplins, the amyloid-forming bioemulsifier BE-AM1 from the Gram-positive bacterium Solibacillus silvestris lowers the hydrophobicity of the cell surface. BE-AM1 production also facilitates intercellular adhesion and biofilm formation [63].
Amyloids are also a part of the protein sheaths of archaea: the main component of the sheaths of the methanogenic thermophilic archaeon Methanosaeta thermophila is the amyloidogenic protein MspA [71]. Archaeal sheaths are protein envelopes that encapsulate several cells; cell division leads to the formation of long chains of cells under the sheath [159]. The sheaths protect archaea against protists and regulate cell turgor pressure [159]. The amyloid properties of MspA increase the stability of the sheaths of M. thermophila in an extreme environment [71].
Amyloid Formation by Cytoplasmic Prokaryotic Proteins
The proteins listed above are secreted and form amyloids extracellularly. However, over the last few years the list of prokaryotic amyloids has been expanded to include bacterial cytoplasmic proteins. CarD is an RNA polymerase-binding transcription factor of Mycobacterium tuberculosis that stabilizes the transcription initiation complex [160]. CarD forms fibrils that bind thioflavin T (ThT) with fluorescence enhancement in vitro, and CarD overexpression in E. coli cells leads to thioflavin S (ThS)-binding aggregates [68]. HelD is a helicase of Bacillus subtilis that also interacts with RNA polymerase. HelD forms amyloid fibrils in vitro and forms ThS-binding aggregates when expressed in E. coli cells [59]. The ability of HelD and CarD to form amyloids in vivo under native conditions, and the potential physiological role of these processes, require further investigation. Amyloid formation by HelD and CarD may represent a mechanism of functional protein inactivation. As both proteins interact with RNA polymerase, their polymerization in response to environmental changes could alter the transcriptome and phenotype of the cells.
The transcriptional regulator Rho of Clostridium botulinum (Cb-Rho) stands out as exceptional, as its prion-like domain is able to form not only amyloids but also a prion, a self-propagating protein aggregate [60]. The prion-like domain of Cb-Rho was demonstrated to switch conformation to the amyloid state in E. coli cells without overexpression and to propagate for more than 100 generations. Amyloid formation by Cb-Rho decreases its activity and changes genome expression. There are also data on the ability of several amyloid-forming archaeal domains to act as prions, which suggests that prion formation is widespread among prokaryotes [161].
Thus, a role of the amyloids of the extracellular protein layers and the cytoplasm of prokaryotes in interspecies interactions cannot be excluded, especially given the abundance of interactions between different prokaryotic species within microbial communities.
Amyloids of Prokaryotes and Interspecies Interactions: A Tip of the Iceberg
The data discussed in the previous sections and summarized in Table 1 indicate that more than 30 amyloid-forming proteins of prokaryotes have been identified to date, and most of them contribute in some way to interspecies interactions. Such interactions, mediated by prokaryotic amyloids, can be divided into three types: Type I, host-pathogen interactions, in which most of the identified prokaryotic amyloids are involved; Type II, antagonistic and synergistic interspecies interactions within microbial communities; and Type III, host-symbiont interactions (Figure 1 and Table 1). While the involvement of prokaryotic amyloids in host-pathogen interactions has been scrutinized, the range of biological roles of amyloids in the two latter types of interactions remains poorly studied and represents an intriguing question to be addressed in the future. The involvement of prokaryotic amyloids in Type I (host-pathogen) interactions is related to the function of these amyloids or their structural proteins as virulence factors or toxins. Most functional amyloids produced by bacterial pathogens of animals can act as virulence factors [80], predominantly as adhesins and structural proteins of the biofilm matrix, as biofilm formation is required for the colonization of the multicellular host's tissues during infection. Moreover, bacterial amyloids can trigger the immune response in host-pathogen interactions and protect the microorganisms from the host immune response [80]. Toxins represent the second subtype of functional prokaryotic amyloids involved in host-pathogen interactions. In this case, the amyloid state of a toxin can be either active (harpins) [43] or inactive (listeriolysin O [151] and microcin E492 [41]). Accumulation of bacterial toxins in the inactive amyloid state represents the storage function of amyloids (formation of dormant aggregates for further use). This function has been described not only in prokaryotes but also in eukaryotes, particularly in animals (hormones and amyloid bodies) and plants (seed storage proteins) [162-164]. The existence of other functional virulence-associated amyloids mediating host-pathogen interactions cannot be excluded. For example, the evolutionarily conserved domain of the mucin-degrading metalloprotease YghJ, which is involved in the virulence of enterotoxigenic E. coli [165], forms amyloids in vitro and when secreted to the cell surface of E. coli in the C-DAG system [166]. At the same time, the full-length YghJ forms detergent-resistant aggregates in vivo [167].
Amyloid-forming bacterial toxins such as microcin E492 (see Section 4) participate not only in Type I pathogen-host interactions but also in Type II antagonistic interspecies interactions within prokaryotic communities. Even though these communities seem to represent a promising source in which to search for novel amyloids due to the formation of biofilms, little is known about the biological roles of prokaryotic amyloids in such interspecies systems, because the amyloid formation of secreted proteins is mostly studied in model single-species biofilms. Nevertheless, other examples of amyloids involved in interspecies interactions within prokaryotic communities have recently been revealed. In particular, amyloid fibrils of TasA of Bacillus subtilis bind exopolysaccharides of Streptococcus mutans at the initial steps of biofilm formation, thus representing an example of amyloid involvement in synergistic Type II interactions [118]. Several indirect pieces of evidence also suggest the involvement of amyloid formation in Type II interactions. Cross-seeding between bacterial amyloid-forming proteins, the process in which amyloids of one protein cause the polymerization of another [168], can contribute to the development of multispecies biofilms. It has been shown that curli from Escherichia coli, Salmonella typhimurium LT2, and Citrobacter koseri are able to cross-seed each other in vitro [169]. Cross-seeding of curli has also been observed in vivo in two-species biofilms of E. coli and S. typhimurium [169]. The ability of non-homologous bacterial amyloid-forming proteins to cross-seed requires further investigation and may in the future highlight probable interactions between amyloids of different prokaryotic species.
Microbial communities may also trigger the development of amyloid-associated neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases, in their multicellular hosts (humans and animals). They thus demonstrate an unusual mix of Type I and II interactions, though the molecular mechanisms underlying these effects remain poorly understood [170]. The influence of gut microbiota amyloid formation on the development of neurodegeneration was first proposed in 2016, when Chen et al. demonstrated that exposing aged rats to curli-producing bacteria increases α-synuclein deposition in neurons of both the brain and the gut, a feature of neurodegenerative disorders [171]. Cross-seeding is a potential mechanism by which gut microbiota can trigger the amyloidogenesis of human proteins. Cross-seeding between bacterial and human amyloids has been demonstrated in vitro: curli fibrils cross-seed the fibrillation of amyloid-β [172], and Fap amyloids cross-seed α-synuclein aggregation [173]. Whether prokaryotic amyloids cross-seed human amyloids under physiological conditions in vivo remains an important subject for investigation. Nevertheless, indirect mechanisms of metabolic triggering of human amyloid-associated diseases by gut microbiota, without any involvement of cross-seeding, cannot be excluded [174-176].
The role of amyloids in non-pathogenic host interactions has long been less investigated. The recent demonstration of the amyloid properties of the RopA and RopB proteins of the symbiotic root nodule bacterium Rhizobium leguminosarum bv. viciae [40] suggests that bacterial amyloids can be involved not only in the interactions between pathogenic bacteria and multicellular hosts but also in symbiotic (Type III) interactions with them. Notably, the mechanisms of virulence, i.e., the ability of a microorganism to successfully infect and colonize a host, are shared between symbiotic bacteria belonging to the order Rhizobiales and pathogenic microorganisms [177]. In both cases, adhesion and colonization of the host tissue surface by bacteria are the key steps in the establishment of the interspecies interaction. In the interaction between pathogenic bacteria and mammalian tissues, bacterial amyloids act as virulence factors promoting adhesion [178]. The role of amyloids in the adhesion of Escherichia and Bacillus species to plant leaves and roots has also been shown [179,180]. Rhizobial attachment to plant roots is mediated by glucomannans, which bind plant lectins under acidic conditions, and also requires the synthesis of lipopolysaccharides and cellulose. Notably, these molecules are components of the extracellular polymeric substances of biofilms [181,182], among which prokaryotic amyloids are widespread, at least in pathogenic species.
Quorum sensing is another mechanism that controls the virulence of both pathogenic and symbiotic bacteria [177]. Quorum sensing is a process of gene expression regulation in response to changes in a microorganism's population density. It is mediated by small signal molecules called autoinducers [183] and, in both pathogenic and symbiotic bacteria, regulates the transition from the free-living form to the form associated with the multicellular host, modulates adhesion to substrates, and controls biofilm formation [177]. The Fap protein of species of the genus Pseudomonas, which includes pathogens of plants and animals, forms amyloid fibrils that transiently bind autoinducers [177]. Thus, prokaryotic amyloids can act as a reservoir of signal molecules and modulate the response of the microbial community to fluctuations in conditions [35].
At the later stages of chronic infection, both rhizobia and plant pathogens need to avoid or take control of the plant defense response. Rhizobia possess both specific systems [184,185] and general ones, found in both rhizobia and phytopathogens, to overcome the plant immune response, such as the Type III and Type IV secretion systems that deliver effector molecules into the plant cell [186]. The Type III secretion system of Rhizobium is associated with the regulation of host specificity [187]. In phytopathogens, protein secretion via the Type III secretion system promotes resistance to plant immunity and modulates the physiological condition of plant cells to allow the chronic infection to persist [188]. Effector molecules of the Type III secretion system not only inhibit the plant defense response but can also trigger the hypersensitive response. This group of effector molecules includes harpins, which are secreted proteins of Xanthomonas that form amyloid fibrils and elicit the hypersensitive response [43]. Thus, the molecular systems providing the virulence of symbiotic and pathogenic bacteria exhibit significant similarity, and the amyloids of symbiotic prokaryotes identified to date probably function as adhesins. Therefore, the real number of amyloids of symbiotic prokaryotes is likely to be significantly higher and may include biofilm scaffold proteins, adhesins, and host immune response modulators. In particular, it has been predicted that proteins bearing potentially amyloidogenic regions are widespread within the order Rhizobiales, which includes both pathogenic and symbiotic species. These proteins are involved in the transport of siderophores and lipopolysaccharides or act as adhesins or flagellum assembly components, and they contain domains typical of virulence factors [189].
Conclusions
Prokaryotic amyloids identified to date contribute to various interspecies interactions, including Type I interactions between pathogenic bacteria and multicellular hosts, Type II interactions between different microbial species in communities, and Type III host-symbiont interactions (the data are summarized in Table 1 and Figure 1). The impact of microbial amyloids on pathogenesis seems to be significantly underestimated, both from the point of view of amyloid biodiversity (numerous novel bacterial amyloids may potentially be found among biofilm-forming pathogenic bacterial species) and because of the unusual mechanisms of their action (as in the recently described case of the triggering of human amyloid diseases by gut microbiota). Even though very few amyloids of symbiotic bacteria have been identified to date, they most likely represent only "the tip of the iceberg", considering the similarity between the molecular systems underlying host-pathogen and host-symbiont interactions, including the virulence factors to which most prokaryotic amyloids belong. Finally, microbial communities may also be considered as reservoirs of prokaryotic amyloids involved in both pathogenic and symbiotic interspecies interactions. | 2020-10-06T13:36:15.584Z | 2020-09-30T00:00:00.000 | {
"year": 2020,
"sha1": "1a29c0e9e85dd2289c64f90ce44cfb3c2f9c07a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/19/7240/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad3069ca974e641df4f96020882796a699363b83",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
263646298 | pes2o/s2orc | v3-fos-license | Computational “Accompaniment” of the Introduction of New Mathematical Concepts
: The computational capabilities of computer tools expand the student's opportunities for search and exploration. Conducting computational experiments in the classroom is no longer an organizational problem. This, however, raises the "black box" problem, in which the student perceives the computational module as a magician's box and loses conceptual control over the computational process. This article analyses the use of various computer tools, both existing and specially created for "key" computational experiments, that aim at revealing the essential aspects of the introduced concepts using specific examples. This article deals with a number of topics of algebra and calculus that are transitional from school to university, and it shows how computational experiments in the form of a "transparent" box can be used.
Introduction
In the methodology of teaching mathematics in the Soviet Union in the 1950s, 1960s, 1970s, and partly in the 1980s, the formation of mathematical concepts relied heavily on the operational activities of students. This corresponded to the activity approach well developed by Soviet psychologists. The psychological theory of activity was created in Russian psychology through the works of L. S. Vygotsky, S. L. Rubinshtein, A. N. Leontiev, A. R. Luria, A. V. Zaporozhets, P. Ya. Galperin, and many others. The most complete theory of activity is presented in the works of A. N. Leontiev, in particular in his last book Activity. Consciousness. Personality [1]. At the level of mathematics teaching methodology, this was manifested by studying algebra as a separate subject, in which much attention was paid to algebraic calculations.
The psychological basis of this approach to teaching mathematics was studied in detail by I. S. Shapiro and described in the work From Algorithms to Judgments [2]. In this work, I. S. Shapiro discusses the operator-logical form of knowledge, which is well consistent with the methodology of teaching school algebra as a subject that studies the transformation of algebraic expressions from one form to another.
The main idea of the book is the "convolution of algorithms", which he considers as a form of generalization and as a mechanism for "running ahead" in solving complex problems. Let us give an example from this work [2] (p. 231): "Let us describe in general terms one of the experiments. It took less than three minutes for a math-savvy student A to solve the problem.
Simplify: (2cos³α − cosα) / (2tg(π/4 − α) · sin²(π/4 + α)). We asked the student to tell the way of thinking. Student A wrote on the fly: 2cos³α − cosα = cosα(2cos²α − 1) = cosα · cos2α. Experimenter: - You reasoned in such detail? A: - No, I immediately saw that it turns out cos2α, etc." Approximately the same way of "thinking aloud" was observed when the problem was solved by other students gifted in mathematics. The presence of a folded system of inferences ensured the simultaneous and quick consideration of several actions and the choice of a way to solve the task. Of the eight ninth-graders gifted in mathematics who participated in the experiment, six solved the problem orally, and two with a minimum number of records of intermediate equalities.
"Running ahead" suggests that the actions to transform trigonometric expressions not only pass into the internal plane and turn into thought processes, but also act as objects that the student operates on, building a plan for solving the task.These processes are integrally considered in the APOS theory, the main ideas of which will be outlined below.
The emergence of powerful mathematical tools for performing symbolic calculations, such as Maxima, Mathematica, Maple, Sage, and MathPartner, has significantly reduced the value of manual calculations, on which the technique of moving from algorithms to judgments through algorithm convolution was based. Computer programs perform calculations faster and without errors. Moreover, programs such as UMS (Universal Mathematical Solver) are specifically oriented towards solving school problems in algebra and present not only the answer, but also the chain of transformations that the teacher requires.
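To illustrate, here is a minimal sketch of such a symbolic calculation in Python using the SymPy library (SymPy serves here only as a freely available stand-in for the systems listed above; the expression is the one from Shapiro's experiment, and the exact printed form may vary between versions):

from sympy import symbols, cos, sin, tan, pi, simplify

alpha = symbols('alpha')
# the expression from the experiment; simplify() is expected to reduce it,
# with the factor cos(2*alpha) that the student "saw" cancelling out
expr = (2*cos(alpha)**3 - cos(alpha)) / (2*tan(pi/4 - alpha)*sin(pi/4 + alpha)**2)
print(simplify(expr))

Such a call returns the answer in a fraction of a second, which is precisely what devalues the manual skill while leaving open the question of conceptual control.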
Under these conditions, the following questions arise.
• To what extent should the performance of calculations by hand be preserved in the teaching of mathematics at school?
• How can the mathematical culture of students be preserved and developed without relying on traditional operational activities?
While the first question is difficult to answer, we will try to answer the second question constructively by analyzing various examples of computational schemes related to the formation of mathematical concepts.
Activity-Interiorization-Encapsulation
Shapiro's idea of the convolution of algorithms fits well with the idea of the internalization of external actions. The convolution of algorithms can be considered as one of the manifestations of the psychological mechanism of internalization, that is, the transfer of actions with objects of the external environment to the internal, mental, plane. Another important psychological phenomenon associated with interiorization is "encapsulation", which plays an important role in the APOS (Actions, Processes, Objects and organizing them in Schemas) theory [3]: "... to describe how actions become interiorized into processes and then encapsulated as mental objects, which take their place in more sophisticated cognitive schemas..." [4].
This statement can be explained as follows: the performance of an action with objects of the external environment (actions) transfers them through the process of internalization into mental processes (processes), which in turn are folded (encapsulated) into objects (objects), on which mental activity (schemas) is built.
At the same time, while Shapiro associates convolution exclusively with calculations performed by hand, Dubinsky considers the possibility of replacing calculations by hand with digital symbolic calculations.
Interestingly, the concept of "encapsulation" also exists in a different sense, as one of the main components of object-oriented programming.
John D. Cook believes that the use of the same term in programming and psychology is not accidental; he calls encapsulation in programming "logical", and the phenomenon of encapsulation as a mechanism of thinking "psychological encapsulation" [5]: "A piece of software is said to be encapsulated if someone can use it without knowing its inner workings. The software is a sort of black box. It has a well-defined interface to the outside world. 'You give me input like this and I'll produce output like that. Never mind how I do it. You don't need to know.'
I think software development focuses too much on logical encapsulation. Code is logically encapsulated if, in theory, there is no logical necessity to look inside the black box.
... Maybe there's nothing wrong with the code, but you don't trust it. In that case, the code is logically encapsulated but not psychologically encapsulated. That lack of trust negates the psychological benefits of encapsulation... A failure of logical encapsulation is objective and may easily be fixed. A loss of confidence may be much harder to repair".
Thus, if an object is psychologically encapsulated, then the student is fluent in it and uses it to produce complex judgments. At the same time, a logically encapsulated object can exist in a program as a "black box" that one can work with, but it is not an element of human mental activity. The challenge for methodologists is to make logical and psychological encapsulation become parts of a single whole. One of the ways is to de-encapsulate the object represented by logical encapsulation, so that by working with it in an "expanded" form, through internalization, one could achieve psychological encapsulation. It should be noted that formally assimilated definitions of mathematical concepts can also be classified as logically encapsulated objects. With the formal assimilation of mathematical knowledge, as the mathematician A. Ya. Khinchin wrote [6], the student cannot use the knowledge, provide examples, or solve problems, although he/she can correctly pronounce the formulations of definitions and theorems: "Those who have taken out of school only external, formal expressions of mathematical methods, without having mastered their substantial essence, when they meet a real problem, will, of course, be deprived of the opportunity to see which of these methods can be applied to its solution. He will not be able, as we say, to formulate a practical problem mathematically; to a large extent, he will be helpless in solving this problem, since he has not developed the habit of really comprehending the formal operations performed, as a result of which neither the interests of the practical task facing him, nor even the mathematical content of the emerging problems will be able to guide him when choosing these operations" [6] (pp. 21-27) (translated by the authors of the article).
Next, we will consider examples of the introduction of mathematical concepts that demonstrate how computational processes can be used to de-encapsulate concepts given by verbal definitions, and introduce new concepts based on the analysis of the computational scheme.
Positional Numeral Systems and Information Compression Algorithms
In grade school, students begin to add numbers in the unary numeral system, where the value of a number is determined as the cardinality of a set of sticks or matches. The addition algorithm in this system is extremely simple: the students need to put together the two piles corresponding to the terms. Later, at university, they will return to the unary numeral system when they study the theory of algorithms and build Turing machines. Then, the task of constructing an algorithm can be described, for example, as III + II → IIIII. Further, in grade school, students are introduced to the decimal numeral system (and in computer science classes in high school also to the binary numeral system), and they study addition algorithms in these numeral systems. Between the introduction of the concept of a number through the unary number system and the further study of algorithms for numbers in the decimal system, a logical gap arises: why was it necessary to introduce a positional numeral system, if it is much more difficult to add numbers in it than in the unary one? To answer the question, the following calculations can be made. Let us calculate how much ink we need to spend on writing numbers in different numeral systems. We will assume that one drop is needed to write a one, and three drops to write a zero. Then, the number "ten" in the unary system will be written as IIIIIIIIII and will require ten drops, while in decimal it will be written as 10 and will require 1 + 3 = 4 drops. If we need to write the number "one hundred" in different systems, then in the unary system ninety more sticks will need to be added to the number ten, which will require ninety drops of ink, whereas only three additional drops will be required to go from "ten" to "one hundred" in decimal. To go from "one hundred" to "one thousand" in the unary numeral system, nine hundred drops are required, whereas in decimal only another three drops.
Thus, the purpose of the transition to the decimal number system is to compress information. To give students an even better idea of what compression means, we can offer to calculate the length of the string that represents the number "one thousand" in different number systems. If one centimeter is allocated to one digit, then to write the number "one thousand" in the unary numeral system we will need a line ten meters long, and to write the number "one million" a line a thousand times longer, that is, ten kilometers long, while in the decimal system "one million" will be seven centimeters long.
Thus, simple calculations show that it is reasonable to introduce decimal notation in order to encode information more compactly; information compression is a concept that is important for computer science. From this point of view, the transition to a new base by division can be considered as an information compression algorithm. The actions that are performed in this case are carried out in elementary school: arrange the sticks (matches) into piles of ten each, then do the same with these piles, arranging the piles into groups of ten each, etc. Formally, this algorithm can be described as follows, where the mod and div operations should be considered as the operations with piles described above (in this interpretation, they are carried out by one combined operation connecting mod and div), and output as fixing the next "digit":

Computational algorithm
while (N ≠ 0)
output(N mod 10); N := N div 10
end while
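A minimal Python sketch of this compression algorithm (the function name to_base is ours, introduced only for illustration):

def to_base(n, b):
    # repeatedly "arrange the piles": mod extracts the next digit,
    # div compresses the remaining quantity by a factor of b
    digits = []
    while n != 0:
        digits.append(n % b)
        n = n // b
    return list(reversed(digits)) or [0]

print(len(to_base(10**6, 10)))  # 7 digits: seven centimeters instead of ten kilometers
print(len(to_base(10**6, 2)))   # 20 digits: longer than decimal, but still logarithmic in the number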
P-Adic Numbers and the Algorithm "Division with Remainder"
An amazing example of how the essence of a mathematical concept can be expressed through calculations is p-adic numbers. Here is a standard definition from the mathematical literature, which even mathematically gifted students cannot immediately understand. Definition 1. An integer p-adic number for a given prime p is an infinite sequence a = {a_1, a_2, ...} of residues a_n modulo p^n satisfying the condition a_n ≡ a_{n+1} (mod p^n) [7].
Consider the computational process of the algorithm for converting a natural number N into a positional system with base p ≥ 2:

Computational algorithm
k := 0;
while (N ≠ 0)
a_k := N mod p; N := N div p; k := k + 1
end while

Let us apply this algorithm to a "forbidden" (negative) number, for example to N = −1. Let us take as an example the smallest prime number p = 2.
The first step of this algorithm gives 1 as the remainder (a_0 = 1) and −1 as the quotient (N = −1).
The algorithm loops, and the output is an infinite sequence of ones: (...111) = (...a_2 a_1 a_0). Consider another computational process, defined by the algorithm for adding numbers in the positional number system:

Computational algorithm
k := 0; s := 0;
while (the digits a_k, b_k and the carry s are not all exhausted)
d_k := (a_k + b_k + s) mod p; s := (a_k + b_k + s) div p; k := k + 1
end while

It is unusual that this algorithm, like the previous one, does not stop if one of the terms is given by an infinite sequence.
For example, if we add −1, written as the sequence (...111), to the number 4, written in binary, we get:

  ...1 1 1 1 1
+       1 0 0
  ...0 0 0 1 1

that is, the number 3 in binary notation (if we do not take into account the infinite number of zeros that precede the first unit from the left).
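These observations are easy to reproduce in Python, whose % and // operators implement floor division and therefore handle the "forbidden" negative inputs exactly as described (padic_digits is our illustrative name, not standard terminology):

def padic_digits(n, p, k):
    # the first k p-adic digits of n, least significant first
    digits = []
    for _ in range(k):
        digits.append(n % p)
        n = n // p
    return digits

print(padic_digits(-1, 2, 8))  # [1, 1, 1, 1, 1, 1, 1, 1], i.e., -1 = ...111
print(padic_digits(3, 2, 8))   # [1, 1, 0, 0, 0, 0, 0, 0], i.e., 3 = ...00011
print((-1 + 4) % 2**8)         # 3: addition of the truncated sequences agrees with ordinary arithmetic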
After these calculations, one could "come up with" another definition, for example, the one given in Wikipedia: "In number theory, given a prime number p, the p-adic numbers form an extension of the rational numbers which is distinct from the real numbers, though with some similar properties; p-adic numbers can be written in a form similar to (possibly infinite) decimals, but with digits based on a prime number p rather than ten, and extending (possibly infinitely) to the left rather than to the right. Formally, given a prime number p, a p-adic number can be defined as a series a_k p^k + a_{k+1} p^{k+1} + a_{k+2} p^{k+2} + ..., where k is an integer (possibly negative), and each a_i is an integer such that 0 ≤ a_i < p. A p-adic integer is a p-adic number such that k ≥ 0".
It should be noted that even if the last definition is given before the operation of the two algorithms above is shown, understanding the concept of a p-adic number still presents significant difficulties.
It is important to note that the idea of p-adic numbers is used at the "lower level" of computer calculations: the two's complement code of integers is nothing but a 2-adic representation truncated to a fixed number of digits.
It can be concluded that some mathematical concepts are comprehended through computational algorithms.
Diophantine Equations, Continued Fractions, and Euclidean Algorithm for Finding GCD
In the course of algebra and/or in the course of discrete mathematics at technical universities, such concepts as the greatest common divisor, Bézout's identity, continued fractions, convergents, linear Diophantine equations, and the modular reciprocal are studied. Usually, the introduction of these concepts is accompanied by verbal definitions, from which the connection of these concepts with certain algorithms is not visible. At the same time, all of the above topics are united by the Euclidean algorithm. However, when presenting the material, this fact fades into the background, while the general computational scheme can be used as a tool for forming a general idea that connects these concepts. Moreover, it can be used to derive an algorithm for constructing convergents from the algorithm for the linear representation of the GCD (the extended Euclidean algorithm).
We have created a special environment, which is based on the table representing a computational process that combines the calculation of quotients, remainders, and linear representations of remainders through the original pair of numbers.
The special environment implements the following algorithm:

(x_a; y_a) := (1; 0); (x_b; y_b) := (0; 1);
while (b ≠ 0)
q := a div b; r := a mod b;
(x_r; y_r) := (x_a; y_a) − q·(x_b; y_b);
a := b; b := r;
(x_a; y_a) := (x_b; y_b); (x_b; y_b) := (x_r; y_r)
end while

In this algorithm, a and b are the original numbers, and q and r are the quotient and the remainder when a is divided by b. After each iteration of the loop, the variables a and b are assigned the values b and r, respectively.
The vectors (x_a; y_a), (x_b; y_b), and (x_r; y_r) are vector representations of the numbers a, b, and r. In the algorithm, the same actions are performed with them as with the numbers a, b, and r.
Thus, the presented algorithm combines the regular and extended Euclidean algorithms.
If in this algorithm we replace subtraction with addition and change the initialization of the vectors in the first line of the algorithm, we get an algorithm for constructing the continued fraction of a rational number a/b and the convergents of this continued fraction. It is easy to prove that both algorithms will generate numbers of the same absolute value. For coprime numbers a and b, the last pair of numbers in the extended Euclidean algorithm will be (b; −a); that is, in absolute value, it will give the original pair of numbers in reverse order. This consideration makes it possible to explain the transfer of the extended Euclidean algorithm to the algorithm for constructing convergents.
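A Python sketch of the two algorithms makes this claim easy to check experimentally (the function names are ours; the environment itself presents the same computations as tables to be filled in):

def extended_euclid(a, b):
    # each remainder r is carried together with a vector (x, y)
    # such that x*a0 + y*b0 = r; the update uses subtraction
    va, vb = (1, 0), (0, 1)
    trace = []
    while b != 0:
        q, r = a // b, a % b
        vr = (va[0] - q * vb[0], va[1] - q * vb[1])
        trace.append(vr)
        a, b, va, vb = b, r, vb, vr
    return trace

def convergents(a, b):
    # the same scheme with subtraction replaced by addition and with
    # different initial vectors: it yields the convergents of a/b
    hp, h, kp, k = 0, 1, 1, 0
    trace = []
    while b != 0:
        q, r = a // b, a % b
        hp, h = h, q * h + hp
        kp, k = k, q * k + kp
        trace.append((h, k))
        a, b = b, r
    return trace

print(extended_euclid(7, 5))  # [(1, -1), (-2, 3), (5, -7)]
print(convergents(7, 5))      # [(1, 1), (3, 2), (7, 5)]: the same numbers in absolute value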
Thus, understanding the work of similar computational algorithms leads to the realization of the more general ideas underlying them, to the connection of different representations of these general concepts, and to the transfer from the algorithm to the proof of theorems. The latter suggests that computations can become the basis for both the psychological and the logical encapsulation of a new concept.
The other side of the analyzed example is the methodological aspect associated with the creation of this environment based on the existing logical connection between the various topics of the mathematics course.
As can be seen from Figures 1 and 2, the same simple computational base can combine several tasks that are different in subject matter, but close in meaning and in the algorithms used. A feature of this module is that the algorithms underlying it are known to the students, and they can not only solve problems, but also study solutions of similar tasks by choosing the demonstration mode. In this mode, the program generates random numbers a and b, checking that the calculation table is neither too large nor too small. In the testing mode, the program checks each move (the filling of one cell) and highlights the result in green or red, depending on whether the correct number has been entered in the cell or not. Finally, in exam mode, the student completes the entire spreadsheet, and it is sent to the server for review. It should be noted that the possibility of opening two programs, one of which works in demo mode and solves the example required for answering the exam, is blocked by the fact that entering numbers into this program is not allowed; tasks are generated automatically.
t := 0;
while (t < T)
X := X + h·V(t); t := t + h
end while
It can be seen that, up to the notation of the variables, both computational schemes are the same. The second algorithm does not specify the initial position of the point. If we add X := 0 at the beginning, then the algorithms will match completely. The ability to change the initial value of X indicates that the point can start moving from different initial positions and move along different trajectories with the same velocity-change function. In terms of antiderivatives, X(t) is called the antiderivative of V(t), and the described result says that the antiderivative of a function V(t) is determined up to a constant value. Knowing that the speed V(t) is the derivative of the coordinate X(t) with respect to time t, we arrive at the idea of the connection between the concept of the antiderivative and the concept of the derivative.
Thus, a comparison of computational schemes for different problems makes it possible to reveal the commonality between various mathematical concepts. The presented case study shows the relationship between the representations "the area of the region under a curve" and "the coordinate of a point moving with a given velocity".
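A short Python sketch showing the two readings of the same scheme (the function euler and the test velocity are our illustrative choices):

def euler(f, T, h, start=0.0):
    # the common computational scheme: accumulate h*f(t) while t runs to T
    x, t = start, 0.0
    while t < T:
        x = x + h * f(t)
        t = t + h
    return x

v = lambda t: 2 * t
print(euler(v, 1.0, 0.001))           # about 1: the area under the graph of v on [0, 1]
print(euler(v, 1.0, 0.001, start=5))  # about 6: the same motion started from X = 5

Changing only the value of start shifts the result by a constant, which is exactly the statement that the antiderivative is determined up to a constant.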
Combinatorial Identities and Generating Functions
Trigonometric identities are well represented in the school curriculum, and combinatorial identities much less so. The former are well developed in the school methodology, while the latter receive much less attention. The reason, in our opinion, is the greater conceptual depth of the latter and the impossibility, at the school level, of building their study on the basis of operational culture.
Consider two simple combinatorial identities:

C(n, 0) + C(n, 1) + ... + C(n, n) = 2^n,
1·C(n, 1) + 2·C(n, 2) + ... + n·C(n, n) = n·2^(n−1),

where C(n, k) is the number of k-element subsets of a set of n elements. Each of them can be comprehended in two interpretations: combinatorial and algorithmic (Table 1). For the first identity, one of the algorithmic interpretations in Table 1 reads: construct the next binary set of n elements and match it with the subset of elements that correspond to the units of the binary set, until no more binary sets are left. In fact, different algorithms use different data structures. While the first algorithm constructs the subsets directly, the second encodes the subsets as sets of zeros and ones. Thus, different interpretations can be associated with different data structures.
Indeed, when teaching courses on discrete mathematics, difficulties arise in explaining the complexity of algorithms if they use different data structures.
The algebraic interpretation partially collapses the calculation process: in the algebraic version, due to the implementation of algorithms for working with polynomials, the process of counting subsets with the same number of elements is encapsulated. Instead of solving one problem, we get a solution to many problems, simultaneously counting the number of all subsets with the same number of elements.
More surprising is that, staying within the framework of the algebraic interpretation, we can easily obtain the second identity by differentiating the Newton binomial and substituting x = 1: differentiating (1 + x)^n = C(n, 0) + C(n, 1)x + ... + C(n, n)x^n gives n(1 + x)^(n−1) = C(n, 1) + 2·C(n, 2)x + ... + n·C(n, n)x^(n−1), which at x = 1 becomes n·2^(n−1) = 1·C(n, 1) + 2·C(n, 2) + ... + n·C(n, n). "Moving backward", we can compare the computational algorithms generating combinatorial objects for the left and right sides, but for this we need to find combinatorial objects that admit these computational schemes. If the objects are known (subsets with a distinguished element), then the interpretation of the calculations will not be difficult. The question remains: how was the desired combinatorial object guessed? This is the creative part of the computational problem.
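Both identities can also be checked by a brute-force enumeration that mirrors the algorithmic interpretation (a Python sketch with an arbitrarily chosen n):

from itertools import product
from math import comb

n = 6
# each binary set of n elements encodes one subset of an n-element set
subsets = list(product((0, 1), repeat=n))

# first identity: grouping the subsets by size accounts for all 2**n of them
assert len(subsets) == sum(comb(n, k) for k in range(n + 1)) == 2**n

# second identity: counting the pairs (subset, distinguished element of it)
assert sum(sum(s) for s in subsets) == sum(k * comb(n, k) for k in range(n + 1)) == n * 2**(n - 1)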
Interestingly, in combinatorial problems the calculation formulas themselves often give an idea of which combinatorial problem was solved. For example, n! is associated with the calculation of permutations, and the sign of multiplication with a combination of independent features, which is how expressions of the form a^n can be interpreted. Division is associated with the idea of factorization, and addition with the partition of the set of combinations into subsets of objects for which the number of combinations can be counted using known formulas.
The program Wise Tasks Combinatorics [8] is built on the idea of a connection between algebraic and computational interpretations.
In this system, a problem is described by a program that generates all combinations obtained from simple sets through their Cartesian products, unions, and other binary operations on sets that are used in describing combinatorial problems. The program goes through all such combinations and counts their number. The interface provides students with the ability to enter arbitrary arithmetic expressions, supplemented by such combinatorial functions as the factorial and the number of k-element subsets of a set of n elements.
All calculations with expressions entered by a student are performed on the set of rational numbers, for which the number of digits of the numerator and denominator is not limited (long numbers).
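The checking principle can be conveyed by a toy Python sketch (the task and the student expressions here are invented for illustration; the real system parses the student's textual input and its own task descriptions):

from fractions import Fraction
from itertools import permutations
from math import factorial

# reference answer: brute-force generation of all combinations allowed by
# the task ("three-letter words over five distinct letters, no repetitions")
reference = sum(1 for _ in permutations('abcde', 3))

# two different student expressions, evaluated in exact rational arithmetic
answer_1 = Fraction(factorial(5), factorial(2))
answer_2 = Fraction(5 * 4 * 3)

print(reference, answer_1 == reference, answer_2 == reference)  # 60 True True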
The result is compared with the result calculated by the program and reported to the student (Figure 4).
Thus, the student's answer is compared not with the teacher's answer, but with the answer that is generated automatically according to the condition of the problem. If the compiler of the problem makes an error in the condition, another problem is actually formed, and the system will check the solution of this particular problem. Tasks can be posed by a student looking for answers, and the system will check the correctness of the answer of any task allowed by the system. For example, Figure 4 shows three different expressions that define the same answer.
Therefore, a data structure that allows a convenient description of the task makes it possible both to set and to check tasks.
The only difficulty for the compiler of tasks is the need to use a special xml-language for describing the conditions of tasks. The authors of the system [8] found the following solution: instead of using a common editor for composing tasks, thematic editors were created that allow one to set tasks by changing the parameters of the task conditions. For example, there exist an editor of tasks on maps, an editor of tasks on numbers, an editor of tasks on words, an editor of tasks for coloring polygons and polyhedra, an editor of tasks on a chessboard, etc.
Discussion
In the considered case studies, the role of calculations varies: from filling in tables, which are protocols of the execution of algorithms, to a comparative analysis of the algorithms themselves. Important for this work is the question of the transition from calculations by hand to computer ones. When making calculations by hand, the student performs two roles: the organizer and the executor of the calculations. Working with a computer, the student retains only the role of the organizer, outsourcing the execution of calculations to the computer. In such a setting, the following risks can be distinguished: (1) the performance of elementary computational actions by a student can be considered as training of elementary mental mechanisms, the failure of which may have delayed consequences that can only be assessed in a longitudinal study with the participation of psychologists; (2) when outsourcing computations to a computer, a person must be sure of the correctness of their implementation.
In our theoretical analysis, only the second question can be answered. This answer is presented in the methodically developed case studies related to the introduction of new concepts. In some case studies, computational schemes were used in which students partially performed calculations "by hand", while in others, the implementation of computational schemes was carried out entirely on a computer. Let us analyze the role of computations in the different case studies.
In the case study "Positional numeral systems and information compression algorithms", a connection was built between explaining the ideas of the decimal number system to younger students based on actions with sets of objects and the algorithm for moving to a new base.It is shown how the comparison of calculations of string lengths in unary and decimal systems can serve as a basis for introducing the concept of information compression.Thus, in this case study, calculations were used so that schoolchildren independently obtained experimental data for comparison and felt the difference in the growth of records of the same number in two different coding systems.
The case study of "p-adic numbers and the algorithm "division with remainder"" showed that a simple division algorithm with a remainder can become the basis of theoretical generalizations and allow one to come to an understanding of the complex concept of a p-adic number in ways accessible to a schoolchild.The encapsulation of this algorithm allows schoolchildren to comprehend the idea of a reverse code, which is used in computer processors, without additional effort.This case study shows that in some situations the computational algorithm reveals the concept better than its formal definition.This case study uses a psychological phenomenon that is well known to mathematics teachers: students who find it difficult to give a definition, but who have the right ideas about a mathematical concept, instead of giving a formal definition, offer to show how a particular concept works in an algorithm and provide calculations illustrating this concept.
The case study "Diophantine equations, continued fractions and Euclidean algorithm for finding GCD" presents a computational scheme implemented in the form of a computer module, but requiring calculations by hand to solve various problems.It is shown how one computational scheme can serve as a basis for a general look at such different concepts as a continued fraction, a Diophantine equation, and a reciprocal number in modular arithmetic.The computational scheme has become here a means of "enlarging didactic units" [9], allowing one to see what is common in different mathematical concepts and make the computational scheme the basis for theoretical reasoning.This case study shows that computational schemes can become a means of interiorization (and subsequent encapsulation) of concepts: filling in computational protocols and comparing them with each other leads to the generalizations that the teacher plans.
In the case study "Exponential function and Euler's computational scheme", by discretizing the algorithm for solving a simple differential equation, a connection was made between the definition of an exponential function and a geometric progression, and thus with the definition of the y = ax function, which is based on a generalization of the idea of repeated multiplication of a number by itself.This case study shows that the consideration of computational schemes for simple discrete models makes it possible to connect the main ideas of calculus with the ideas of algebra and number theory.
In the case study "The concept of the integral and the approximate calculation of the derivative", as in the case study "Diophantine equations, continued fractions and the Euclidean algorithm", one computational scheme describes the solution of different problems and thus it becomes a mechanism for generalizing and forming the concept of integral and antiderivative.Unlike the case study mentioned above, calculations by hand are not assumed here, but the solution is built in a dynamic mathematics system (spreadsheets can also be used) using any software environment that allows one to visualize the movement of a point (Scratch, Python, or JavaScript).This case study shows the importance of connecting the ideas of computer science and mathematics in a student's rich computer environment.In computer-free learning, it is actually assumed that the definition of the integral as the limit of finite sums is already encapsulated in the student's intellectual mechanisms.In fact, it turns out that few students can use this definition in problem solving.In accordance with the works of Vygotsky [10], Leontiev [1], Papert [11], and Dubinsky [3], in order for these actions to be encapsulated into concepts, they must be brought outside, and then the student's actions with them in the external environment will lead to their internalization into internal processes, which are then encapsulated into concepts.Dubinsky calls this approach de-encapsulation of mathematical concepts [3].In these terms, we can say that most of the case studies discussed in this paper demonstrate the de-encapsulation of various mathematical concepts.
In the case study "Combinatorial identities and generating functions", it is shown that computational algorithms for enumerating combinatorial objects can become the basis for the formation of combinatorial thinking.On the other hand, the ability to accurately describe a set of combinatorial objects provides the basis for creating a new type of tasks (Wise Tasks) [9,12], which have the property of checking the correctness of the answer according to the description of the condition, and not according to the reference answer.Also, in this case, the features of using symbolic algebra systems such as Mathematica, Maple, etc. for the algebraic solution of combinatorial problems were discussed.Important here is the transition from one interpretation to another and back.In this case, concepts encapsulated in one computational scheme can be de-encapsulated in another, which is the basis for understanding [3].
The analysis does not present mechanisms for conceptualizing computation that go beyond already known theories. For example, in [13] it is shown that the search for a Turing machine with a fixed number of states and a binary alphabet that outputs the longest result to the tape and stops distinguishes between human and computer solutions (the latter obtained by a brute-force algorithm). A person, using his/her existing conceptual knowledge, obtains a result twice as bad as that of the brute-force algorithm. At the same time, a person finds it difficult to justify the solution proposed by the computer. This problem shows a further direction of research: the study of the mechanisms of constructing concepts within the framework of which it is possible to explain the solutions found by the computer.
Conclusions
What conclusions can be drawn from the conducted theoretical and methodological analysis?
1. Reading the work of I. S. Shapiro, written more than 40 years ago, shows that in modern conditions it is impossible to expect the action of the psychological mechanisms that are formed during the algebraic transformation of trigonometric expressions. The appearance in the environment of computing tools that duplicate calculations that the student traditionally performed by hand raises the problem of preserving, under the new conditions, the psychological effect that forms the student's intellectual mechanisms (convolution of algorithms, and encapsulation of algorithms into mathematical concepts), which was previously achieved by "manual" calculations. It is necessary to clarify the implementation of the ideas of the activity approach to learning when the performer of operations is not a student, but a computing device. Solving this problem requires serious longitudinal psychological research.
2. The students' knowledge of algorithms related to mathematical concepts can often be identified with the students' subjective feelings of understanding these concepts. Therefore, the implementation of algorithms according to transparent computational schemes contributes to overcoming formalism in the study of mathematics, as it forms in schoolchildren the feeling that they themselves can engage in mathematical activities.
3. The use of various environments that execute mathematical algorithms implies the possibility of using these algorithms in a "logically encapsulated" form. In order to achieve "psychological encapsulation", it is required to de-encapsulate the algorithm, that is, to deploy it in the form of a computational scheme that is available for a student to check.
4. Some simple computational schemes, such as Euler's scheme for solving differential equations, can serve as the basis for generalizations that students themselves can make, revealing the commonality of computational schemes for solving problems in different representations of mathematical concepts.
5. The presence of "mathematical solvers" gives more weight to the ability to correctly set problems. Connecting different computation schemes, such as a naive enumeration scheme with an efficient one, provides a framework to support research activities in which the student's intelligence interacts with "artificial intelligence" (AI) in solving a problem (AI in this situation is represented by powerful calculators based on a "brute-force" solution of the problem, in which the lack of mathematical theory and effective algorithms is compensated by a simple enumeration of options).
6. The introduction of mathematical concepts through computational processes requires the attention of methodologists to the data structures used in the computations. Different interpretations of the same problem can generate different computational schemes due to the different data structures to which the algorithms are applied. The importance of studying data structures in the study of mathematics and computer science has not yet received due attention, although the practice of introducing schoolchildren to the concept of the complexity of algorithms is becoming increasingly common.
Figure 1. Working with the Euclidean algorithm and the extended Euclidean algorithm in the special environment.
Figure 2. Working on the construction of the continued fraction and its convergents in the special environment.
Figure 4. Wise Tasks Combinatorics system. The system allows us to check tasks by their description, and it does not matter which formula represents the answer. | 2023-10-05T15:24:04.440Z | 2023-10-02T00:00:00.000 | {
"year": 2023,
"sha1": "a9ca56eb6aba6a4dcb77bebd6334768c80564598",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-3197/11/10/194/pdf?version=1696249337",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4f81d387354fb136834c973c572108ebd5dc8a5f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
118295185 | pes2o/s2orc | v3-fos-license | Hamiltonian operator for loop quantum gravity coupled to a scalar field
We present the construction of a physical Hamiltonian operator in the deparametrized model of loop quantum gravity coupled to a free scalar field. This construction is based on the use of the recently introduced curvature operator and on the idea of so-called "special loops". We discuss in detail the regularization procedure and the assignment of the loops, along with the properties of the resulting operator. We compute the action of the squared Hamiltonian operator on spin network states, and close with some comments and outlooks.
I. INTRODUCTION
General relativity in Ashtekar-Barbero variables [1,2] can be cast as an SU(2) Yang-Mills theory and treated as a Hamiltonian system with constraints consisting of the Gauss (gauge) constraints, the spatial diffeomorphism constraints and the Hamiltonian constraints. Canonical loop quantum gravity [3][4][5][6], which is an attempt at quantization a la Dirac [7] of general relativity, has successfully completed the construction of a kinematical Hilbert space and the implementation of the Gauss constraints and the spatial diffeomorphism constraints [9] in the quantum theory, leading to a gauge and spatial diffeomorphism invariant Hilbert space H G Diff . The treatment of the last set of constraints is a more complicated task. The Hamiltonian has been regularized and promoted to an operator acting on H G Diff by Thiemann [11], improving earlier attempts [12]; however, even if the general structure of the solutions to the Hamiltonian constraints is known, it is very difficult to define the physical Hilbert space. The issues are both conceptual and technical.
Conceptual, because the Hamiltonian does not preserve H G Diff, and even if attempts to deal with the absence of a physical Hilbert space have been explored [13], this problem has led to new research directions: in particular the master constraint program [14], the algebraic quantum gravity program [15], the deparametrized models [16-19, 21, 22] in the canonical setting, the spinfoam program [23] in the covariant framework, and also some toy models [24][25][26][27] in which an alternative quantization strategy of the Dirac algebra is applied.
Concerning the technical difficulties, the Hamiltonian constraint is composed of two terms: the Euclidean part and the Lorentzian part. Both are non-polynomial in the canonical variables, especially the second term, which involves a double Poisson bracket of the Euclidean part with the volume and has a complicated form in terms of Ashtekar variables. A clever way to tame the non-polynomial character of the constraints is to use "Thiemann's trick", i.e. replacing the classical non-polynomial functions by Poisson brackets of polynomial functions with the volume and of the Euclidean part with the volume. Once promoted to an operator, the resulting expression comprises several commutators containing the volume operator [28][29][30]. While this procedure helps to bypass the non-polynomial character of the constraint, the resulting operator is not self-adjoint, and the explicit calculation of the Hamiltonian action is impossible because the volume operator present in the final expression has no explicit spectral decomposition. The partially formal result is already an extremely involved expression [32,33].
In this work we present another proposal for quantizing the Hamiltonian constraints. The first change is already in the classical formula for the scalar constraint: it is the sum of terms proportional to the Euclidean scalar constraint and, respectively, to the Ricci scalar of the three-metric tensor [8]. Our aim is to implement the dynamics in the quantum model of gravity coupled to a free scalar field [19]. The construction is conceptually based on the recently introduced "intermediate" Hilbert space H vtx [34] that is preserved by the obtained Hamiltonian operator, raising hope for a well-defined evolution operator with satisfactory properties, e.g. self-adjointness.
The developed regularization is based on a concrete implementation of a proposal that first appeared in [35] concerning the Euclidean constraint, and on the use of the curvature operator introduced in [36] to deal with the Lorentzian part. The paper is organized as follows. In section II we review the classical model of gravity minimally coupled to a scalar field; in section III we review the loop quantum gravity construction, present the regularization of the Hamiltonian and discuss the quantum operator and its properties; we then close in section IV with some conclusions and outlooks to further developments of this program.
II. CLASSICAL THEORY
Considering gravity minimally coupled to a scalar field in the standard ADM formalism [37], the theory is set as a constrained system for the standard canonical variables q ab (x) and φ(x), respectively the metric and the scalar field on a 3d manifold Σ, with conjugate momenta p ab (x) and π(x). The analysis shows that the vector constraints C a (x) and the scalar constraints C(x) in this model are expressed in terms of the vacuum gravity constraints, C gr a (x) and C gr (x), and the scalar field variables as follows:
\[
  C_a(x) = C^{\rm gr}_a(x) + \pi(x)\,\varphi_{,a}(x), \tag{II.1}
\]
\[
  C(x) = C^{\rm gr}(x) + \frac{1}{2}\left(\frac{\pi^2(x)}{\sqrt{q(x)}} + \sqrt{q(x)}\,q^{ab}(x)\,\varphi_{,a}(x)\,\varphi_{,b}(x)\right) + \sqrt{q(x)}\,V(\varphi(x)), \tag{II.2}
\]
where q is the determinant of the metric q ab .
With the Ashtekar-Barbero variables (A i a , E a i ) (i = 1, 2, 3) used in LQG, where G is Newton's constant and β is the Immirzi parameter, additional constraints (the Gauss constraints generating Yang-Mills gauge transformations) are induced: The field A i a is identified with an su(2)-valued differential 1-form, while the field E a i is identified with an su(2)* vector density, where τ 1 , τ 2 , τ 3 ∈ su(2) is a basis of su(2) such that A point of the phase space (A i a , E a i , φ, π) is a solution only if it satisfies all the constraints: In terms of the Ashtekar-Barbero variables, the gravitational part of the scalar constraint reads where R is the Ricci scalar of the metric tensor q ab on Σ, related to the Ashtekar frame variable by The first term of C gr is usually related to the Euclidean scalar constraint. To construct a quantum theory, mainly two strategies can be adopted. The first is to promote the whole set of constraints to operators defined in an appropriate Hilbert space and look for the states annihilated by the constraint operators to build a physical Hilbert space. The second, which we consider in this work, is to deparametrize the theory classically and then quantize. The deparametrization procedure starts with assuming that the constraints (II.9) are satisfied; hence we can solve the vector constraints for the gradient of the scalar field, (II.13) and then use this condition in (II.2) to solve it for π: (II.14) In the case of vanishing potential, which is our assumption in the rest of this article, equation (II.14) represents the deparametrization of the system with respect to the scalar field, which can be seen as an emergent time. Note that in this case, on the constraint surface, it is necessary to have The sign ambiguity in (II.14) amounts to treating different regions of the phase space, namely for + and − respectively We choose the phase space region corresponding to + and . It contains spatially homogeneous spacetimes useful in cosmology. Then, the scalar constraints can be rewritten in an equivalent form as We will also restrict ourselves to the case of although technically there is no problem in admitting both signs in the quantum theory.
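Several of the displayed formulas in this passage were lost in extraction. For orientation only, the following LaTeX block collects the standard expressions that the surrounding text refers to (the Euclidean scalar constraint and the deparametrized Hamiltonian density), in the conventions common to the deparametrized-model literature; the signs and numerical factors here are assumptions and may differ from the original.

```latex
% Hedged reconstruction -- standard forms, possibly differing from the
% original paper in factors and sign conventions.
\[
  C^{E}(x) \;=\; \frac{\epsilon_{ijk}\,F^{i}_{ab}(x)\,E^{a}_{j}(x)\,E^{b}_{k}(x)}
                      {\sqrt{|\det E(x)|}},
\]
\[
  \pi \;=\; \pm\sqrt{\sqrt{q}\left(-C^{\rm gr}
        + \sqrt{(C^{\rm gr})^{2} - q^{ab}\,C^{\rm gr}_{a}\,C^{\rm gr}_{b}}\right)},
  \qquad
  h(x) \;=\; \sqrt{-2\sqrt{q(x)}\,C^{\rm gr}(x)}
  \quad \text{(on the region where } C^{\rm gr}_{a}=0,\ C^{\rm gr}\le 0\text{)}.
\]
```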
The constraints C commute strongly, implying [18] {h(x), h(y)} = 0. (II.22) In this case a Dirac observable O on the phase space would satisfy The vanishing of the first and second Poisson brackets induces gauge invariance and spatial diffeomorphism invariance, respectively. The vanishing of the third Poisson bracket is equivalent to writing
A. The general structure
The quantization of gravity coupled to a massless scalar field was performed in [19,20]. While the derivation was partially formal - the existence of the operators Ĉ gr a is assumed at some stage - the result is expressed in a derivable way by elements of the framework of LQG: • The physical Hilbert space H is the space of the quantum states of matter-free gravity which satisfy the quantum vector constraint and the quantum Gauss constraint.
• The dynamics is defined by a Schrödinger-like equation where t is a parameter of the transformations ϕ → ϕ + t.
• The quantum Hamiltonian Ĥ is a quantum operator corresponding to the classical observable This operator could be defined by using the already known operators q(x) and C gr (x), as outlined in [20]. However, the observable √q C gr written in terms of the Ashtekar-Barbero variables reads The denominator |det E(x)| present in (II.10) disappears in (III.4). Moreover, the formula (III.85) below for |det E(x)| R(x) expressed in terms of the quantizable observables (holonomies and fluxes) also contains the same denominator, which again disappears after using the formula (III.4). That coincidence of reductions motivates us to quantize the expression (III.4) for h(x) directly.
B. Kinematical Hilbert space
The kinematical quantum states in LQG are cylindrical functions of the variable A, i.e., they depend on A only through finitely many parallel transports where e ranges over finite curves - we will also refer to them as edges - in Σ. The scalar product is
\[
  \langle \psi \,|\, \psi' \rangle \;=\; \int_{SU(2)^n} dg_1 \cdots dg_n\;
  \overline{\psi(g_1, \dots, g_n)}\,\psi'(g_1, \dots, g_n), \tag{III.8}
\]
where dg denotes the Haar measure on SU(2). We denote the space of all the cylindrical functions defined as above with a graph γ by Cyl γ and, respectively, the space of all cylindrical functions by Cyl. The kinematical Hilbert space H kin is the completion with respect to the Hilbert norm defined by (III.8).
Every cylindrical function f is also a quantum operator, acting by multiplication: (III.10) A typical example is defined by a path p in Σ, a half-integer j = 0, 1/2, 1, 3/2, ..., the corresponding representation and some orthonormal basis v 1 , ..., v 2j+1 ∈ H (j) , Note that a connection operator "Â" itself is not defined.
An operator Ĵ x[e]ξ , which is naturally defined in this framework, is assigned to a triple (x, ξ, [e]), where x ∈ Σ, ξ ∈ su(2) and [e] is a maximal family of curves beginning at x such that each two curves overlap on a connected initial segment containing x. To define the action of Ĵ x[e]ξ on a function Ψ ∈ Cyl, we represent this function on a graph such that e I ∈ [e]. The action is Ĵ ψ(h e 1 e ξ , h e 2 , ..., h e n ). (III.14) For ξ = τ i , it is convenient to introduce a simpler notation The field E a i (x) is naturally quantized as Given an edge e : [t 0 , t 1 ] → Σ, and a function f ∈ C(SU(2)), the variation is given by the following formula where by h e,t 1 ,t (A) (respectively, h e,t,t 0 (A)) we mean the parallel transport with respect to A along e from the point e(t) to e(t 1 ) (e(t 0 ) to e(t)), and by the partial derivatives with respect to group elements we mean The quantum flux is a well-defined operator where e runs through the classes of curves beginning at x, and κ S (e) = −1, 0, 1, (III.22) depending on whether e goes down, along, or, respectively, up the surface S. A generalized function ξ may also involve parallel transports depending on A. A typical example assigns to each point x a path p(x); h p(x) (A) is the parallel transport, and Ad is the adjoint action of SU(2) in the Lie algebra su(2): Ad(g)ζ = gζg −1 .
(III.25) In conclusion, the operators compatible with the LQG structure of H kin are (functions of the) parallel transports and fluxes.
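The displayed formulas for the operator Ĵ and for the flux did not survive extraction. As orientation only, the following LaTeX block sketches their standard LQG forms; the sign, factor and ordering conventions here are assumptions, and the original may fix them differently.

```latex
% Sketch of the standard definitions (conventions assumed, not quoted).
\[
  \big(\hat{J}_{x,[e],\xi}\,\psi\big)(h_{e_1},\dots,h_{e_n})
  \;=\; i\,\frac{d}{dt}\Big|_{t=0}\,
        \psi\big(h_{e_1}e^{t\xi},\,h_{e_2},\dots,h_{e_n}\big),
\]
\[
  \hat{P}_{S,\xi} \;=\; \frac{1}{2}\sum_{x\in S}\;\sum_{[e]\ \mathrm{at}\ x}
        \kappa_S(e)\,\hat{J}_{x,[e],\xi},
  \qquad \kappa_S(e)\in\{-1,0,1\}.
\]
```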
The quantum Gauss constraint operator readŝ are functions such that (2)). (III.28) We denote their algebra, subalgebra of Cyl by Cyl G , and the corresponding subspace of H kin by H G kin . A dense subspace of H G kin is spanned by the spin network functions. A spin network function is defined by a graph γ = (e 1 , ..., e n ), half integers (non zero) (j 1 , ..., j n ) assigned to the edges and intertwiners (ι 1 , ..., ι m ) assigned to the vertices (v 1 , ..., v m ): Each ι α is an invariant of the tensor product of the representations assigned to the edges e I whose source is v α and the representations dual to those assigned to the edges whose target is v α .
Given a graph γ, we denote by Cyl G γ the space spanned by all the spin network functions defined on this graph, and To define the orthogonal decomposition of the space of the Gauss constraint solutions we need to admit closed edges, that is edges for which the end point equals the beginning point, and closed edges without vertices (embeddings of a circle in Σ). In the case of an edge without vertices, we choose a beginning-end point arbitrarily in the definition of the spin network function. On the other hand we do not count those graphs that can be obtained from another graph by the splitting of an edge. Then the space of all the solutions to the Gauss constraint can be written as the orthogonal sum where γ ranges over all the un-oriented graphs defined in this paragraph.
C. The vertex Hilbert space
Every Given a graph γ consisting of edges and vertices the action of U f on a cylindrical function (III.6) reads where for the parallel transport along each edge f (e I ) we choose the orientation induced by the map f and the orientation of e I chosen in (III.6). Smooth diffeomorphisms map analytic graphs into smooth graphs, therefore their action is not defined in our Hilbert space H kin . Suppose, however, that given a graph γ, a smooth diffeomorphism f ∈ Diff ∞ (Σ) maps γ into an analytic graph. Then (III.32) and (III.33) define a unitary map The idea of the vertex Hilbert space of [34] is to construct from elements of the Hilbert space H G kin partial solutions to the vector constraints, by averaging the elements of each of the subspaces H G γ with respect to all the smooth diffeomorphisms Diff ∞ (Σ) Vert(γ) which act trivially in the set of the vertices Vert(γ). Denote by TDiff are in one to one correspondence with the elements of the quotient Since D γ is a non-compact set and we do not know any probability measure on it, we define the averaging in Cyl * , the algebraic dual to Cyl. Given Ψ ∈ H G γ , we turn it into Ψ| ∈ Cyl * , and average in Cyl * , The resulting η(Ψ) is a well defined linear functional for every embedded graph γ. We extend it by linearity to the algebraic orthogonal sum (III.31) The vertex Hilbert space H G vtx is defined as the completion under the norm induced by the natural scalar product It has an orthogonal decomposition that is reminiscent of (III.31): Let FS(Σ) be the set of finite subsets of Σ. Then of the graphs γ ∈ γ(V ) and S G γ is the subspace S G γ ⊂ H G γ of the elements invariant with respect to the symmetry group Sym γ . Importantly, is an isometry. The orthogonal complement of S G γ in H G γ , on the other hand, is annihilated by η.
The Hilbert space H G vtx carries a natural action of Diff ω (Σ), which we will also denote by U . It is defined by In this sense, they are partial solutions to the quantum vector constraint. They can be turned into full solutions of the quantum vector constraint by a similar averaging with respect to the remaining Diff(Σ)/Diff(Σ) Vert(γ) [34]. We denote the space of those solutions H G Diff .
D. The Hamiltonian operator
In the Hilbert space H G vtx we will introduce (derive) an operator In order to define the corresponding operator, we need to consider how to regularize and quantize an expression of the form where f is a smearing function defined on Σ while a(x) and b(x) are functionals of the fields A i a and E a i . Introducing a decomposition of the manifold Σ into cells ∆, the integral can be approximated as If the integrals entering this approximation can be quantized as well-defined operators, equation (III.49) then shows how to define the operator corresponding to ∫ Σ d 3 x f (x) √(a 2 (x) + b 2 (x)). Equation (III.49) is the basis of our construction of the operator (III.46).
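Equation (III.49) itself is missing from the extracted text. The following LaTeX block sketches, as an assumption about its form rather than a quotation, the cell-by-cell scheme the surrounding prose describes: approximate the smeared integrals over each cell and take the square root cell by cell.

```latex
% Assumed form of the cell-by-cell regularization described in the text.
\[
  \int_\Sigma d^3x\, f(x)\sqrt{a(x)^2 + b(x)^2}
  \;=\; \lim_{\epsilon\to 0}\;\sum_{\Delta\in\mathcal{C}^\epsilon}
  \sqrt{\Big(\int_\Delta d^3x\, f\,a\Big)^{2}
      + \Big(\int_\Delta d^3x\, f\,b\Big)^{2}}.
\]
```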
In our case, the operators corresponding to a(x) and b(x) themselves will be available, and will have the general form where the operatorsâ v andb v , when applied to a spin network state defined on a graph, have a non-zero action only if v is one of the vertices of the graph. In this case, the operator can be defined simply by insertingâ(x) andb(x) into the right-hand side of equation (III.49). In this way one obtains an operator, whose restriction to the space of spin network states defined on a given graph γ takes the form In other words, our regularization gives (III.53)
Euclidean part
We start with the quantization of the Euclidean part of our Hamiltonian (see (II.12)). In equation (III.49), the role of a(x) is now played by the function Consequently, we consider the quantization of the integral (where an arbitrary smearing function f has been introduced). According to the general framework of LQG, we need to express the integral in terms of parallel transports h e and fluxes P S,i . The easiest example is to consider the Riemann sum for this integral obtained by considering a cubic partition P of Σ into cells of coordinate volume and to distribute the suitably For each cube denote by x the center, by S a , a = 1, 2, 3, three sides x a =const (for each a there are two, choose any one and orient such that the following is true). Moreover, for every x ∈ , denote by p (x) the line from x to x ∈ . Then, we have where by P S a ,i we mean P S,ξ of (III.20) with h p (x) standing for the parallel transport (with respect to a given field A) along p , and for a with W l = i l(l + 1)(2l + 1). In this way we write the original expression in terms of fluxes and parallel transports (as a limit), in the sense that in the limit → 0 when we refine the partition ( → ·).
More generally, we regularize the integral by using a partition P which consists of: • an -dependent cellular decomposition C of Σ; • assigned to each cell ∆ ∈ C : such that the following functional approaches the Euclidean Hamiltonian, As in the cubic example, by P S I ∆ ,i we mean P S,ξ of (III.20) with 2 Equation (III.60) is obtained using the relations which depends on the ordering of the operators, symbolized by . . . . . ..
In this way we obtain an operator which depends on the partition P and is well defined in H kin . However, as we refine the partition P , the operator family does not converge to any operator in H kin . This is a well known problem in LQG and it does not have a solution in the kinematical Hilbert space H kin . A way out is to consider the dual action of the regulated operators d 3 xf (x)Ĥ EuclP (x) in the Hilbert space H G vtx . That was done for the (formally regularized) operatorĈ gr in [34]. As it is explained therein, and those arguments apply also in the case at hand, a limit as → 0 exists upon several conditions about the partitions P . To begin with, we adjust the partitions individually to each subspace H γ in the decomposition (III.31). Secondly, a successful partition has to have a suitable diffeomorphism covariance in the dependence of the partitions on γ and on .
The outstanding problem though, is the dependence of the result on choices made. There are many partitions which satisfy the conditions. The resulting operator carries a memory of the choice of P , for example on the adjustment of the fluxes to graphs. To restrict that ambiguity, we study first the straightforward quantization of Certainly the product of the two Dirac delta distributions is ill defined at some points x, e I (t) and e I (t ). However, we can precisely indicate those points at which the expression is identically zero. To begin with, the product δ(x, e I (t))δ(x, e I (t )) vanishes except for the triples (x, e I (t), e I (t )) such that and it is ψ(g 1 , ..., g n ) g I =he I (A),g I =he I (A) (III.72) modulo the ill defined factor (δ(v, v)) 2 which has to be regularized. Our regularization is also expected to replace F k abė IėI by a parallel transport h e II along a loop e II assigned to the two (segments of) edges. Finally, diffeomorphism invariance implies that each vertex v and a pair of transversally intersecting edges e I and e I at v contribute the same operator as any other diffeomorphism equivalent triple v , e I and e I .
We are now in a position to formulate assumptions about the construction of the partitions P adapted to a graph γ, as shown in [34], and the assumptions about the assignment of the loop α K ∆ used to regularize the connection curvature: • if ∆ does not contain an edge of γ but contains a segment of an edge, then, by splitting the edge and reorienting its segments suitably, we turn that case into the case of ∆ containing a 2-valent vertex; • the value of the non-vanishing κ ∆IJIJ is an overall constant κ 1 (v) depending on the valence of the vertex but independent of ∆, I, J.
Concerning the prescription for the assignment of the loops α IJ -we call them special loopswhich are created by the Euclidean part of our Hamiltonian operator, we wish the construction to satisfy the following requirements: -The loop added by the Hamiltonian should be attached to the graph according to a diffeomorphism invariant prescription [3,34]. This property allows the operator to be well defined on the space H G vtx .
-It should be possible to distinguish between loops attached to the same vertex but associated to different pair of edges, and between loops attached to the same pair of edges by successive actions of the Hamiltonian. This property makes it possible to define the adjoint operator on a dense domain in H G vtx , and consequently to construct a symmetric Hamiltonian operator.
Consider a vertex v of the graph γ defined above and a set of links {e I } incident at v. In order to satisfy the first requirement, we use a construction that was introduced in [10] and was presented in a work of T. Thiemann [11]. The construction consists of two parts. Firstly, for each pair of links e I and e J incident at v, we define an adapted frame in a small enough neighborhood of v. Then we require that the loop α IJ , associated to the pair (e I , e J ), lies in the coordinate plane spanned by the edges e I and e J . The choice of the adapted frame is based on the following lemma: Let e and e′ be two distinct analytic curves intersecting only at their starting point v. Then there exist parameterizations of these curves, a number δ > 0, and an analytic diffeomorphism such that, in the corresponding frame, the curves are given by We will call the associated frame a frame adapted to e, e′.
To carry out the second part of the construction, we need a diffeomorphism invariant prescription of the topology of the routing of the loop α IJ . In other words, the plane in which the loop lies should be chosen in a way which is diffeomorphism invariant, and which does not cause the loop to intersect the graph γ at any point different from the vertex v. The choice that α IJ lies in a small enough neighborhood of v guarantees that the loop cannot intersect any edge of γ except the edges incident at the vertex v. Then the routing of the loop in that neighborhood is achieved through the prescription given in [11] (which we do not repeat here). Now let us turn to the second requirement, which is crucial in order to have the possibility of defining a densely defined adjoint operator that allows one to construct symmetric Hamiltonian operators, and eventually to provide self-adjoint extensions. To state the prescription that satisfies the second requirement, we need to define the order of tangentiality of an edge at the node. This is defined as follows. Considering the vertex v and the edge e I , we denote by k IJ ≥ 0 the order of tangentiality of e I with another edge e J incident at v. If the edges e I and e J are not tangent at v, we understand that k IJ = 0. The order of tangentiality k I of the edge e I at the vertex v is then defined as the highest order of tangentiality of the edge e I with the remaining edges incident at v. The element which completes the prescription of the special loop according to the two requirements is now stated as follows:
Requirement 2.
The special loop α IJ is tangent to the two edges e I and e J at the vertex v up to orders k I + 1 and k J + 1 respectively, where k I (≥ 0) and k J (≥ 0) are respectively the orders of tangentiality of e I and e J at the node. This property indeed makes a loop attached by the Hamiltonian to a given pair of edges perfectly distinguishable from any other loop at the same node.
To summarize, the prescription for assigning a special loop to a pair of links incident at a vertex is to choose the loop to lie in the coordinate plane defined by the frame adapted to the pair of edges, then to follow a specific and well defined routing of the loop described in [11], and finally to impose the tangentiality conditions introduced above. With this prescription, the loop assigned to a pair of edges is unique up to diffeomorphisms.
In consequence, given a graph γ and the auxiliary graph γ obtained by the splitting, the contribution from a cell ∆ containing a vertex v reads where (ė I ,ė J ) is 0 ifė I andė J are linearly dependent or 1 otherwise. This operator maps Considering all the graphs we combine the operators into a single -dependent operator In order for the operator (III.76) to be cylindrically consistent, we should have κ 1 (v) = κ 1 , an overall constant independent of the valence of the vertex v. 3 However, since our goal at the end is to implement this operator in the gauge invariant Hilbert space, we can equally well define the operator by proceeding with the regularization directly on the spaces Cyl G γ orthogonal to each other. In that case the question of cylindrical consistency does not arise, and we may allow the possibility that κ 1 (v) depends on the valence of the vertex.
In this way we have determined the action of an operator (III.55) up to a value of κ 1 (v) (constant or not), assuming the conditions (1) and (2). This operator passes naturally to Cyl G (III.77) As we refine the partition by ε → 0, the loops α IJ are shrunk to v. However, the ε-dependent operator defined by the duality * in H G vtx (on a domain that includes η(Cyl G )) is insensitive to the shrinking, as long as each loop α IJ is shrunk within the diffeomorphism class of γ ∪ α IJ . Hence we drop the label ε in the dual operator. It follows that the Euclidean part of the Hamiltonian is defined as In order to define the square root in this equation, one could choose a symmetric ordering of H E * v . However, a symmetric ordering of the Euclidean term is not necessary for constructing the complete Hamiltonian, for which instead the square root of the sum of Euclidean and Lorentzian terms needs to be defined.
Lorentzian part
Following the strategy of quantization indicated by equation (III.49), we now introduce a second operator corresponding to the integral of the term √ qR again smeared with an arbitrary function f The construction of the operator is in two parts: first we write an approximate expression of the classical integral by implementing a cellular decomposition C of the 3d manifold, characterized by a regulator . Secondly, the regularized expression is promoted to an operator, which after taking the regulator limit, leads to a background independent operator acting in the Hilbert space of gauge invariant states.
The aim is to construct an operator corresponding to the following function on the classical phase space: Consider a cellular decomposition C of the manifold Σ. The size of the cells is assumed to be controlled by the regulator , in such a way that the coordinate size ∆ of each cell ∆ ∈ C satisfies ∆ < . We can then write the integral (III.80) as a limit of a Riemannian sum over the cells ∆ , (III.81) where on the right-hand side x ∆ denotes any point inside ∆ .
Next we decompose each cell ∆ into c ∆ closed cells ∆, where a cell ∆ has a boundary formed by a number n ∆ of 2-surfaces (faces). In equation (III.81), we then approximate the integral of
|det[E]| by a Riemannian sum over the cells ∆, and the integral of |det[E]|R by a regularized Regge action for an appropriate ∆-decomposition of ∆ , obtaining
(III.82) The functionals q ∆ (E) [28] and R ∆ (E) are defined on the classical phase space as 4 where we use the following notation: -given ∆, the index I = 1, ..., n ∆ labels the surfaces (faces) S I ∆ forming the boundary ∂∆ of the cell ∆ and u labels the hinges on that boundary (the 1-skeleton of the cell); -the symbols S I ∆,u and S J ∆,u stand for the two surfaces in ∂∆ that intersect at u; -the symbol P S I ∆ ,i represents the flux of the field E a i across S I ∆ , defined in (III.20) with and κ 0 (∆) is a regularization constant depending on the shape of the cell ∆; -finally α u is a fixed integer parameter corresponding to the number of cells sharing the hinge u in the cellular decomposition C .
Considering the coordinate size ∆ < of the cell ∆, defined such that the limit → 0 is equivalent to ∆ → 0, the functional q ∆ (E) is such that 1 V 2 ∆ q ∆ (E) approximate the function |det[E]| at any point within the cell ∆, V ∆ ∝ 3 ∆ being the coordinate volume of ∆. Also, each term in the sum defining R ∆ (E) (III.85), rescaled by L u ∝ ∆ that is the coordinate length of the edge u on the boundary of ∆, approximate the function L u (E)Θ u (E) in the limit ∆ → 0, where L u (E) and Θ u (E) are respectively the length of the hinge u and the dihedral angle at u in ∆ expressed in terms of densitized triads. 4 The functional q∆(E) can be defined in a different way: This definition would lead to a volume operator that is sensitive to the differential structure at the nodes, see [29][30][31].
The sum over the cells ∆ of the functional R ∆ (E) corresponds to the regularized Regge action [38] in 3d on ∆ , which is by itself an approximation of the function ∆ d 3 x |det[E]|R. We direct the reader to [36] for more details about the concepts of this construction.
To continue the calculation from equation (III.82), we assume that the cells ∆ are chosen such that we obtain the same contributions q ∆ (E) and R ∆ (E) from each cell ∆, up to higher order corrections in ε ∆ (equivalently, up to higher order corrections in ε). Hence each sum over the cells ∆ becomes the number of cells c ∆ times the contribution of the cell ∆, chosen as the cell containing the point x ∆ at which the smearing function f is evaluated. In this way we obtain Let us now introduce the approximation which is an averaging of the values of the function f inside the cell ∆ , and which can be seen as a better approximation of the value of the function f inside the cell ∆ , in the sense that we are probing the function f at several points inside the cell instead of one point x ∆ . Inserting equation (III.90) in equation (III.89), we come to the result we are looking for: where the last step is achieved by combining the two sums over ∆ and ∆.
Notice that the expression of R ∆ (E) in (III.85) contains an overall factor of q ∆ (E) −1 .
This leads to a crucial simplification in the expression of q ∆ (E) R ∆ (E), namely, the factors of q ∆ (E) are canceled: In the quantum theory, this simplification implies that the volume operator will be absent from the Lorentzian part, and consequently from the whole Hamiltonian operator. The absence of the volume is an important technical advantage in the calculation of the action of the Hamiltonian.
Before promoting this expression to an operator, we study the term ijk P S I ∆,u ,j P S J ∆,u ,k appearing in equation (III.92). This term approximates the classical function which means that the edges e I (t) and e J (t ) are different (I = J) and transversal at their intersection point. In order to pass this property to the quantum operator, we introduce the coefficient κ ∆IJ , defined in the following, in the expression of q ∆ (E) R ∆ (E) and we write In order to promote the expression in (III.96) to a quantum operator, we first need to set some requirements on the decomposition C so that we adapt it to the functions in Cyl: given a Ψ in Cyl with a graph γ = (e 1 , ..., e n ) of Vert(γ) = (v 1 , ..., v m ), the requirements are as follows: • each cell ∆ contains at most one vertex of the graph γ; • each 2-cell (face) on the boundary of a cell ∆, containing a vertex of γ, is punctured exactly by one edge of the graph γ. The intersection is transversal and belongs to the interior of the edge; • if v ∈Vert(γ) and v ∈ ∆, then κ ∆IJ is not zero only for edges e I and e J of γ meeting transversally at v; • if ∆ does not contain an edge of γ but it contains a segment of an edge then, by splitting the edge and reorienting its segments suitably, we turn that case into the case of ∆ containing a 2-valent vertex, A result that follows from the derivation of the curvature operator in [36] is that the value of non vanishing κ ∆IJ is an overall constant κ 2 (v) depending on the valence of the vertex but independent of ∆, I, J. This property is obtained from an averaging procedure used in order to remove the dependence on C .
Having the quantum operators corresponding to the fluxes P S I ∆,u ,j , we are now able to define the quantum operator corresponding to q ∆ (E) R ∆ (E) as Considering a cylindrical function Ψ γ in the Hilbert space Cyl γ , thanks to the regularization detailed above we have
Now we can define an operator acting in Cyl
Notice that there is no ordering ambiguity in the operator H L v and therefore no ordering ambiguity in H L v * .
a. A symmetric Hamiltonian operator
The Hamiltonian operator represents the quantization of the classical Hamiltonian of the deparametrized theory of GR coupled to a free scalar field. This final operator is required to be self-adjoint on some non-trivial domain in order for it to generate unitary evolution of the quantum system and for its spectra to admit a physical interpretation. Therefore, a first step toward achieving self-adjointness 5 of the Hamiltonian is to construct a symmetric operator. A symmetric operatorĤ could be introduced as a combination of the operator H T x and its adjoint operator H T (III.105)
b. Gauge and diffeomorphism invariance
The operators ĥ α ee′ and Ĵ x,e,i are both gauge invariant. Hence the Hamiltonian operator Ĥ is gauge invariant. Considering the group of smooth diffeomorphisms, the operator Ĥ is also diffeomorphism invariant, thanks to the regularization adopted and the averaging procedures involved in defining the curvature operator [36]. As a consequence of its gauge and diffeomorphism invariance, Ĥ preserves the space H G vtx .
c. Action of the symmetric Hamiltonian operator
Looking into the action ofĤ on a spin network state |γ, {j}, {ι} , one would like to express the matrix elements of this operator in terms of the quantum numbers labeling the states, namely the spins j and the intertwiners ι. Notice that while the operator H E v e,e in the Euclidean part is a graph changing operator, hence not preserving the original intertwiner space, the operator H L v e,e in the Lorentzian part is diagonal on the basis adapted to the pair of edges {e, e }. Also, from equations (III.107) and (III.108), we can deduce that the domain of the Hamiltonian operatorĤ admits an orthogonal sum decomposition in terms of stable subspaces under repeated action ofĤ. This result generalizes to other symmetrizations than the one proposed in (III.105), and it may be of considerable importance in the elaboration of self-adjointness proofs and the calculation of the evolution of physical states in this model.
IV. SUMMARY AND OUTLOOKS
We considered a model of Einstein gravity coupled to a free scalar field, in which the dynamics of the gravitational field is described by deparametrization with respect to the scalar field. In the corresponding quantum theory, constructed using the techniques of loop quantum gravity, the quantum dynamics is given by the evolution of the physical (i.e. gauge and diffeomorphism invariant) states of the gravitational field with respect to the scalar field. This evolution is governed by a physical Hamiltonian operator, which we constructed in this paper. The implementation of the Lorentzian part of our Hamiltonian is based on regularization used to define the curvature operator introduced in [36]. As to the Euclidean part, we refined and made precise the idea, first considered in [17], of regularizing the curvature by means of loops attached to pairs of edges at a vertex of a spin network graph.
By carefully specifying the properties of the special loops created by the Euclidean Hamiltonian operator, we were able to define an operator which is diffeomorphism invariant, and whose adjoint operator is densely defined. The second property is crucial in that it allows one to symmetrize the operator and eventually to construct self-adjoint extensions. Our regularization of the Euclidean term can also be applied in vacuum loop quantum gravity to define a Hamiltonian constraint operator for which the adjoint operator is densely defined. This question will be treated in a future work. On a practical level, an important feature of our Hamiltonian is that the volume operator does not appear in it. This implies a considerable simplification of the calculation of the action of the Hamiltonian on spin network states.
The construction presented in this paper gives us a concrete and tractable Hamiltonian operator for loop quantum gravity coupled to a free scalar field. This makes it possible to test the dynamics of the theory, as the time evolution of spin networks under this Hamiltonian can be computed. In particular, a question of interest will be to study the evolution of semiclassical states describing e.g. cosmological spacetimes.
Intertwiners
The intertwiner between three representations j 1 , j 2 and j 3 is given by the Wigner 3j-symbol: The order of the spins in the symbol is indicated by a + or a − at the node. Thus, which is a graphical representation of the relation that switching two columns in the symbol (A.7) multiplies the symbol by (−1) j 1 +j 2 +j 3 . Another symmetry relation is When one of the spins is zero, the 3j-symbol reduces to the epsilon tensor: Any invariant tensor t m 1 ···m N , having indices in representations j 1 , . . . , j N , is an element of the space of intertwiners between the representations j 1 , . . . , j N , and as such, it can be expanded using a basis of the intertwiner space. Expressing a tensor with N indices as a block to which N lines are attached, one has the relations as well as the straightforward generalization of the last relation for tensors of higher order.
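The graphical identities referred to in this appendix were lost in extraction. In ordinary notation, the stated relations are standard Wigner 3j-symbol properties, which (as a reminder rather than a quotation of the original equations) read:

```latex
% Standard 3j-symbol identities corresponding to the text's statements.
\[
  \begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix}
  \;=\; (-1)^{\,j_1+j_2+j_3}
  \begin{pmatrix} j_2 & j_1 & j_3 \\ m_2 & m_1 & m_3 \end{pmatrix},
  \qquad
  \begin{pmatrix} j & j' & 0 \\ m & m' & 0 \end{pmatrix}
  \;=\; \frac{(-1)^{\,j-m}}{\sqrt{2j+1}}\,\delta_{jj'}\,\delta_{m,-m'}.
\]
```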
Group elements
The representation matrix for a group element is expressed graphically as In computing the action of the Hamiltonian, we need to know the action of a flux operator P S,i on a holonomy h e . In the cases where the intersection v between the surface S and the edge e is the beginning or ending point of the edge, this action is given by where W j = i j(j + 1)(2j + 1). Therefore we can write (A.23) graphically as | 2015-06-24T15:08:35.000Z | 2015-04-08T00:00:00.000 | {
"year": 2015,
"sha1": "b2dd633ae44e9b8856081503fcc2ae5441720c86",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.02068",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b2dd633ae44e9b8856081503fcc2ae5441720c86",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
80012039 | pes2o/s2orc | v3-fos-license | Trans-lingual biopsy of the tongue base under local anaesthetic – a new technique
Biopsy of the tongue base is an important component of investigating neck metastasis with an unknown primary.1 Despite it being a common procedure it is considered to be one of the most difficult areas of the upper aerodigestive tract to access.1–4 The difficulty is owing to a variety of factors. The commonest method of accessing the tongue base for biopsy is during a panendoscopy procedure under a general anaesthetic using a pharyngoscope. This is fraught with potential problems due to the need for a general anaesthetic and a high likelihood of false negatives.2 We present a novel technique of trans-lingual tongue base biopsy which can be safely performed under local anaesthetic.
Introduction
Biopsy of the tongue base is an important component of investigating neck metastasis with an unknown primary. 1 Despite it being a common procedure it is considered to be one of the most difficult areas of the upper aerodigestive tract to access. [1][2][3][4] The difficulty is owing to a variety of factors. The commonest method of accessing the tongue base for biopsy is during a panendoscopy procedure under a general anaesthetic using a pharyngoscope. This is fraught with potential problems due to the need for a general anaesthetic and a high likelihood of false negatives. 2 We present a novel technique of trans-lingual tongue base biopsy which can be safely performed under local anaesthetic.
Materials and methods
A 53-year-old gentleman re-presented to the otolaryngology department following radiotherapy for a tongue malignancy. He required full work-up for presumed recurrence and was therefore listed for a tongue base biopsy. He underwent this procedure and, due to the level of radionecrosis and slough in the tongue base, the biopsy was inconclusive. The patient was therefore referred to the senior author (JDC) for trans-lingual tongue base biopsy. To compound the problem, the patient had a complex medical history. He had severe chronic obstructive pulmonary disease (COPD) and had had a previous traumatic brain injury with damage to both cerebral hemispheres. For this reason he was deemed high risk for a general anaesthetic.
The first step of this technique is to ensure adequate infiltration of local anaesthetic. The anaesthetic agent used was Lignospan™ (lidocaine 2% and adrenaline 1:80,000) and was injected using a dental needle and syringe. The anaesthetic was injected into the underside of the tongue, either side of the lingual frenulum. One must inject deep into the muscle of the tongue, taking care not to damage the lingual nerve, lingual gland and submandibular ducts. Thorough knowledge of the anatomy of the tongue is vital. The next step is to make a small incision through the mucosa on the ventral surface of the tongue, again near to the midline. The index finger of the non-dominant hand is placed onto the tongue base, just over the lesion or area to be biopsied. The closed biopsy forceps are placed into the mucosal incision and forced through the muscle of the tongue until felt (through the tongue tissue) with the other hand. The mouth of the forceps is opened, pushed into the lesion, and then closed before the forceps are withdrawn. The position of the forceps and the operator's non-dominant hand is depicted in Figure 1. The incision is small and therefore does not require formal closure. Bleeding is usually minimal due to the surrounding bulk of the tongue musculature. It is of vital importance to maintain a calm operating theatre. Whilst the procedure is generally well tolerated, it is invasive and may be daunting for the patient. One must constantly reassure the patient until the procedure is completed.
Results
The patient described above had, through more conventional methods, previous attempts to gain a tissue diagnosis which failed. Using this novel technique the senior author obtained a substantial deep tongue base biopsy which was diagnostic for neoplastic recurrence. The patient went on to have further treatment to his recurrent disease.
Discussion
Prompt biopsy of the tongue base is vital for the multidisciplinary team to plan further treatment in patients similar to the one presented above. The procedure can be difficult due to the lack of anatomical access to the area, the fact that the malignancy is often submucosal and the potential for overlying slough from previous treatment. 3,5 In addition, the lesions may often only be palpated and not visualised. 3 The consequence of these complexities is that samples may be too small and superficial to be diagnostic. 2 The most conventional method of sampling the tongue base is trans-orally using a pharyngoscope. Blind biopsies may also be undertaken. The often inadequate biopsies have prompted surgeons to attempt to develop new techniques. The trans-lingual biopsy has been previously described by the senior author of this paper but has not been described as being performed under local anaesthetic. 3 Fine needle aspiration cytology has been trialled in the past with varying results. 6-10 Pfeiffer et al. has described the use of transmucosal core needle biopsy of the tongue base. 1 In a small number of patients it has shown promising results, gaining diagnostic tissue samples. A disadvantage of this technique may be the relative difficulty of aiming the biopsy needle at the suspect area. Other aids to transoral biopsy of the tongue base have been described, such as the use of a GlideScope™ and a video laryngoscope. 2,5 Whilst certainly interesting, they do not tackle the issue of the submucosal nature of the lesion as they only help to visualise the tongue base. The use of robotic surgery in otolaryngology is gaining popularity. Abuzeid et al. described its use in biopsy of the tongue base. This may have more of a role in the future; currently robotic surgery is limited to very few centres globally.
The advantage of this method is that it is a technically simple procedure that relies on the surgeon's ability to accurately advance an instrument towards the finger of their non-dominant hand and has again been shown to provide positive results where other methods have failed.
We perceive the risks to be similar to those of a conventional tongue base biopsy, including pain, infection, bleeding and the need for further surgery due to an incomplete biopsy. As mentioned above, care must be taken to ensure the safety of the lingual nerve, lingual gland and submandibular ducts. Our experience is that this method is safe and effective; however, further research would need to be conducted to accurately quantify the possible risks of the procedure.
Conclusion
We have presented how the trans-lingual tongue base biopsy may be performed under local anaesthetic. We believe that it is a simple procedure which may be considered in medically unfit patients who have failed conventional methods of tongue base biopsy. | 2019-03-17T13:11:53.189Z | 2018-03-20T00:00:00.000 | {
"year": 2018,
"sha1": "4a9863f88e2952fddb6f7ee44768103982a6fdf2",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/JOENTR/JOENTR-10-00317.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "87435b12584f5f215fa4e2cf0be9daa3d216a716",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258486657 | pes2o/s2orc | v3-fos-license | Facilitators of co-leadership for quality care
Olive Cocoman and colleagues argue that national leadership for quality of care requires working in a co-leadership model such that quality and programme units have equal standing and clearly defined individual roles and responsibilities
Leadership is key to delivering health system change, including to improve quality of care. 1 National health leadership is responsible for setting a country's vision and strategy for health and to bring stakeholders together to implement that strategy. 2 Ministries of health face multiple challenges to effective leadership, including political pressures, frequent personnel changes, budget limitations or cuts, and the personal efficacy of the leaders involved. Strategic approaches, implementation paths, and resource allocation also vary, giving rise to different strengths or stresses across health systems.
The World Health Organization describes national leadership as underpinning all efforts to improve quality of care. 2 Leadership is essential for developing policies and strategies on quality or care and deploying the necessary resources (human, technical, and financial) to implement those strategies. Despite the importance of national leadership, most attempts to improve quality of care in low and middle income countries over the past two decades have been externally driven delivered by partners at the "micro" or facility level.
In the absence of national leadership driving improvement in quality of health services, quality of care remains a "project" rather than being integrated into the health system as a core function. 3 Leadership is not always linear and may be difficult to assign or negotiate when multiple health programmes or stakeholders have a joint mandate to improve quality of care. Indeed, the strengths of national and subnational quality of care structures vary widely across countries. 4 Learning from the 11 countries in the Network for Improving Quality Care for Maternal, Newborn and Child Health (Quality of Care Network) suggests that the strength of national and subnational leadership structures depends on achieving successful co-leadership between government units designated to work on quality of care and programme units dedicated to specific health areas such as HIV or non-communicable diseases. Co-leadership occurs when two or more leaders at equal seniority share responsibilities for leadership on a specific effort. 5 Drawing on the experience of the Quality of Care Network we suggest how countries can make the model work.
Co-leadership can be challenging to implement
With growing evidence of the impact and cost of poor care, in 2018, WHO, the World Bank, and the Organisation for Economic Cooperation and Development called for all ministries of health to develop a national policy and strategy for quality of care at national, subnational, and facility levels. [6][7][8] Twenty-five countries across all continents have since implemented a national strategy, six of which are part of the Quality of Care Network (box 1). 13 One of the first steps in the process is to establish a quality unit in the ministry of health to develop and implement the national quality strategy. The quality unit decides on the content of national policies and strategies and how to monitor progress on quality of care at national, subnational, district, and facility level, as well as coordinating and aligning the multiple stakeholders, including professional bodies, insurance agencies, institutional boards, and subnational, district, and facility teams, to drive quality improvement. 15 16 Quality of care is also affected by the decisions of units delivering specific health programmes on resourcing, service delivery, and accountability. 17 Therefore, the quality unit leadership must work collaboratively with programme units, which also have responsibility for implementing the national quality strategy. Additionally, the quality unit ensures coherence of quality of care activities across programmes, aiming to break any existing fragmentation of activities, building a consolidated quality of care agenda across programmes and providing unified oversight to reduce vertical programming. 17 Multiple challenges exist when establishing co-leadership structures. One critical barrier to progress is how well the leaders responsible for quality of care work together with different budget lines, levels of resource, areas of expertise, and stakeholders and accountabilities. At national and subnational levels, it may be challenging to prevent confusion on roles or responsibilities or to negotiate competition for resources between the team working under the quality unit and the teams working under programmatic leads, even when there are shared goals around quality improvement and health outcomes. 4 15 In practice, one unit may overpower the other. 4 Another risk is that one or more units may be unwilling or unable to work collaboratively, resulting in fragmentation or verticalisation of efforts. 4 Communication breakdowns or other failures in co-leadership between quality and programme units mean that work on specific health areas may not advance and programmes may miss out on valuable guidance from quality leadership. 4
Organisational structure matters
Ensuring that quality and programme units have equal standing in the health ministry is important to overcoming barriers to effective co-leadership. Ethiopia provides a clear example of the way in which the organisational structure of a national ministry of health can limit the ability of quality units and programme units to achieve co-leadership. In 2016, Ethiopia's national quality strategy and its operational plan were developed by a new quality unit, the Health Service Quality Directorate. 18 Regional health bureaus were assigned to develop operational plans for each region, 18 and maternal, newborn, and child health (MNCH) was used as the first programme of joint work. Efforts to build capacity among health workers were jointly supervised by quality of care leads and MNCH programme leads, who established a robust facility and district learning system for improving quality of care. 19 As such, the implementation of the national quality strategy was co-led by a national healthcare quality steering group comprising the directorate and programme leads.

Key messages
• Country efforts to improve quality of care require joint leadership from national quality departments and specific health programmes
• Experience from a network of countries suggests establishing organisational structures so that quality and programme leaders have equal influence
• Clarifying roles and responsibilities is also essential to support effective co-leadership
• Conversely, weaknesses in co-leadership incur losses of time, investments, and support for quality
However, programme leads reported to the state minister of health, whereas the directorate reported to the Medical Services Unit, which in turn reported to the state minister of health. The difference in seniority between the directorate and programme leads generated administrative and communication barriers, 20 and the steering group was unable to coordinate between the quality and programme units. 20 The progress review of the national quality strategy in 2020-21 concluded that the efficient use of resources and support for quality were negatively affected by the unequal organisational structure. 21 22 A new organisational structure has been devised, and the Health Service Quality Directorate has been reconfigured into a larger unit that will report directly to the minister of health, thus creating a parallel hierarchy. 23 Similarly, in Malawi, the Ministry of Health aimed to address verticalisation and fragmentation from various quality of care initiatives by establishing a quality improvement unit in the Directorate of Policy and Planning in 2015. By the end of 2016, however, issues with hierarchy like those experienced in Ethiopia, made it necessary to elevate the small unit to a directorate, with equal standing to programme directorates. 24 Sierra Leone is at an earlier stage in developing a co-leadership model.
Quality of care policy and strategies have rapidly advanced since 2019, and quality standards for MNCH were used as the entry point. A national quality manager sits in the Department for Reproductive and Child Health, and while national quality MNCH roadmaps have been developed, no quality of care plans have been developed for other areas of healthcare. 25 Learning from other network countries, and Ethiopia in particular, suggests that the next step should be to establish a national quality unit at the same organisational level as other health programme units, which will create a national quality and programme co-leadership model relevant to all health programmes.
Clear roles and responsibilities must be established
In addition to organisation of ministries of health to put quality units on the same level as other health programme units, coleadership depends on clearly defining the roles and responsibilities of the different leads. Shared responsibility by the quality management unit at the Ministry of Health and the MNCH programme unit in Ghana has been key to progress. Ghana's health system is pluralistic: the Ministry of Health develops policies, mobilises resources, and evaluates programmes and projects. Ghana Health Service (GHS) is the public services implementing agency with organisational structures at national, regional, and district levels. The quality management unit developed a national healthcare quality strategy in 2017. 26 GHS developed guidelines for the implementation of the strategy, which prioritised maternal and child health. 27 Verticalisation and fragmentation were avoided by carefully defining and upholding the leadership roles and responsibilities at national, regional, and district levels, including the facilities implementing the strategy.
In 2018, for example, a national MNCH quality of care technical working group was established to provide overall leadership. The head of the quality management unit and the deputy director of the family health programme in GHS were appointed co-chairs to lead and jointly guide the use of resources to implement the quality strategy. 28 All roles and responsibilities were defined in the guidelines for implementation. Regional quality and safety management teams were established in all 16 regions, with each responsible for planning, supervising, monitoring, and evaluating the implementation of quality and patient safety programmes within their region in collaboration with programme quality teams. The regional teams ensure that agreed plans are completed successfully and supervise the establishment of programme quality teams in all districts and facilities. 27 The national technical working group allocates funding to all regional teams, aligning resources behind a common agenda. The clarity of structures and roles at national and regional level established clear entry points for implementation partners to support individual regions as required. Based on the efficacy of this model, donors supported financial resource gaps in eight regions. As a result, by 2022, all 16 regions were supported to scale up quality maternal and child healthcare. 29 Malawi's experience corroborates the importance of defining roles and responsibilities for effective co-leadership. In 2017, Malawi's Quality Management Directorate launched the quality management policy and strategy (2017-2022) and national MNCH quality of care roadmap (2017-2021). As in Ghana and Sierra Leone, MNCH was the entry point for improving quality of healthcare, and this is co-led by the Quality Management Directorate and Reproductive Health Department. 30 A steering committee was established and is co-led by the quality and programme leads. However, the responsibilities of the directorate and the Reproductive Health Department were not defined and agreed on, and this resulted in a lack of clarity regarding how or by whom collaboration would be built with district leadership and with other relevant health departments, such as the community health and nursing and clinical departments. The lack of clarity on responsibilities incurred delays, with periods of inaction on quality of care until 2019, when the organisational structure for the interface between national and district levels was completed and communicated to districts. 31 32 Learning from this experience has informed the subsequent phase of work, and roles and responsibilities have been carefully defined in the current national health strategic plan. 31 33

Box 1: The Quality of Care Network
The Network for Improving Quality Care for Maternal, Newborn and Child Health (Quality of Care Network) brings together the individuals responsible for leading maternal, newborn, and child health (MNCH) and quality improvement within the health ministries of 11 countries: Bangladesh, Cote D'Ivoire, Ethiopia, Ghana, India, Kenya, Malawi, Nigeria, Sierra Leone, Tanzania, and Uganda. 9 These countries have adopted and implemented WHO technical guidance on improving quality of maternal, newborn, and child care in health facilities. 10 11 As part of a roadmap for progress, two leadership outputs were agreed in 2017. By early 2023 all 11 had put in place national and sub-national governance structures and developed a costed national plan for improving quality of care. 12 The network is delivering learning from both successes and failures on how national institutions can improve quality of care through continual monitoring of implementation processes and impact and outcome data. 9 12-14 Learning is being documented in national and subnational learning forums so it can be distilled and shared within and across the network.
Learning the lessons
Our examples show how a lack of adequate organisational structures for quality, and poorly defined roles and responsibilities, result in a loss of time and of human and financial resources for quality of care, and a loss of support for quality. Establishing organisational structures that enable co-leadership between quality and programme leadership is increasingly recognised as critical for successful quality of care. As countries in the Quality of Care Network continue to learn from practice and experience, engage in frequent review of their processes and outcomes, and use this knowledge to inform improvements in practice, this implementation feedback cycle should guide leaders' decision making within and between countries.
Contributors and sources: This article arose from a meeting of ministries of health and partners from countries in the Network for Improving the Quality of Care for Maternal, Newborn and Child Health, held over three days in Accra, Ghana, in March 2023 to discuss progress and barriers to realising quality MNCH. Five authors on this paper are the programme or quality unit leads from ministries of health and have led or are currently leading the work described. The views expressed are those of the authors and do not necessarily reflect those of WHO.
Competing interests: We have read and understood BMJ policy on declaration of interests and have no interests to declare.
Provenance and peer review: Commissioned; externally peer reviewed.
This article is part of a collection proposed by the World Health Organization and the World Bank and commissioned by The BMJ. The BMJ peer reviewed, edited, and made the decision to publish these articles. Article handling fees are funded by the Bill and Melinda Gates Foundation. Jennifer Rasanathan, Juan Franco, and Emma Veitch edited this collection for The BMJ. Regina Kamoga was the patient editor. Olive Cocoman, learning lead 1 Martin Dohlsten, technical officer 2 Ernest Konadu Asiedu, analyst, health and pandemic 3 Desalegn Bekele Taye, adviser to state minister of health 4 Margaret Mannah, national quality manager 5 Bongani Chiikpulo, head of norms and standards 6 Nana Mensah Abramah, PhD candidate 7 Isabella Sagoe-Moses, former deputy director of family health 8 | 2023-05-05T13:04:26.710Z | 2023-05-05T00:00:00.000 | {
"year": 2023,
"sha1": "579ddbe8f06198caf74096d7652b2c8c9f548753",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Highwire",
"pdf_hash": "579ddbe8f06198caf74096d7652b2c8c9f548753",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44749393 | pes2o/s2orc | v3-fos-license | Effect of C18-polyunsaturated Fatty Acids on Their Direct Incorporation into the Rumen Bacterial Lipids and CLA Production In vitro *
An in vitro study was conducted to determine the effect of C18-polyunsaturated fatty acids on their direct incorporation into rumen bacteria, bio-hydrogenation and the production of CLA in vitro. Sixty milligrams of linoleic acid (C18:2) or linolenic acid (C18:3) were absorbed onto 0.5 g of cellulose powder, which was added to a 150 ml culture solution consisting of 120 ml McDougall's buffer and 30 ml strained rumen fluid. Four µCi of 1-14C18:2 or 1-14C18:3 (1 µCi/15 mg of each fatty acid) were also added to the corresponding fatty acids to estimate the direct incorporation into the bacterial lipids. The culture solution was then incubated anaerobically in a culture jar with a stirrer at 39°C for 12 h. Ammonia concentration and pH of the culture solution were only slightly influenced by the fatty acids. The amount of fatty acid incorporated into the bacteria during the 12 h incubation was 1.20 mg and 0.43 mg/30 ml rumen fluid for C18:2 and C18:3, respectively. Slightly more CLA (sum of cis-9, trans-11 and trans-10, cis-12 C18:2) was obtained from the C18:3 addition than from C18:2 after 12 h of incubation in vitro. (Asian-Aust. J. Anim. Sci. 2005. Vol 18, No. 4 : 512-515)
INTRODUCTION
Cellular lipids of rumen microorganisms are known to be generated by de novo synthesis and by the direct incorporation of preformed precursor molecules of dietary origin. Knight et al. (1979) reported that acetic acid (C2) was utilized for the synthesis of palmitic acid. However, since polyunsaturated fatty acids are not commonly synthesized by bacteria, rumen microbes are likely to incorporate exogenous preformed fatty acids (Harfoot and Hazlewood, 1988). But the rate and the amount of incorporation of those fatty acids into rumen bacteria have not been reported.
Meanwhile, conjugated linoleic acid (CLA) is one of the major intermediate products of the bio-hydrogenation of C18-polyunsaturated fatty acids by rumen bacteria (Harfoot and Hazlewood, 1988; Wang et al., 2003; Wang et al., 2005). CLA has mostly been derived from dietary linoleic acid (C18:2, Kelly et al., 1998). Bessa et al. (2000), however, revealed the possibility of an alternative pathway for the production of CLA from linolenic acid (C18:3), owing to the extreme microbial diversity in the reticulo-rumen. Wang et al. (2002a, b) also found the possibility of CLA production from C18:3.
The current in vitro study, therefore, was conducted to determine the effect of C18-polyunsaturated fatty acids on fermentation characteristics, direct incorporation into rumen bacteria, bio-hydrogenation and the production of CLA isomers (cis-9, trans-11 and trans-10, cis-12 C18:2).
Preparation of rumen fluid
Approximately 4 kg of rumen contents in total were collected at 2 h after morning feeding (0800) from two non-lactating, ruminally cannulated Holstein cows fed 6 kg of rice straw (50%) and concentrate (50%) on a dry matter (DM) basis, twice daily in equal portions. The rumen contents were brought to the laboratory and blended in a Waring blender (Fisher 14-509-1) for 20 seconds at high speed to detach the bacteria from the feed particles, then strained through 12 layers of cheesecloth to remove the feed particles and protozoa. CO2 was flushed into the strained rumen fluid.
Preparation and incubation of culture
Thirty ml of strained rumen fluid was mixed with 120 ml McDougall's artificial saliva (1948) under flushing of CO2. For the measurement of the direct incorporation of a single fatty acid (C18:2 or C18:3, Sigma Co.) into the bacterial lipids, 1.5 g (1% of the culture solution) of lipid-extracted ground corn (0.5 mm screen) and 60 mg of each fatty acid absorbed onto 0.5 g of cellulose powder were added to each 150 ml culture solution in a glass culture jar. Four µCi of 1-14C18:2 or 1-14C18:3 (1 µCi/15 mg of each fatty acid) were also added to the corresponding fatty acids. Gaseous CO2 was flushed into the culture solution for 1 minute. The culture jar was covered with a glass lid equipped with a stirrer and was placed into a water bath maintained at 39°C. The culture solution was again flushed with CO2 for 1 min through a glass tube connected to the jar for infusion purposes, and was incubated for up to 12 h. The stirring speed during incubation was adjusted to 120 times/min. The in vitro study was conducted three times, with three replicates per treatment each time, under similar conditions.
Sampling and analysis
The pH of the culture solution was measured at incubation times of 3, 6 and 12 h, and 5 ml of culture solution was collected for ammonia and volatile fatty acid (VFA) analysis. All collected samples were kept frozen at -20°C until analyzed. Ammonia concentration was determined by the method of Fawcett and Scott (1960) using a spectrophotometer (DU-650). Four ml of culture solution was mixed with 1 ml of 25% phosphoric acid and 0.5 ml of pivalic acid solution (2%, w/v) as an internal standard. The mixed solution was centrifuged at 15,000×g for 15 min, and the supernatant was used to determine the concentration and composition of VFA using a gas chromatograph (GC, HP 5890 II, Hewlett Packard Co.). Two hundred ml of culture solution was also collected at incubation times of 3, 6 and 12 h and freeze-dried, and lipids were extracted using Folch's solution (Folch et al., 1957). Methylation of the fatty acids followed the method of Lepage and Roy (1986) prior to injection into the GC. A fused silica capillary column (100 m×0.25 mm i.d.×0.20 µm film thickness, Supelco SP-2560; USA) was used.
For the determination of the fatty acid composition of bacterial lipids, 300 ml of strained rumen fluid was centrifuged at low speed (2,000×g, 4°C) for 10 min to remove protozoa and feed particles. The supernatant was collected and centrifuged again at high speed (22,000×g, 4°C) for 10 min to separate the bacteria. The separated bacterial pellet was then homogenized in Bryant's diluting solution (Bryant and Robinson, 1961) and centrifuged as for the separation of bacteria. The extraction of bacterial lipids, methylation and fatty acid analyses were the same as for the culture solution. The composition of C18-fatty acids and the total of other fatty acids in the mixed rumen bacteria is shown in Table 1.
For the measurement of the incorporation of the added fatty acid into the rumen bacteria, 200 ml of culture solution after 12 h incubation was centrifuged at low speed (2,000×g, 4°C) for 10 min to remove feed particles. The supernatant was collected and centrifuged again at high speed (22,000×g, 4°C) for 10 min to separate the bacteria. The separated bacterial pellet was then homogenized in Bryant's diluting solution (Bryant and Robinson, 1961) and centrifuged as for the separation of bacteria. Homogenization and washing were repeated twice more to remove the free radioisotopes completely, and the bacterial pellet was freeze-dried. The extraction of bacterial lipids was done as for the culture solution. Five ml of chloroform containing bacterial lipids was transferred to a 10 ml scintillation vial, and the chloroform was evaporated in a dry-bath (60°C) under N2 gas. Eight ml of multipurpose scintillation cocktail (Instagel XF, Packard Co.) was then added to the vial, and the specific radioactivities (cpm) of 1-14C18:2 and 1-14C18:3 in the bacteria were measured with a β-counter (Beckman LS 5801). Background cpm was also measured. The specific radioactivities of 0.1 µCi 1-14C18:2 and 0.1 µCi 1-14C18:3 were 569,681 cpm and 484,741 cpm, respectively, and the background was 36 cpm. The incorporated amount of each fatty acid into the bacteria was calculated by the following equation: amount of each fatty acid added to the culture solution × (specific radioactivity (cpm) of the bacterial lipid / total specific radioactivity (cpm) of the culture solution).
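For illustration, the tracer calculation above can be written out as a short script. The specific activities and background come from the text; the bacterial pellet reading in the example is hypothetical, chosen only to show that the formula reproduces a value of the order reported in Table 3.

```python
# Sketch of the tracer-based incorporation calculation described above.
CPM_PER_0P1_UCI_C18_2 = 569_681   # measured for 0.1 uCi of 1-14C18:2 (from text)
BACKGROUND_CPM = 36               # background reading (from text)

def incorporated_mg(added_mg, added_uci, bacterial_cpm, cpm_per_0p1_uci):
    """Return mg of fatty acid incorporated into the bacterial pellet."""
    total_cpm = (added_uci / 0.1) * cpm_per_0p1_uci     # total activity added to the culture
    net_cpm = max(bacterial_cpm - BACKGROUND_CPM, 0.0)  # background-corrected pellet count
    return added_mg * net_cpm / total_cpm

# 60 mg C18:2 with 4 uCi tracer; a hypothetical pellet reading of ~456,000 cpm
# corresponds to ~1.2 mg incorporated, of the order reported for C18:2.
print(incorporated_mg(60.0, 4.0, 456_036, CPM_PER_0P1_UCI_C18_2))
```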
Statistical analysis
The results obtained were subjected to least squares analysis of variance according to the general linear models procedure of SAS (1985), and significance was assessed by the S-N-K test (Steel and Torrie, 1980).
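As a rough open-source analogue of this procedure (not the SAS GLM/S-N-K routine itself), a one-way ANOVA across treatments could be run as below; the replicate values are invented purely to show the call pattern.

```python
# One-way ANOVA over hypothetical replicate incorporation values (mg) for the
# two fatty acid treatments; SciPy's f_oneway stands in for the SAS GLM F test.
from scipy import stats

c18_2 = [1.18, 1.22, 1.20]   # hypothetical replicates for C18:2
c18_3 = [0.41, 0.45, 0.43]   # hypothetical replicates for C18:3

f_stat, p_value = stats.f_oneway(c18_2, c18_3)
print(f"F = {f_stat:.2f}, p = {p_value:.5f}")
```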
RESULTS AND DISCUSSION
The pH of the culture solution decreased with incubation time but was not influenced by the fatty acid, except at 6 h of incubation, when it was slightly (p<0.102) higher with C18:3 than with C18:2 addition (Figure 1). No differences were observed in ammonia concentration between the fatty acids, although C18:2 slightly (p<0.081) decreased the concentration compared to C18:3 (Figure 2). Total VFA concentration increased with incubation time and was higher (p<0.025) for C18:3 than for C18:2 addition at 3 h of incubation, and was slightly increased at the other incubation times in the C18:3-added treatment (Table 2). No effect of the addition of a single C18-polyunsaturated fatty acid was found on the molar proportions of VFA, but the butyrate proportion was unexpectedly high with both C18:2 and C18:3 additions. The C2:C3 ratio tended to be higher for C18:3 than for C18:2. The incorporation of added C18:2 into the bacterial lipids during the 12 h incubation, at 1.20 mg/30 ml rumen fluid, was higher (p<0.0009) than that of C18:3 (0.43 mg) in the present study (Table 3). The percent incorporation of C18:2 was also increased compared to that of C18:3.
The composition of C18-fatty acids in the culture solution after 12 h of incubation is shown in Table 4. As expected, the proportions of C18:2 and C18:3 were high, at 54.5% and 44.25%, respectively, in the culture solution, but the C18:1 proportion was similar between the added fatty acids. The production of CLA, consisting of the cis-9, trans-11 and trans-10, cis-12 isomers, was slightly increased by the C18:3 addition compared to the C18:2 addition (Table 4).
In conclusion, the amounts of C18:2 and C18:3 incorporated into the rumen bacteria in the present in vitro study were found to be related to the corresponding composition of the rumen bacterial lipids. A strong possibility that rumen bacteria can produce CLA from C18:3 was also found under the current fermentation conditions.
Table 1. Composition of C18-fatty acids in the rumen bacterial lipids
Table 2. Addition effects of a single C18-polyunsaturated fatty acid on VFA production in the culture solution. Means in the same row with different superscripts differ (p<0.05). 1 Standard error of the means. 2 Probability level.
Table 3. Direct incorporation of a single C18-polyunsaturated fatty acid into the rumen bacteria in the culture solution after 12 h | 2017-09-06T07:56:53.803Z | 2005-04-20T00:00:00.000 | {
"year": 2005,
"sha1": "5ed9a366ea05e84b27fc76c713481c1ee0155694",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/18_80.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5ed9a366ea05e84b27fc76c713481c1ee0155694",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
55228960 | pes2o/s2orc | v3-fos-license | A new quark-hadron hybrid equation of state for astrophysics - I. High-mass twin compact stars
Aims: We present a new microscopic hadron-quark hybrid equation of state model for astrophysical applications, from which compact hybrid star configurations are constructed. These are composed of a quark core and a hadronic shell with a first-order phase transition at their interface. The resulting mass-radius relations are in accordance with the latest astrophysical constraints. Methods: The quark matter description is based on a quantum chromodynamics (QCD) motivated chiral approach with higher-order quark interactions in the Dirac scalar and vector coupling channels. For hadronic matter we select a relativistic mean-field equation of state with density-dependent couplings. Since the nucleons are treated in the quasi-particle framework, an excluded volume correction has been included for the nuclear equation of state at suprasaturation density which takes into account the finite size of the nucleons. Results: These novel aspects, excluded volume in the hadronic phase and the higher-order repulsive interactions in the quark phase, lead to a strong first-order phase transition with large latent heat, i.e. the energy-density jump at the phase transition, which fulfils a criterion for a disconnected third-family branch of compact stars in the mass-radius relationship. These twin stars appear at high masses ($\sim$ 2 M$_\odot$) that are relevant for current observations of high-mass pulsars. Conclusions: This analysis offers a unique possibility by radius observations of compact stars to probe the QCD phase diagram at zero temperature and large chemical potential and even to support the existence of a critical point in the QCD phase diagram.
Introduction
The physics of compact stars is an active subject of modern nuclear astrophysics research, since it allows us to probe the state of matter at conditions which are currently inaccessible in high-energy collider facilities: extremes of baryon density at low temperature. It provides one of the strongest observational constraints on the zero-temperature equation of state (EoS) through the recent high-precision mass measurements of high-mass pulsars by Demorest et al. (2010) and Antoniadis et al. (2013). Any scenario for the existence of exotic matter and a phase transition at high density which tends to soften the EoS must be abandoned unless it provides stable compact star configurations with a mass not less than 2 M⊙. Still, there are several possibilities for which it is hard or impossible to detect quark matter in compact stars, namely when: a) the phase transition occurs at too high densities, exceeding the central density of the maximum mass configuration, b) the transition occurs only very close to the maximum mass, beyond the limit of masses for observed high-mass pulsars, or when c) the transition is a crossover or very close to it, so that the hybrid star characteristics are indistinguishable from those of pure neutron stars. The latter case has been dubbed the "masquerade" problem (Alford et al., 2005). It seems to be characteristic of the use of modern chiral quark models with vector meson interactions (Bratovic et al., 2013), which are very similar in their behaviour to standard nuclear EoS like APR (Akmal, Pandharipande & Ravenhall, 1998) or DBHF (Fuchs, 2006) in the transition region (see, e.g., Klähn et al. (2007), Klähn et al. (2013)). However, the opposite case is also possible: when the phase transition to quark matter is accompanied by a large enough binding energy release, corresponding to a jump in density and thus a compactification of the star, an instability may be triggered which eventually results in the emergence of a third family of compact stellar objects, in addition to white dwarfs and neutron stars. The existence of such a branch of supercompact stellar objects, disconnected from the neutron star sequence, has long been the subject of speculation in different contexts related to phase transitions in dense matter (c.f. Gerlach, 1968; Kämpfer, 1981; Schertler et al., 2000; Glendenning & Kettner, 2000). This phenomenon has been studied in connection with the appearance of pion and kaon condensates in Kämpfer (1981) and Banik & Bandyopadhyay (2001), respectively, as well as hyperons in Schaffner-Bielich et al. (2002) and quark matter in Glendenning & Kettner (2000), Schertler et al. (2000), Fraga et al. (2002), Banik & Bandyopadhyay (2003), Agrawal & Dhiman (2009), and Agrawal (2010). All these early studies, however, could be ruled out by the recent observation of high-mass pulsars. The question arose whether the twin star phenomenon, as an indicator for a first order phase transition, could also concern compact stars with masses as high as 2 M⊙. If answered positively, the observation of significantly different radii for high-mass pulsars of the same mass would allow conclusions also for isospin symmetric matter as probed in heavy-ion collisions.
The ongoing heavy-ion programs at the collider facilities at RHIC (US) and the LHC at CERN in Geneva (Switzerland), combined with the success of modern lattice QCD, have led to the result that the nature of the QCD transition at vanishing chemical potential and finite temperature is a crossover. The physics of the QCD phase diagram at finite chemical potential and finite temperature will be the subject of research at the future high-energy facilities FAIR in Darmstadt (Germany) and NICA in Dubna (Russia), where one of the main goals is to find a critical endpoint (CEP) of first-order transitions or indications for a first order phase transition at high baryon density, like signatures of a quark-hadron mixed phase. In general, a phase transition in isospin asymmetric stellar matter is directly related to the corresponding phase transition in symmetric matter, and therefore relevant to the understanding of the QCD phase diagram (c.f. Fukushima & Sasaki, 2013; Fukushima, 2014). Since increasing the isospin asymmetry would result in lowering the temperature of the CEP to zero (Ohnishi et al., 2011), the detection of first order phase transition signals in zero temperature asymmetric compact star matter, like the mass twin phenomenon, would thus prove the existence of at least one CEP in the QCD phase diagram (Alvarez-Castillo & Blaschke, 2013).
The major ingredient of compact star physics is the zero-temperature EoS in β-equilibrium (for recent works, c.f. Steiner et al., 2013; Masuda et al., 2013; Orsaria et al., 2013; Alford et al., 2013; Hebeler et al., 2013; Inoue et al., 2013; Klähn et al., 2013; Fraga et al., 2013; Yasutake et al., 2014; Yamamoto et al., 2014). More precisely, a hybrid EoS can be decomposed into three parts: (a) low-density nuclear matter, (b) high-density exotic matter such as hyperons or quarks, and (c) the phase transition region between the low- and high-density parts. The conditions for the transition depend on details of the underlying microscopic descriptions of matter. For the EoS to yield a third family and/or the twin phenomenon, the following two conditions should be fulfilled (for details, see Haensel et al., 2007; Read et al., 2009; Zdunik & Haensel, 2013; Alford et al., 2013): (1) The latent heat of the phase transition should fulfill a constraint ∆ε > ∆ε_min (Haensel et al., 2007; Zdunik & Haensel, 2013; Alford et al., 2013), where ∆ε_min ∼ 0.6 ε_crit for the schematic hybrid EoS investigated in Alford et al. (2013), with ε_crit being the critical energy density for the onset of the transition.
(2) The high density part of the EoS should be sufficiently stiff.
The third family of compact objects is attained via an unstable branch, which can be realized by a soft EoS in the transition region, ensured by the condition (1). The condition (2) is necessary for the core matter to withstand the pressure from the hadronic shell and thus to provide stability for the new, disconnected hybrid star branch.
Confirming the existence of high-mass twins represents an outstanding challenge for observational campaigns developing precise radius measurements for compact stellar objects (c.f. Mignani et al., 2012; Gendreau et al., 2012; Miller, 2013). If detected, the twin phenomenon would be a compelling astrophysical signature of a strong first-order phase transition in the QCD phase diagram at zero temperature, and thus strong evidence for the presence of at least one critical end point. By invoking that the high-mass pulsars PSR J1614-2230 (Demorest et al., 2010) and PSR J0348+0432 (Antoniadis et al., 2013), with their precisely measured masses of 1.97 ± 0.04 M⊙ and 2.01 ± 0.04 M⊙, respectively, could be such twin stars, we predict in this work that their radii should differ by at least about 1 km (depending on the model details). It remains to be shown whether these values are within the capabilities of future experimental missions like the Neutron Star Interior Composition Explorer (NICER), the Nuclear Spectroscopic Telescope Array (NUSTAR) and/or the Square Kilometer Array (SKA).
In this work we present a microscopically founded example for the class of hybrid EoS that fulfill criteria (1) and (2) for the occurrence of a third family of compact stars based on a first-order phase transition from hadronic matter to quark matter. In our case, the nuclear matter phase is described by a relativistic mean-field (RMF) model with density-dependent meson-nucleon couplings introduced in Typel & Wolter (1999), using the DD2 parametrization from Typel et al. (2010) with finite-volume modifications. The quark matter phase is given by a Nambu-Jona-Lasinio model (NJL) with higher-order quark interactions, as introduced in Benic (2014). For the phase transition between the hadronic and quark matter phases we apply a Maxwell construction. The resulting quark-hadron hybrid EoS allows for massive twin star configurations whose gravitational masses are in agreement with the present 2 M⊙ constraint set by Demorest et al. (2010) and Antoniadis et al. (2013).
The paper is organized as follows: in Sect. 2 we introduce our new quark-hadron hybrid EoS and in Sect. 3 we discuss characteristic features of it such as excluded volume, mass-radius relations and twin configurations. The paper closes with the summary in Sect. 4.
Model Equation of State for massive twin phenomenon
For quark matter at high densities we employ the recently proposed NJL-based model of Benic (2014). For the low-density region we use the nuclear RMF EoS (Typel & Wolter, 1999) with the well-calibrated DD2 parametrization of Typel et al. (2010). In order to maximize the latent heat at the phase transition, we correct the standard DD2 EoS by accounting for an excluded volume of the nucleons that results from Pauli blocking due to their quark substructure. The latter aspect is introduced in the following subsection.
Excluded nucleon volume in the hadronic Equation of State
The composite nature of nucleons can be modeled by the excluded-volume mechanism as discussed, e.g., by Rischke et al. (1991) in the context of RMF models. Considering nucleons as hard spheres of volume V_nuc, the available volume V_av for the motion of nucleons is only a fraction Φ = V_av/V of the total volume V of the system. The available volume fraction can be written in terms of the nucleon number densities n_i and a volume parameter v, assuming identical radii r_nuc = r_n = r_p of neutrons and protons. The total hadronic pressure and energy density are given by relations with contributions from nucleons and mesons; they depend on the nucleon chemical potentials µ_n and µ_p. The nucleonic pressures are given in terms of the nucleon number densities and scalar densities, which contain the energies as well as the Fermi momenta k_i and effective masses m*_i = m_i − S_i. The vector potentials V_i, scalar potentials S_i and the mesonic contribution p_mes to the total pressure have the usual form of RMF models with density-dependent couplings (for more details, see Typel & Wolter, 1999). In conventional RMF models the in-medium nucleon-nucleon interaction is modeled by the exchange of (σ, ω and ρ) mesons between pointlike nucleons. The excluded volume causes an additional effective repulsion between the nucleons. Hence, the parameters of the RMF model have to be refitted in order to retain the characteristic properties of nuclear matter. The parameters of the nucleon-meson couplings in the DD2 RMF model were determined by fitting to properties of finite nuclei (for details, see Typel et al., 2010). This approach leads to very satisfactory results all over the nuclear chart and gives nuclear matter parameters that are consistent with current experimental constraints. In Table 1 the parameters of the new parametrization DD2-EV with excluded-volume effects are given, assuming a volume parameter v = (1/0.35) fm³. This corresponds to a nucleon radius of r_nuc ≈ 0.55 fm. See Typel et al. (2010) for the definition of the quantities in the table and their relation to the coupling functions. The saturation density n_sat and the particle masses are not changed compared to the original DD2 parametrization. The DD2-EV parameters were determined such that the binding energy per nucleon E/A, the compressibility K, the symmetry energy J and the symmetry energy slope parameter L are also identical to those of the DD2 effective interaction.
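In the standard linear excluded-volume ansatz, consistent with the limiting density n_max = 1/v used below and with the quoted correspondence between v = (1/0.35) fm³ and r_nuc ≈ 0.55 fm, the available volume fraction and the hard-sphere volume parameter can be written as:

```latex
\Phi \;=\; \frac{V_{\mathrm{av}}}{V} \;=\; 1 - v\,(n_n + n_p)\,, \qquad
v \;=\; \frac{16\pi}{3}\, r_{\mathrm{nuc}}^{3}\,.
```

This is a reconstruction from the surrounding definitions rather than the paper's numbered equations; it has Φ → 0, with the pressure diverging, as the total nucleon density approaches n_max = 1/v.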
For the hadronic EoS we use the original DD2 parametrization without excluded-volume effects at baryon densities below the saturation density n_sat of the model, since these densities are well tested in finite-nucleus calculations. At densities above n_sat we replace the DD2 model by the DD2-EV parametrization with excluded-volume corrections. The maximum baryon density that can be described by this model is n_max = 1/v = 0.35 fm⁻³, due to the choice of the volume parameter v. At this density the pressure diverges, and the transition to quark matter has to occur below n_max. In stellar matter the usual electron contributions to the pressure and energy density are added to the hadronic part. Requiring charge neutrality, i.e. n_e = n_p, and β equilibrium, i.e. µ_n = µ_p + µ_e, the pressure and energy density become functions of a single quantity, the baryon chemical potential µ_B = µ_n.
NJL model with 8-quark interactions
In order to describe cold quark matter that is significantly stiffer than the ideal gas, we employ the recently developed generalization of the NJL model by Benic (2014), which includes 8-quark interactions in both the Dirac scalar and vector channels (NJL8). The mean-field thermodynamic potential of the 2-flavor NJL8 model is given in terms of the dynamical quark masses and shifted chemical potentials entering the quasi-particle energy-momentum relation; expressions for M_d and µ_d are obtained by cyclic permutation of flavor indices in (11) and (12), respectively. The model parameters are the 4-quark scalar and vector couplings g_20 and g_02, the 8-quark scalar and vector couplings g_40 and g_04, as well as the current quark mass m and the momentum cutoff Λ, which is placed on the divergent vacuum energy. The constant Ω_0 ensures zero pressure in the vacuum.
The model is solved by finding the extremum of the thermodynamic potential (9) with respect to the mean fields, and the pressure is obtained from the relation p = −Ω.
In this work we use the parameter set of Kashiwa et al. (2007): g_20 = 2.104, g_40 = 3.069, m = 5.5 MeV and Λ = 631.5 MeV. Furthermore, the vector channel strengths are quantified by the dimensionless ratios η_2 and η_4. Here, we will concentrate on the parameter space where η_2 is small and use η_4 to control the stiffness of the EoS. Note that small η_2 ensures an early onset of quark matter, i.e. it refers to low densities for the onset of quark matter (depending on the stiffness of the nuclear EoS at the corresponding densities). Within this approach we can calculate the partial pressures p_f and densities n_f = ∂p_f/∂µ_f for f = (u, d). In neutron stars, neutrino-less β-equilibrium is typically fulfilled; the corresponding equilibrium weak process in nuclear matter is the nuclear β-decay, n ⇆ p + e⁻ + ν̄_e. In quark matter, it is replaced by d ⇆ u + e⁻ + ν̄_e, and hence the following relation holds between the contributing chemical potentials: µ_d = µ_u + µ_e (the neutrino escapes from the star, so its chemical potential is set to zero). Moreover, local charge neutrality is imposed. The total pressure in the quark phase is then given by the sum of the partial pressures, p = p_u + p_d + p_e, with electron pressure p_e. The latter is calculated for a relativistic, degenerate Fermi gas. Moreover, the baryon chemical potential and the baryon density in the quark phase (Q) and hadronic phase (H) are obtained from the quark chemical potentials and densities and from the neutron and proton chemical potentials (µ_n, µ_p) and densities (n_n, n_p), respectively. When no confusion arises, the indices Q and H will be omitted for simplicity. For the construction of the phase transition, we apply Maxwell's condition in the pressure-chemical potential plane, i.e. the pressures in the quark and hadronic phases must be equal, in order to ensure thermodynamic consistency. This approach is tantamount to assuming a large surface tension at the hadron-quark interface. The critical baryon chemical potential is obtained by matching the pressures from the hadronic (DD2-EV) and quark (NJL8) EoSs. With this setup, a first-order phase transition is obtained by construction, with a significant jump in baryon density and energy density, as illustrated in Fig. 1. It will be further discussed in the subsequent Sect. 3.
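For a two-flavor quark phase with electrons, the neutrality condition and the phase-wise baryon chemical potentials and densities take the standard form; the following is a sketch consistent with the definitions above, not the paper's numbered equations:

```latex
\frac{2}{3}\,n_u - \frac{1}{3}\,n_d - n_e = 0\,, \qquad
\mu_B^{(Q)} = \mu_u + 2\mu_d\,, \quad n_B^{(Q)} = \tfrac{1}{3}\,(n_u + n_d)\,, \qquad
\mu_B^{(H)} = \mu_n\,, \quad n_B^{(H)} = n_n + n_p\,.
```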
Results
The model parameters used to calculate the hybrid EoS are as follows. We modify the DD2 EoS with the excluded volume mechanism as described in Subsect. 2.1 (DD2-EV). The high-density part is given by the NJL8 EoS (9), where we use η_2 = 0.08 and consider η_4 as a free parameter.
The rationale behind our choice of a low value for η_2 and the particular value for v is at this stage purely phenomenological. The parameter η_2 controls both the onset of quark matter and the stiffness of the quark EoS. Note that a measure for the stiffness (or softness) of the EoS is the speed of sound c_s, defined via c_s² = dp/dε. When comparing two EoS, the stiffer one has the steeper slope of p(ε), while its slope of n(µ_B) is lower. In the present model, a larger value for η_2 would result in more similar quark and hadronic EoS, and hence disfavor the anticipated condition of maximized latent heat at the phase transition. Incidentally, since a small value of η_2 ensures a low onset of quark matter, the nuclear EoS is insensitive to the detailed behavior of the Φ function close to the maximum density n_max. Thus we use the traditional linear dependence (1) for the available volume fraction.
Hybrid Equation of State
The new quark-hadron hybrid EoS, based on DD2-EV and NJL8, is shown in the upper panel of Fig. 2, illustrating the pressure-energy density plane for fixed η_2 = 0.08 and varying vector-coupling parameter η_4. Note that with increasing vector coupling parameters η_2 and η_4 the sound speed rises, which is shown in the lower panel of Fig. 2. We have checked that in all our cases the causality limit is reached only at energy densities beyond which the mass-radius sequences turn unstable. Exploring the available parameter spaces in both the hadronic and quark matter phases, we have found the latent heat to be maximized by the combination of two aspects: (a) taking into account finite-size effects of the nucleons using the excluded volume, and (b) applying small values of η_2 for the NJL8 quark-matter model. This is illustrated in Fig. 1, where we compare the phase transition constructions from DD2 (blue) and DD2-EV (light blue) to NJL8 (red) with η_2 = 0.08 and η_4 = 0.0. The upper panel of Fig. 1 shows pressure vs. chemical potential, from which it becomes clear that our excluded volume approach reduces the critical chemical potential for the onset of quark matter. Furthermore, it also increases the difference between the slopes of the pressure curves for the hadronic and quark EoS at the phase transition. The latter aspect results in an increased latent heat, ∆ε, which is shown in the bottom panel of Fig. 1. For the NJL8 parameters explored here (η_2 = 0.08, η_4 = 0.0), we find ∆ε ≃ 0.34 ε_crit for the transition with DD2 and ∆ε ≃ 0.81 ε_crit for the transition with DD2-EV.
With the given choice of nuclear matter parameters, the excluded volume correction generates a stiff nuclear EoS at suprasaturation densities, close to the limit of causality, i.e. c²_H ≃ 1. Furthermore, the choice of small η_4 ensures a soft quark matter EoS at the phase transition densities, i.e. c²_Q ≲ 1/3. The resulting maximized jump in energy density at the phase transition from DD2-EV to NJL8 is illustrated in Fig. 2 for the parameter range η_4 = 0.0−30.0, for which we obtain ∆ε ≃ (0.81−0.70) ε_crit.
Our approach for the construction of a quark-hadron phase transition with large latent heat extends beyond the phenomenological model of Zdunik & Haensel (2013) and Alford et al. (2013), known as ZHAHP. In their approach, the latent heat ∆ε is a free parameter and the quark EoS is defined by a constant speed of sound c²_Q. Nevertheless, in providing, as one of the major requirements for the existence of the third family, the rule of thumb that the latent heat be around ∆ε ≃ 0.6 ε_crit, the ZHAHP approach proves to be extremely practical. However, it is unphysical to treat c²_Q and ∆ε as mutually independent parameters. Within a microscopic description of the EoS both quantities are always correlated: the relative stiffness of the EoS between the hadronic and quark phases defines the latent heat, with all quantities evaluated at the critical chemical potential of the transition, µ_B = µ_B^crit.
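As a sketch of this correlation (a standard zero-temperature identity, not necessarily the paper's exact formula): since the pressure is continuous across a Maxwell construction and n_B = dp/dµ_B, the jump in energy density follows, up to the small electron contribution, from

```latex
\Delta\varepsilon \;\simeq\; \mu_B^{\mathrm{crit}}\Bigl(n_B^{(Q)} - n_B^{(H)}\Bigr)
\;=\; \mu_B^{\mathrm{crit}}\left[\left(\frac{dp}{d\mu_B}\right)_{\!Q} - \left(\frac{dp}{d\mu_B}\right)_{\!H}\right]_{\mu_B=\mu_B^{\mathrm{crit}}},
```

so a flat n_B(µ_B) (stiff EoS) on the hadronic side together with a steep one (soft EoS) on the quark side maximizes ∆ε.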
Mass-radius relationship
Based on our novel quark-hadron hybrid EoS we calculate the mass-radius relations from solutions of the Tolman-Oppenheimer-Volkoff (TOV) equations. For a selection of quark matter parameters, i.e. constant η_2 = 0.08 and varying η_4 = 0.0−30.0, we show the resulting mass-radius curves in Fig. 3. Horizontal colored bands mark the constraints from high-precision mass measurements of the high-mass pulsars PSR J1614-2230 and PSR J0348+0432 by Demorest et al. (2010) and Antoniadis et al. (2013), respectively. In Fig. 3, the green shaded vertical bands mark the results of the mass-radius analysis of the millisecond pulsar PSR J0437-4715 by Bogdanov (2013), with 1σ, 2σ and 3σ confidence levels, assuming a mass of 1.76 M⊙. These data form the basis of a new Bayesian analysis of constraints for hybrid EoS parametrizations (Alvarez-Castillo et al., 2014), which ought to supersede the first study of this kind by Steiner et al. (2010). In addition, we show data from the X-ray spin phase-resolved spectroscopic study of the thermally emitting isolated neutron star RX J1856.5-3754 by Hambaryan et al. (2014), indicating potential compactness constraints. The solid brown line in Fig. 3 corresponds to the purely hadronic EoS DD2, i.e. without excluded volume corrections, for comparison with DD2-EV. The excluded volume approach introduced here results in large neutron star radii, R ≃ 14.75 km for M = 1.5 M⊙, in comparison to DD2 (R ≃ 13 km). This can be understood in terms of the significant stiffening of the nuclear EoS above saturation density (n_sat = 0.149 fm⁻³). At the phase transition the stellar configuration leaves the stable hadronic branch for an unstable branch, marked by dotted lines in Fig. 3. The mass-radius coordinates where this happens are defined by the critical chemical potential µ_B^crit, or density n_B^crit, of the corresponding hybrid EoS. Note that for all hybrid EoS explored in this study, the critical density is n_B^crit ≃ 1.5 n_sat. In more detail, the initially stable hadronic configuration at µ_B^crit grows, by a tiny amount of mass (∼ 5 × 10⁻⁴ M⊙) and at constant radius, into a still stable hybrid branch. We estimate the size of the resulting quark core to be ∼ 80 cm, with significantly increased density. Only after that does the configuration turn onto the unstable branch, during which the quark core grows. The unstable branch recovers back to another stable branch due to the strongly repulsive 8-quark interaction of the NJL8 EoS at high densities.
(Fig. 3 caption fragment: ...Antoniadis et al. (2013). For comparison with DD2-EV, the mass-radius curve for the hadronic EoS DD2 is shown in addition (solid brown line). Furthermore, we show results from the mass-radius analysis of the millisecond pulsar PSR J0437-4715 by Bogdanov (2013), and constraints from the X-ray spectroscopic study of the thermally emitting isolated neutron star RX J1856.5-3754 by Hambaryan et al. (2014).)
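Schematically, the procedure is: pick a central pressure, integrate the TOV equations outward to the surface, and repeat over central pressures to trace out the sequence. The sketch below (geometrized units G = c = 1) uses a simple Γ = 2 polytrope as a stand-in for ε(p); the constants, step size, and the polytrope itself are illustrative assumptions rather than the paper's DD2-EV + NJL8 input.

```python
# Minimal TOV integration sketch (geometrized units G = c = 1).
import numpy as np

def eps_of_p(p, K=100.0, gamma=2.0):
    """Toy polytropic EoS p = K*rho**gamma; replace with a table lookup."""
    rho = (p / K) ** (1.0 / gamma)
    return rho + p / (gamma - 1.0)        # rest-mass + internal energy density

def tov_mass_radius(p_c, dr=1e-3):
    """Euler-integrate the TOV equations outward from central pressure p_c."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:                # stop near the surface, p -> 0
        e = eps_of_p(p)
        dpdr = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        m += 4.0 * np.pi * r**2 * e * dr  # enclosed gravitational mass
        p += dpdr * dr
        r += dr
    return r, m                           # stellar radius and mass

# Sweep central pressures to trace out a mass-radius sequence.
for p_c in (1e-4, 3e-4, 1e-3):
    R, M = tov_mass_radius(p_c)
    print(f"p_c = {p_c:.0e}: R = {R:.2f}, M = {M:.3f} (geometrized units)")
```

In practice one would replace eps_of_p with interpolation over the tabulated hybrid EoS and use a higher-order integrator; stable and unstable branches are then read off from the sign of dM/dp_c along the sequence.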
Our selection of nuclear and quark matter parameters allows not only for high-mass hadronic and quark configurations, in agreement with the 2 M⊙ pulsar data from Demorest et al. (2010) and Antoniadis et al. (2013), but also for a consistent transition from the hadronic branch to the quark-hadron hybrid branch. We identify the latter as the third family of compact stellar objects, with maximum masses in the range M_max = 1.92−2.30 M⊙, which lie above those of the underlying hadronic model DD2-EV. Moreover, we confirm that all hybrid EoS fulfill the condition of causality, i.e. the maximum speed of sound of the hybrid star configurations is in the range c²_Q = 0.34−0.82. In addition, our results are in agreement with the mass-radius analysis of the millisecond pulsar PSR J0437-4715 by Bogdanov (2013) within the 3σ confidence level (see the green vertical bands in Fig. 3), as well as with the compactness study of the isolated neutron star RX J1856.5-3754 by Hambaryan et al. (2014) (see the yellow box in Fig. 3), where radii of around 14−18 km for masses of 1.5−1.8 M⊙ were found within the 1σ confidence level (see also Trümper, 2011).
Radii difference of the high-mass twins
The most striking consequence of a strong first order phase transition in compact star matter is the possible existence of a third family of compact stars: a branch of stable hybrid star configurations in the mass-radius diagram, disconnected from the second-family branch of ordinary hadronic stars, entailing the twin phenomenon. For a certain range of masses there exist pairs of stars (twins) with the same gravitational mass but different internal structure. In order to quantify the unlikeness of the twins, as a measure of the pronouncedness of the phase transition, we consider the radius difference δR = R_max − R_twin between the radius at the maximum mass M_max on the hadronic branch and that of the corresponding mass twin on the third-family branch of hybrid star configurations. In Table 2, we list δR at fixed η_2 = 0.08 for selected values of the dimensionless eight-quark interaction strength η_4 in the range where it allows for the twin phenomenon (see also Fig. 3 for comparison). The largest radius difference is obtained for η_4 = 0.0, however with M below the current maximum mass constraint of Demorest et al. (2010) and Antoniadis et al. (2013). In agreement with these latter constraints are the parametrizations η_4 = 5.0−30.0, with δR = 1.16−0.13 km. The reduced radius difference for increasing η_4 can be understood not only from the stiffening of the quark matter EoS at high densities but also from the reduced latent heat ∆ε, i.e. the reduced jump in energy density going from the hadronic EoS to the hybrid EoS (see Fig. 2), also listed in Table 2. From the required condition ∆ε > 0.6 ε_crit, it becomes clear from Table 2 that twin configurations are only obtained for η_4 = 0.0−30.0. For η_4 ≳ 30.0 the phase transition to quark matter proceeds without developing a disconnected third-family branch: all configurations on this sequence up to the maximum mass are stable (see also Fig. 3).
Conclusions
Compact stars harbor central densities in excess of nuclear saturation density, conditions which are currently inaccessible in nuclear high-energy experiments. Their study contributes to a key direction of research in nuclear and hadron physics, i.e. the possible transition from a state of matter with nuclear degrees of freedom to a deconfined state with quark and gluon degrees of freedom. Despite the success of lattice QCD at vanishing chemical potential and high temperatures in identifying the nature of the transition as a crossover, for finite chemical potentials only phenomenological models can be used (c.f. Lattimer & Prakash, 2010; Klähn et al., 2013; Buballa et al., 2014, and references therein). Such models, in particular with a phase transition from nuclear to quark matter, have also been very useful in astrophysical applications, e.g., in simulations of protoneutron star cooling (c.f. Pons et al., 2001; Popov et al., 2006) and simulations of core-collapse supernovae (c.f. Sagert et al., 2009; Fischer et al., 2011; Nakazato et al., 2014). It is therefore of paramount interest to develop quark-hadron hybrid models from which it is possible to deduce observables that allow us to further constrain the yet highly uncertain QCD phase diagram, e.g., the possible existence of a critical point. Such an identification will be possible with the discovery of a first-order phase transition at low temperatures and large chemical potential, conditions which refer to the state of matter in compact star interiors in β-equilibrium.
In this paper, we took on this challenge and developed a novel quark-hadron hybrid EoS. It is based on the nuclear EoS DD2, a relativistic mean-field model with density-dependent couplings. While such models treat nucleons as pointlike quasi-particles, here we additionally take finite-size effects of the nucleons into account via an excluded-volume approach above nuclear saturation density. The excluded volume correction introduced here is an attempt to account for Pauli blocking at the quark level. However, at its current stage it is still quite basic and will be improved in an upcoming study. For the quark matter EoS we apply the NJL model formalism, including higher-order repulsive quark interactions. The latter become dominant in particular at high densities. Note that the current status of research on the vector interactions in quark matter remains unsettled (for details, see, e.g., Steinheimer & Schramm, 2014; Sugano et al., 2014), and their impact on the possible existence of the CEP remains an open question (see Bratovic et al., 2013; Contrera et al., 2014; Hell et al., 2013, and references therein). The quark-hadron phase transition has been constructed applying the Maxwell criterion, which results in a strong first-order phase transition. The excluded volume on the hadronic side, in combination with the stiff quark EoS, results not only in an early onset of quark matter but also in a large latent heat at the phase transition.
From our novel hybrid EoS, which we provide to the community for different values of the higher-order quark interaction strength, we have constructed the mass-radius relations based on TOV solutions. Our main findings can be summarized as follows: (1) The excluded volume for the high-density nuclear EoS results in large radii for intermediate-mass neutron stars.
(2) The transition to quark matter results in a first stable hybrid configuration with a tiny quark core, which then turns onto the unstable branch.
(3) The unstable branch recovers back to a stable hybrid branch due to the strong repulsive higher-order quark interactions, which we identify as a third family of compact stars. For all configurations explored in this study, we find that the maximum masses belong to the stable hybrid branch and that all EoS remain causal. Moreover, most of our parameter choices fulfil a variety of current constraints on mass-radius relations, such as large maximum masses around 2 M⊙ (Demorest et al., 2010; Antoniadis et al., 2013) and radii in the range of 14-17 km for canonical compact objects of M ≃ 1.7 M⊙ (Bogdanov, 2013; Hambaryan et al., 2014). From an observational perspective, a particularly interesting consequence of a third family of compact objects is the twin phenomenon, where two stars of the same mass have different radii. In the present paper, we even found high-mass twins with M ≃ 2 M⊙ and radius differences of the order of about 1 km. It remains to be shown whether future surveys devoted to neutron star radius determinations, such as the missions NICER, NUSTAR and the SKA, will have the required sensitivity of less than 1 km to resolve the twin phenomenon. This would, in turn, provide a unique signature of a first-order phase transition to exotic superdense matter in compact star interiors.
The aspects discussed in this paper may have important consequences when taken into account consistently in dynamical simulations of supernova collapse and explosion, binary mergers and so on, where during the phase transition the gain in gravitational binding energy becomes available to the system as heat, which in turn can trigger the local production of neutrinos due to the different β-equilibrium condition obtained. Furthermore, the current work improves on the previous phenomenological studies of Alford et al. (2013), where the latent heat and the speed of sound were considered as mutually independent parameters. | 2015-03-30T06:23:02.000Z | 2014-11-11T00:00:00.000 | {
"year": 2014,
"sha1": "de9b4dbf2e4e818f49ebe238ac7e94ca9a81b26c",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2015/05/aa25318-14.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "de9b4dbf2e4e818f49ebe238ac7e94ca9a81b26c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225178349 | pes2o/s2orc | v3-fos-license | Striational Muscle Antibody Positivity Heralding a Diagnosis of Autoimmune Encephalitis Associated with Epiglottic Squamous Cell Carcinoma: A Case Report
A 59-year-old woman presented to our emergency room with confusion and spells of loss of awareness over four weeks, with malaise, weakness, gait difficulties, and a 30 lb weight loss over one year. Brain imaging was unremarkable. EEGs revealed diffuse slowing, triphasic GPDs, and left temporal epileptiform discharges. Spinal fluid showed an elevated protein count. She received a working diagnosis of autoimmune encephalitis and received steroids and antiepileptic medications. She experienced an improvement in symptoms and went home. Follow-up revealed elevated striational muscle antibodies and enhancement of spinal nerve roots, suggesting an autoimmune neurological syndrome. She received intravenous immune globulin (IVIg) therapy and had periods of improvement, followed by worsening a few months after each dose of IVIg. She presented to the hospital a year later with worsening dysphagia and an endoscopy showing an epiglottic mass. Pathology confirmed squamous cell carcinoma, and she received chemotherapy and radiation. We wish to share this rare case of autoimmune encephalitis presenting with only striational antibodies as a heralding sign of a tumor. We postulate that, in the correct context, the presence of striational antibodies alone, despite low titers, may support a diagnosis of autoimmune encephalitis from an underlying malignancy.
She complained of generalized weakness and fatigue, which had been slowly worsening over the last year. She also reported difficulty with walking and endorsed back pain in addition to diffuse muscle weakness. She had a remote history of breast cancer 18 years earlier, for which she had undergone resection and chemotherapy.
MRI imaging of her brain was unremarkable. Video EEG testing revealed diffuse polymorphic slowing with intermittent generalized rhythmic delta activity (GRDA), in addition to triphasic generalized periodic discharges (GPDs) and occasional left temporal epileptiform discharges (Figures 1, 2, 3). These findings were felt to be suggestive of diffuse cerebral dysfunction as well as new-onset epileptogenic potential. Spinal fluid analysis revealed an elevated protein level of 61 mg/dL (normal range 15-45 mg/dL). CSF cytology was negative for malignancy, and there were no unique oligoclonal bands in the CSF either. CAT scans of her body did not reveal a malignancy. ESR and CRP were elevated at 84 and 18, respectively. We diagnosed her with possible autoimmune encephalitis, given her clinical context with the EEG and CSF findings. She was treated with intravenous steroid therapy and antiepileptic medications (levetiracetam and lacosamide) and returned to her normal mental baseline in a few days. She was discharged home and scheduled for outpatient follow-up.
Further workup revealed the presence of striational muscle antibodies at a titer of 1:480 in the blood. The CSF autoimmune panel was negative, and other auto-antibodies were negative as well. MRI imaging of her spine showed subtle enhancement of her spinal nerve roots, suggestive of a possible inflammatory process. EMG/NCV testing with repetitive nerve stimulation revealed a chronic ulnar neuropathy but no other findings. Body imaging and autoimmune antibody evaluation did not capture any other abnormalities.
As her symptoms returned a few weeks later, we decided to initiate therapy with intravenous immune globulin (IVIg) for a probable autoimmune neurological syndrome: autoimmune encephalitis without any features of peripheral neuromuscular hyperexcitability. She received three courses of IVIg over the next year and had significant improvement in her symptoms, both mental (impaired awareness, confusion) and physical (weakness, gait difficulty). She also did not have any noticeable seizures and remained on levetiracetam and lacosamide therapy during this time.
One year after her first presentation, she presented to her PCP with complaints of worsening dysphagia. She was referred to the ENT doctors for evaluation and underwent an endoscopy, which discovered an epiglottic mass of about 1 cm in diameter. We performed a biopsy, and histopathology confirmed a diagnosis of squamous cell carcinoma of the epiglottis (Figures 4, 5). Her 40-pack-year smoking history was contributory to her malignancy.
She saw oncologists and received chemotherapy and radiation. She remains stable, continues to receive antiepileptic medications and immune therapy for her symptoms, and follows up with us in the epilepsy clinic.
Discussion
Autoimmune encephalitis is an uncommon presentation of malignancy. Presenting symptoms are often diverse, including, but not limited to, memory loss, personality changes, seizures, cognitive impairment, and overall loss of function 1 . The presentation is often subacute and subtle, with symptoms occurring insidiously over weeks to months before family members or friends note a frank presentation. Steroid therapy is often therapeutic, produces dramatic improvement, and frequently serves as both a diagnostic and a therapeutic intervention in such patients. Our patient's presentation, while consistent with many of the previously described features of autoimmune encephalitis, was still nonspecific.
Isolated striational antibody positivity is not a classic finding in patients with autoimmune encephalitis. The antibody is typically associated with peripheral hyperexcitability syndromes, like Isaac's or Morvan's syndromes, or with myasthenia gravis. It is seen in patients with autoimmune encephalitis in coexistence with neuromuscular syndromes, especially with elevated titers of other autoantibodies 2, 3, 4 . Titers of striational autoantibodies are also felt to play a role, with lower titers usually seen to represent evidence of autoimmunity rather than the presence of an underlying malignancy, especially in the absence of other autoantibodies 5 . A striational antibody titer of at least 1:7680 is reportedly suggestive of an underlying malignancy, especially in conjunction with antibodies like VGKC complex, GAD 65, or others. A low titer of striational antibodies is more often attributable to a prior malignancy than to a current active one 5 . Our patient had a relatively low titer of striational antibodies at 1:480, which, coupled with her previous diagnosis of breast cancer, did not initially support the probability of a newer malignancy. The absence of a malignancy on chest and abdominal imaging further suggested a lower possibility of a tumor, since lung and thymus tumors are most commonly associated with paraneoplastic syndromes 3, 5, 6 . EMG/NCV testing ruled out peripheral hyperexcitability as well, making autoimmune encephalitis less likely 3,4 .
We approached the diagnostic dilemma and treatment planning under a presumptive diagnosis of possible autoimmune encephalitis. Our patient had a subacute onset of symptoms, inclusive of encephalopathy and confusion, new-onset seizures, and a reasonable exclusion of other causes, meeting all three criteria for a diagnosis of possible autoimmune encephalitis (Table 1) 7 . We decided to pursue empirical therapy with intravenous steroids due to the potential benefits and the relatively low probability of risk in this clinical context. Our patient responded to steroid therapy, which seemed to strengthen our presumptive diagnosis of autoimmune encephalitis. We do concede that other conditions, like lymphomas, would also respond to steroid therapy. Still, we were reasonably confident that such diagnoses could be excluded based on the negative results seen on our extensive testing. Immune globulin therapy represents the standard of care, and she had responses to multiple courses of IVIg followed by a slow progression of symptoms over weeks to months after treatment, supporting our working diagnosis of autoimmune encephalitis. Her EEG tests did show diffuse slowing with triphasic waves and left temporal epileptiform discharges, findings that would favor diffuse cerebral dysfunction and new-onset seizures. While these findings were not specific for autoimmune encephalitis, they did add supportive evidence to the diagnosis of possible autoimmune encephalitis.
Table 1: Diagnostic criteria for possible autoimmune encephalitis 7 . Diagnosis can be made when all three of the following criteria have been met:
1) Subacute onset (rapid progression of less than 3 months) of working memory deficits (short-term memory loss), altered mental status, or psychiatric symptoms
2) At least one of the following:
• New focal CNS findings
• Seizures not explained by a previously known seizure disorder
• CSF pleocytosis (white blood cell count of more than five cells per mm³)
• MRI features suggestive of encephalitis
3) Reasonable exclusion of alternative causes
Epiglottic tumors are not commonly associated with autoimmune neurological syndromes, and our workup did not reveal such a tumor during our initial round of testing, including whole-body PET scans. Lung cancers and thymomas are most commonly associated with autoimmune encephalitis and striational antibody-related paraneoplastic neurological syndromes 6 , and we were able to exclude these relatively common conditions with our imaging tests, especially given her history of smoking.
In summary, our patient represents a rare case of autoimmune encephalitis presenting with subacute symptoms, heralded by positive striational autoantibodies, in association with epiglottic cancer. Together, these findings constitute a rare constellation of results and symptoms and make this case worthy of publication and scientific study.
Conclusions
Autoimmune encephalitis must be considered in any patient presenting with subacute cerebral dysfunction and new-onset seizures. Extensive cerebral and whole-body imaging and additional testing are essential to confirm the diagnosis and to exclude other conditions. We recommend empirical steroid or immune therapy as early as possible, given the potential for improvement with minimal risk. Early consultation with an expert neurologist, epileptologist, oncologist, or rheumatologist is also advised, especially if the patient is unstable or rapidly declining. Serial imaging and surveillance are often required before the diagnosis is confirmed and may take months to years. Empirical immune therapy must be guided by the clinical and serological picture and should probably be undertaken only under expert consultation.
"year": 2020,
"sha1": "77f8eac8afec05bf97b670ee876b47bf87957daf",
"oa_license": null,
"oa_url": "https://medicalresearchjournal.org/index.php/GJMR/article/download/2186/2075",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "77f8eac8afec05bf97b670ee876b47bf87957daf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115293366 | pes2o/s2orc | v3-fos-license | Experimental Study on High Strength Steel-Fiber Concrete
Concrete, as one of the most widely used materials, has been extensively studied for its behavior, strength, and other properties. Concrete behavior can be observed from the cracks commonly found in it, and the micro crack is one such type. The appearance of micro cracks at the mortar-aggregate interface is caused by the inherent weakness of plain concrete. This weakness can be reduced by randomly spreading micro reinforcement into the mixture. This study aimed to investigate the effects of steel fiber on the strength of concrete with a planned characteristic compressive strength of 60 MPa. The mix design in this study followed ACI 211.4R-08, and four variations of the concrete mixture were used, determined by the steel fiber content: 0%, 1%, 2%, and 3% of concrete volume. Compressive, split tensile, shear, and flexural tests were conducted to characterize the mechanical properties. The resulting data were analyzed and compared with a control specimen, concrete with 0% steel fiber. The planned characteristic compressive strength was not achieved; the largest compressive strength obtained in this test was 56.38 MPa, from the concrete with a 2% steel fiber mixture. However, the results clearly show increases in compressive strength, split tensile strength, shear strength, and flexural strength as the steel fiber content increases, while a ductility effect begins to occur at 2% steel fiber.

Keywords— high strength concrete; steel fiber; compressive strength; split tensile strength; shear strength; flexural strength.
I. INTRODUCTION
Among the commonly known behaviors and mechanical properties of concrete, two weaknesses of the material stand out: it is brittle, and it has little ability to resist tensile force. Therefore, the use of concrete as a structural material can never be separated from the use of steel reinforcement. The addition of steel reinforcement compensates for the lack of tensile capacity of the concrete, and the combined material is commonly known as reinforced concrete.
In reinforced concrete design, the tensile stress that occurs is carried by the reinforcement, while the concrete is assumed to carry the compressive stress. The same principle is used to limit the cracks that occur when the tensile strength of the concrete is exceeded. Supposedly, the addition of steel reinforcement should reduce the appearance of cracks, since the plasticity of the steel compensates for the brittle nature of the concrete. However, even after the addition of steel reinforcement, microcracks at the mortar-aggregate interface still mostly appear. Microcracks are microscopic cracks in the material. In reinforced concrete, microcracks often appear near the tensile reinforcement, and over time these fine cracks open an entrance for air and water. When air and water enter and become trapped in the same space as a metal, in this case steel, a chemical process occurs: corrosion. This should be avoided because corrosion of the reinforcing steel reduces its performance as a tension-resisting component. The question, then, is how to cope with or even prevent the emergence of microcracks in concrete. The principle of adding steel to the concrete material is expected to remain applicable at a smaller scale: steel in the form of small pieces, known as steel fibers, is used to test this idea. Steel fibers are spread into the concrete mixture and are expected to act as micro reinforcement scattered randomly.
On the other hand, the rapid development of construction and building technology, for example in high-rise buildings, long spans, and bridges, brings demands for structural materials with high specifications. Research on concrete, as one of the leading construction materials, is therefore indispensable, and research into high strength and high-performance concrete is the most suitable response.
Therefore, a study on the combination of high strength concrete and steel fiber seems promising. Over the last four decades, numerous research articles, international conferences, and research committees have demonstrated that interest in the potential use of steel fibers as a replacement for stirrups in RC structures is increasing in the concrete industry. Until now, however, the use of steel fibers has been limited in the construction industry due to a lack of design guidelines in the codes [1].

II. MATERIAL AND METHOD

Steel-fiber concrete is defined as concrete made from a mixture of cement, fine and coarse aggregates, and small steel fibers [3]. Generally, the steel fiber used is no more than 76 mm in length, with a diameter of no more than 1 mm. The purpose of adding fiber is to provide fiber reinforcement in concrete, uniformly distributed, to limit cracks due to heat of hydration or due to loading [3].

3) Steel Fiber: Dramix is an innovative steel fiber applied in industrial floor construction, warehouse floors, road pavement, basement parking, and wide-span beams. Dramix is produced through a cold-drawing process with hooked ends that provide optimal anchorage. The advantages of using Dramix are increased load-bearing capacity due to redistribution of stresses, reinforcement of all sections providing excellent crack control, resistance to shock and dynamic loads, and improved fatigue resistance.
B. Methods
A literature study was conducted as a reference to obtain a complete picture of the research. It covers the concept of high strength concrete with steel fiber, as well as material selection, mix design, and the testing methods to be used.
An experimental approach was used to determine the mechanical properties of steel fiber concrete. The tests were conducted at the Structure Engineering Laboratory of the Catholic University of Parahyangan. The materials used were cement, aggregates, water, steel fibers, and superplasticizer [4]. The mix design and the trial mix were calculated according to ACI 211.4R-08 [5]. As a preliminary step, the aggregates to be used were tested following the standards of ASTM C127 [6] and ASTM C128 [7]. Varying the amount and type of fibers in the concrete mix also varies its mechanical behavior; however, shortcomings such as brittleness have been observed, and the addition of steel fibers into the mix can improve the brittle behavior of high-performance concrete materials [8]. Hence, in this study, experiments were conducted for four variations of steel fiber content in the concrete mixture: 0%, 1%, 2%, and 3% by volume, with the 0% steel fiber mixture serving as the control specimen.
C. Material Preparation
Before the tests, the materials had to be prepared. The materials used in this research were PCC cement, fine aggregate smaller than 0.475 cm, coarse aggregate of 0.9525 cm to 1.27 cm, water, superplasticizer, and steel fiber. The steel fiber used in the experiment has the characteristics given in Table I. The fine and coarse aggregate properties strongly influence concrete as a composite material, so they were examined to verify the quality required for the planned characteristic compressive strength. The aggregate examination covered water content, gradation and fineness modulus, specific gravity, absorption, and unit weight. A recapitulation of the material inspection is given in Table II.
D. Test Specimen
There are three sizes of test specimens for each variation of steel fiber content (0%, 1%, 2%, and 3%):
• Cylinders with 10 cm diameter and 20 cm height for each variation of steel fiber content, amounting to 36 pieces, used for concrete age-factor testing, the compressive strength test, and the split tensile strength test.
• Beams of 10 cm x 10 cm x 30 cm for each variation of steel fiber content, amounting to 12 pieces, used for the shear strength test.
• Beams of 15 cm x 15 cm x 60 cm for each variation of steel fiber content, amounting to 12 pieces, used for the flexural strength test.
E. Mixing Procedure
The mix design of concrete, which determines the proportions of cement, water, and aggregates, is essential, as it is intended to achieve a specified compressive strength. The mix design and the trial mix were calculated according to ACI 211.4R-08 [5]. After the mix design was obtained, a trial mix was necessary to verify the planned slump value, the cohesiveness of the mixture, and the planned compressive strength before mixing on a large scale.
Once the trial mix results agreed with the estimated compressive strength, the materials and concrete mixing could be prepared. The concrete mixing procedure in this study was the same as for regular concrete, with two additional steps: the addition of superplasticizer to the concrete mixture (cement, fine aggregate, coarse aggregate, and water) and the addition of steel fiber. After the mixture was evenly distributed and the desired workability was achieved, the fresh concrete was poured into the specimen formwork.
High strength concrete and steel-fiber concrete have low workability. Hence, superplasticizer was added to the mixture to improve the workability of the concrete, in accordance with ASTM C494 [4]. Fig. 2 shows the slump of the fresh concrete in this experiment with and without superplasticizer.
Due to the low workability of high strength steel-fiber concrete, careful material mixing is very important to ensure the homogeneity of the mixture.
F. Testing Method
Four main tests were performed on the specimens to determine the mechanical properties of the steel fiber concrete [11]. The UTM records the load and displacement data.
A. Concrete Mixture Proportion and Trial Mix
The planned characteristic compressive strength of this concrete is 60 MPa. The mix design and the trial mix were calculated according to ACI 211.4R-08 [5], based on the material tests previously described in Section II. Table IV shows the proportions of the concrete mix designs for each fiber content variation. Following the proportions of the mix design, a trial mix was conducted before mixing on a real scale, using the same types of materials. The trial mix results are given in Table III. Normally, at the age of 3 days, concrete has reached 40% of its planned characteristic compressive strength, and the trial specimens exceeded 24 MPa, which is 40% of 60 MPa. Specimen number 2, at the age of 7 days, did not reach 65% of the planned characteristic compressive strength; specimen number 3 at the same age, however, exceeded 39 MPa, which is 65% of 60 MPa. Based on the trial mix results, the proportions determined by the mix design were deemed appropriate to reach the characteristic compressive strength of 60 MPa.
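As a quick check of those milestone figures, the following minimal Python sketch computes the early-age strength thresholds from the 60 MPa target. The age-to-fraction mapping encodes the rule of thumb quoted above; it is not a value taken from ACI 211.4R-08.

```python
# Expected fraction of the 28-day characteristic strength at early ages
# (rule of thumb quoted in the text, not an ACI 211.4R-08 value).
AGE_FRACTION = {3: 0.40, 7: 0.65, 28: 1.00}

def strength_threshold(target_mpa: float, age_days: int) -> float:
    """Minimum compressive strength expected at a given age, in MPa."""
    return AGE_FRACTION[age_days] * target_mpa

target = 60.0  # planned characteristic compressive strength, MPa
for age in (3, 7):
    print(f"{age}-day threshold: {strength_threshold(target, age):.0f} MPa")
# 3-day threshold: 24 MPa
# 7-day threshold: 39 MPa
```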
The density of the concrete specimens was measured in every test and is shown in Table V. The measured densities range from 2.355 to 2.494, the same range as for standard concrete in general. This means the density of high strength concrete is not noticeably affected by the addition of steel fiber, because the added percentage is relatively small compared to the specimen volume.
B. Compressive Strength
The compressive strength of the cylindrical specimens was tested after 28 days in a Compressive Testing Machine (CTM). Fig. 3 shows the average compressive strength of the three specimens of each variation.
The compressive strength of plain concrete is lower than that of steel fiber concrete, and the average compressive strength increased with fiber volume content. Concrete with 1% steel fiber showed a slight improvement in peak compressive strength of about 2.5% compared to the plain concrete; concrete with 2% steel fiber showed a larger improvement of about 25.7%; and concrete with 3% steel fiber showed an improvement of about 11.4%. The best result was thus obtained with 2% steel fiber. The drop at 3% may be due to uneven mixing, because the more steel fiber is added to the mixture, the poorer the workability becomes; the addition of more steel fiber can also decrease compressive strength through increased porosity. Further research is needed to confirm that this explanation applies to steel fiber concrete.
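The percentage improvements above are simple ratios against the control. A minimal Python sketch of that computation follows; the 2% figure uses the reported 56.38 MPa peak, while the implied control strength of roughly 44.9 MPa is back-calculated from the reported 25.7% increase and is therefore an assumption, not a value quoted in the paper.

```python
def percent_increase(value: float, control: float) -> float:
    """Percentage improvement of a specimen result over the control."""
    return (value - control) / control * 100.0

# Control (0% fiber) strength back-calculated from the reported figures:
# 56.38 MPa at 2% fiber corresponds to a +25.7% increase (assumption).
control_mpa = 56.38 / 1.257   # approximately 44.85 MPa

print(f"{percent_increase(56.38, control_mpa):.1f}%")  # ~25.7% for 2% fiber
```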
On the other hand, the planned characteristic compressive strength (60 MPa) was not achieved, most likely because of uneven mixing and an unsuitable cement type. Nevertheless, the final compressive strength of every specimen is still classified as High Strength Concrete.
C. Split-Tensile Strength
The split tensile strength of the cylindrical specimens was tested after 28 days in the compressive testing machine (CTM). Fig. 4 shows the average split tensile strength of the three specimens of each variation. The tensile strength of plain concrete is lower than that of steel fiber concrete, and the average tensile strength increased with fiber volume content. Concrete with 1% steel fiber showed an improvement in tensile strength of about 39.4% compared to the plain concrete, while concrete with 2% steel fiber showed a larger improvement of about 96.3%. Similar to the compressive test, concrete with 3% steel fiber differed only slightly from 2%, with an improvement of about 95.98% over the plain concrete. (A comparable effect has been reported for other fiber types; the addition of polypropylene fiber, for example, enhances the tensile strength of lightweight foamed concrete.) Generally, in a split tensile test of normal concrete, the cylinder splits entirely once the peak load is reached. That was not the case with the specimens of this study: the fiber-reinforced cylinders did not separate completely after the test, and the pieces of the split specimen remained tightly attached to one another, the steel fibers holding them together through the interlocking of their hooks.
D. Shear Strength
The shear strength of the beam specimens was tested after 28 days in a Universal Testing Machine (UTM), using beam specimens of 100 mm x 100 mm x 300 mm. The setup of the shear test is given in Fig. 6, and Fig. 7 shows the average shear strength of the three specimens of each variation. Overall, the test results show that average tensile and shear strength increase proportionally with the addition of steel fiber. The shear test results differ slightly from the compressive and split-tensile results: concrete with 1%, 2%, and 3% steel fiber showed increases in peak shear strength of 78.4%, 104.6%, and 124.1%, respectively, compared to the plain concrete specimens. The shear strength of the 2% and 3% mixtures is roughly twice that of the plain concrete, indicating that the addition of steel fiber greatly helps concrete resist shear.
E. Flexural Strength
The flexural/bending strength of the beam specimens was tested after 28 days in a Universal Testing Machine (UTM). A third-point loading system was applied, with each load placed at one-third of the supported span of the concrete beam. This arrangement was chosen to ensure that failure of the specimen occurs in the middle third of the beam, where there is no shear force due to the external load. The test was carried out at a loading speed of 0.6 mm/minute. Fig. 8 shows the specimen setup for the flexural/bending test. The crack pattern of the post-test specimens shows that the specimens failed in flexure, in accordance with the test setup; before any further analysis, it should be noted that the failure pattern across all tests was bending/flexural failure.
Based on the load and deflection data recorded by the UTM, the essential points of the test, namely the yield and ultimate points, can be obtained. Once these values are known, the ductility of the concrete can be calculated. Figs. 9 to 12 show the flexural test data as load versus deflection for the specimens with 0%, 1%, 2%, and 3% fiber content, respectively.
The deflection and load data for the specimen with 0% steel fiber content show typical, predictable behavior, similar to the results of Kaïkea et al. [8]: the plain concrete specimen failed immediately upon reaching a maximum load of 27.57 kN. One of the disadvantages of concrete is its brittleness, and in this test that characteristic was clearly visible. The specimen with 1% fiber, on the other hand, did not collapse immediately after reaching its maximum load of 29.92 kN but sustained gradually decreasing loads. It could not, however, carry any additional load, so the yield and ultimate points coincide for the concrete with 1% steel fiber. The steel fiber held the specimen together after cracking began and, as expected, acted as micro reinforcement.
The specimens with 2% and 3% fiber behaved similarly to the 1% specimen: after reaching the maximum load, they did not collapse immediately but sustained gradually decreasing loads. The maximum load reached 41.64 kN for the 2% fiber specimen and 57.46 kN for the 3% fiber specimen. The average flexural strength increases proportionally with the addition of steel fiber.
Similar to the result of the split tensile test, the specimens with steel fiber did not separate completely after the test. Whereas the plain concrete (0% steel fiber) showed brittle behavior, the steel fiber concrete specimens showed more ductile behavior.
Better ductility, however, was shown by the 2% and 3% steel fiber specimens: the 1% specimen could not gain more load after the first cracks, while the 3% specimen reached a ductility value of 4.16. This indicates that the addition of steel fiber mitigates the brittleness of concrete, one of its most significant weaknesses. The steel fiber concrete did not fail immediately because the fibers acted as a bridging agent when microcracks occurred. The displacement at yield load, displacement at ultimate load, peak load, ductility, and modulus of rupture of every flexural/bending specimen are given in Table IV.
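As a worked illustration of how those tabulated quantities follow from the raw load-deflection record, here is a minimal Python sketch. The displacement-ductility ratio (ultimate deflection over yield deflection) and the third-point-loading modulus of rupture formula R = PL/(bd²) are standard definitions; the numerical inputs below are our own assumptions for illustration, not values taken from the tables.

```python
def ductility(delta_ultimate: float, delta_yield: float) -> float:
    """Displacement ductility: ratio of ultimate to yield deflection."""
    return delta_ultimate / delta_yield

def modulus_of_rupture(p_newton: float, span_mm: float,
                       width_mm: float, depth_mm: float) -> float:
    """Third-point-loading flexural strength, R = P*L / (b*d^2), in MPa."""
    return p_newton * span_mm / (width_mm * depth_mm ** 2)

# Illustrative inputs only (assumed, not taken from the tables):
# 15 x 15 cm beam section, assumed 45 cm span, peak load 57.46 kN (3% fiber).
print(f"ductility = {ductility(delta_ultimate=8.3, delta_yield=2.0):.2f}")
print(f"MoR = {modulus_of_rupture(57_460, 450, 150, 150):.2f} MPa")
```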
IV. CONCLUSIONS
The conclusions drawn from the experimental tests on the effect of steel fiber on the strength of high-quality concrete are as follows. The average compressive strength and split-tensile strength increase with the addition of steel fiber, with the optimum average compressive strength at 2% steel fiber content.
The planned characteristic compressive strength was not achieved; nonetheless, the final compressive strength is still classified as High Strength Concrete. The average shear strength and flexural strength increase proportionally with the addition of steel fiber, and concrete ductility appears with the addition of ≥ 2% steel fiber.
"year": 2018,
"sha1": "1e2858244c9169e636f244b3c222f43fe0258e69",
"oa_license": "CCBYSA",
"oa_url": "http://www.insightsociety.org/ojaseit/index.php/ijaseit/article/download/3962/1491",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "72d08a77b5ab604f3b600a51348248d4af11504c",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
244039917 | pes2o/s2orc | v3-fos-license | Ethical opportunities in deep‐sea collection of polymetallic nodules from the Clarion‐Clipperton Zone
Abstract Infrastructure supporting the transition of human societies from fossil fuels to renewable energy will require hundreds of millions of tons of metals. Polymetallic nodules on the abyssal seabed of the Clarion‐Clipperton Zone (CCZ), eastern North Pacific Ocean, could provide them. We focus on ethical considerations and opportunities available to the novel CCZ nodule‐collection industry, integrating robust science with strong pillars of social and environmental responsibility. Ethical considerations include harm to sea life and recovery time, but also the value of human life, indigenous rights, rights of nature, animal rights, intrinsic values, and intangible ecosystem services. A "planetary perspective" considers the biosphere, hydrosphere, and atmosphere, extends beyond mineral extraction to a life‐cycle view of impacts, and includes local, national, and global impacts and stakeholders. Stakeholders include direct nodule‐collection actors, ocean conservationists, companies, communities, interest groups, nations, and citizens globally, plus counterfactual stakeholders involved with or affected by intensification of terrestrial mining if ocean metals are not used. Nodule collection would harm species and portions of ecosystems, but could have lower life‐cycle impacts than terrestrial mining expansion, especially if nodule‐metal producers explicitly design for it and stakeholders hold them accountable. Participants across the value chain can elevate the role of ethics in strategic objective setting, engineering design optimization, commitments to stakeholders, democratization of governance, and fostering of circular economies. The International Seabed Authority is called to establish equitable and transparent distribution of royalties and gains, and continue engaging scientists, economists, and experts from all spheres in optimizing deep‐sea mineral extraction for humans and nature. Nodule collection presents a unique opportunity for an ambitious reset of ecological norms in a nascent industry. Embracing ethical opportunities can set an example for industrial‐scale activities on land and sea, accelerate environmental gains through environmental competition with land ores, and hasten civilization's progress toward a sustainable future.
INTRODUCTION
The deep sea is a potential source of critical metals soon to be needed in massive quantities to build renewably powered economies as one tactic to fight climate change (World Bank, 2020a). In this paper, we delve into the ethical issues and opportunities of using deep-sea metals-specifically, polymetallic nodules in the Clarion-Clipperton Zone (CCZ) of the eastern North Pacific Ocean.
The 2011 commodity boom brought a renewed interest in metals from the deep sea so that, by 2017, the International Seabed Authority (ISA; a 168-member international body responsible for both the regulation and protection of the seabed) had circulated working-draft documents toward a mining code to govern commercial access to CCZ metals (Ardron et al., 2018). The ISA Council's plans to finalize and adopt regulations by 2020 were delayed by COVID-19 (Shukman, 2021), and recent press has highlighted drawbacks and uncertainties related to accessing these metals. The question of whether to disturb the abyssal seabed for these metals requires inputs from physics, biology, economics, law, and other disciplines-among those, ethics. Although accessing the metals would necessarily entail harming sea life, it could also help accelerate the green transition and marginally offset some of terrestrial mining's damaging effects . Challenging ethical discussions are therefore provoked-including tradeoffs between the rights of different groups or geographies of species, utilitarian concerns for a society facing multiple global crises with conflicting solutions, diverse needs of a wide stakeholder set, and management of competing concerns in the face of uncertainty. If or when the regulations pass, supply-chain participants will furthermore face numerous decisions with potential ethical consequences-or ethical opportunities-for those wishing to take advantage.
Ethical and moral frameworks
There is no single definition of ethics, and it is often used interchangeably with morals (e.g., Gardiner, 2011). We use morals as the sense of right and wrong that individuals develop in the context of values and behaviors shared within their culture(s) (Jia & Krettenauer, 2017;Saucier, 2018) and ethics to indicate how such moral values are expressed in decisions and actions. Morals vary among cultures, religions, sects, classes, and so forth, so no single ethical framework is available to guide all people in all situations. Moreover, individuals typically apply ethical principles using several approaches: the Utilitarian (Consequentialist) Approach favors actions resulting in the most good for anyone directly or indirectly affected; the Rights Approach seeks to protect the moral rights of others; the Fairness (Justice) Approach emphasizes equal treatment for equals; the Duty Approach aligns actions with a perceived sense of duty to a god, leader, employer, or social group; the Common Good Approach aims to enhance connected life in community, including respect, compassion, and relationships; and the Virtue Approach mirrors actions of a fully virtuous person developing their own humanity and character to the highest level (Bonde & Firenze, 2013;Markkula Center, 2015). These diverse ethical mindsets and approaches play important roles in how different actors view right and wrong in a given situation, and they complicate the development of a shared ethic.
Morals and ethics also evolve with changing knowledge and circumstances. Perceptions of the relationship between humans and nature are shifting from "dominion" to "stewardship"; from "outside of nature" to "part of nature"; and from "local" to "global" as human activities expand to globally impactful scales. Utilitarian ethical approaches to produce "the greatest good for the greatest number" or "maximize the amount of happiness" (Shermer, 2018) were traditionally anthropocentric (and culture-centric), but now nonhuman actors play increasingly larger roles. Environmental ethics (Palmer et al., 2014), rights of nature and nonhuman entities (e.g., rivers, forests; Borràs, 2016; Stilt, 2021; Stone, 2010; see also Kurki, 2021), species rights (e.g., Soulé, 1985, "Species have value in themselves, a value neither conferred nor revocable but springing from a species' long evolutionary heritage and potential"), and "personhood" for nonhuman species (e.g., Staker, 2017) have entered social consciousness. The precept, "Above All, Do No Harm," Hippocrates' ~480 BCE guidance for the physician-patient relationship, now extends to human interactions with the living and nonliving environment (e.g., Niner, 2018; Van Dover et al., 2017).
A planetary perspective for an ethical dilemma
In a world confronting ominous global change (e.g., IPCC, 2021) and sensitive to historical failures to anticipate harmful consequences of industrial innovations (pesticides: Carson, 1962; plastics: Stubbins et al., 2021; ozone-depleting halocarbons: Miller, Zaelke, et al., 2021), the public's trust in science, politicians, and government has decreased (Bergeron, 2021; Pew Research Center, 2021). These factors, along with insufficient scientific knowledge of the deep sea, have fostered resistance to deep-sea mining as a whole, with some calling for a temporary or permanent moratorium (Fauna and Flora International [FFI], 2020; Greenpeace International, 2019; World Wildlife Fund [WWF] [Reuters, 2021]; Deep-Sea Mining Science Statement, 2021; International Union for the Conservation of Nature [IUCN], 2021).
Pausing to learn more about the consequences of nodule collection may initially seem ideal. We know that using nodule metals would harm sea life, kill nodule organisms, disrupt food web integrity, and reduce biodiversity (Stratmann et al., 2021), disrupt sediment structure (Gausepohl et al., 2020), disrupt microbial communities (Vonnahme et al., 2020) and benthic fauna (Simon-Lledó et al., 2019), create sediment plumes that would affect water-column fauna (Christiansen et al., 2020; Drazen et al., 2019, 2020; Robison, 2009), and potentially affect ecosystem services (Armstrong et al., 2012; Le et al., 2017; Thurber et al., 2014); and much is still unknown about the extent of likely impacts or the best ways to mitigate them. However, realistic constraints add complication. Contracting firms and their investors support a large portion of deep-sea research (Shabahat, 2021). A long and uncertain pause before industrial commencement poses the risk that the needed research might not be done, or that the industry might never commence (Minerals in Depth, 2021). The increased regulatory uncertainty may simultaneously affect the investment environment, as it is unclear precisely how much knowledge or time would satisfactorily create the social license to commence. Equally important are the ecological, social, and economic consequences if nodule collection does not occur (the "counterfactual" option). A sharp increase in the demand for metal to build a global green energy infrastructure is expected (International Energy Agency [IEA], 2021; Habib et al., 2020; World Bank, 2017, 2020a). Terrestrial mining project pipelines respond to the expected demand for metal with technological innovations, investment, and/or price levels adjusting so that enough projects enter the pipeline; absent deep-sea metals, terrestrial mining pipelines would likely expand to meet demand (World Resources Institute, 2003). This would intensify known impacts from land mining: pollution of air, water, and soil (Agboola et al., 2020; Sergeant & Olden, 2020); degradation of habitats including rainforests and harm to biodiversity (Sonter et al., 2018); increased human morbidity and mortality (Cornwall, 2020; Lyu et al., 2019; Mucha et al., 2018; Nkulu et al., 2018); disruption of indigenous cultures and societies (Bainton, 2020; Tolvanen et al., 2018) and harm to their traditionally used sacred sites, habitats, and biota (Aborigen Forum, 2020; BBC, 2013; Cultural Survival, 2018; FIDH-KontraS, 2014).
Thus, ethical discussions of nodule collection necessarily include comprehensive analysis of the consequences of not commencing, with comparison of two potential futures. Instead, reports opposing deep-sea mining (e.g., Chin & Hari, 2020;FFI, 2020;Greenpeace International, 2019) and calls for a moratorium (e.g., Deep-Sea Mining Science Statement, 2021, signed by 571 marine science and policy experts from more than 44 countries, and IUCN, 2021) have not included such counterfactual analysis. Meanwhile, the ISA and actors in the nodule-metal supply chain, including contractors, design engineers, and others, focus mainly on the ISA's (2019) scientific requirements for assessing environmental impacts and the economic dimensions of mining. There is occasional mention of broader principles, but many further opportunities exist to expand the ethical dimensions of their activities.
Therefore, we suggest that a broad discussion of ethical considerations could help establish a path forward that meets the physical and intangible needs of humans, while equally representing the interests of nature and its systems of support. Taken from a comprehensive and global systems standpoint, we refer to such a framework as a "planetary perspective" (Figure 1). This spans ecosystems, species, stakeholders, and geographies, and provokes consideration of metal sources' roles in major global concerns-climate change, the biodiversity crisis, water supply, equity, and so forth. The planetary perspective underpins the discussion in this paper.
We are not the first to consider the role of ethics in preserving and restoring the ocean's health. Auster et al. (2008) recognized two main perspectives for marine conservation: Utilitarian, concerned with valuing and maintaining the ocean's ability to produce ecological goods for extraction and other ecosystem services, and Ethical, concerned with conserving organisms and ecosystems for their own sake simply because they exist. As framed, both are ethical, the former emphasizing sustainable production, the latter rights of nature. Thatje (2021), who also called for a moratorium on deep-sea mining, advocated ethical reasoning to mitigate differences between the scientific knowledge-based approach, resulting ecosystem management challenges, and economic demand, and to inform the decision-making process on deep-sea mining and its regulations, but did not provide details. As Hallgren and Hansson (2021) stated, if debate is only about whether to permit deep-sea mining to proceed or not, the opportunity to set the best possible mining practices from the start and the wider scientific and societal debates concerning moral implications, equity, and risk trade-offs are at risk of being overlooked.
This paper adds to the ethical discussion by (1) providing planetary context relevant to ethical questions about deep-sea mining, (2) summarizing potential inherent ethical advantages of CCZ nodules, (3) outlining key ethical objections to their use, and (4) highlighting practical opportunities for ethical action in operating the industry.
BUILDING AN ETHICAL CONTEXT FOR PRODUCING METALS FROM NODULES
Decisions governing the polymetallic nodule resource involve a diverse constellation of stakeholders with varying interests. First and foremost, the ISA, established under the 1982 United Nations Convention on the Law of the Sea (UNCLOS) and the 1994 UN Implementation Agreement (1994 Agreement) and comprising 167 states plus the European Union, oversees ocean resources in areas beyond national jurisdiction (ABNJ, also known as the Area), or 54% of the world's ocean. It is assisted by a 30-member Legal and Technical Committee that approves work plans and proposes technical and environmental regulations to a Council; and by the Council, consisting of 36 member states including major consumers, investors, and exporters with an aim of equitable representation of developmental status and geography. The ISA's most immediate task is to establish environmental and economic policies governing exploitation of CCZ polymetallic nodules; it has already granted 18 exploration contracts and set protocols for required environmental impact assessments. Other stakeholders in the regulation of polymetallic nodule exploitation include mining contractors, metal supply-chain participants, scientists, and nongovernmental advocates for marine or terrestrial conservation. The full set of stakeholders with a vested interest in CCZ nodules (and the decisions of whether and how to collect and process them) is quite extensive ( Figure 2). It must include counterfactual stakeholders who participate, benefit from, or are affected by terrestrial mining today; local and global communities affected by externalities of nodule collection or terrestrial mining, such as local tailings dam collapses, and global climate change; and consumers of metal products. Thus, most world citizens will be affected to varying degrees by whether CCZ nodules are exploited.
Integrating the various moral and ethical perspectives of these stakeholders into a single shared ethic is not simple. But although individual stakeholders may prioritize localized concerns relevant to a state or special interest group, they also generally agree on a shared set of priorities encapsulated by the UN Sustainable Development Goals (SDG; UN, 2015)-and among them, that urgent priorities of climate change, biodiversity loss, rainforest preservation, and water scarcity be strongly considered. Indeed, there is global agreement that humanity must find a sustainable path forward for human development that conserves living and nonliving resources, preserves ecosystem services, strengthens human well-being, and protects intergenerational equity (Convention on Biological Diversity [CBD], 2020;UN, 2015).
Historically, miners paid little attention to harm done or remediating damage; for example, approximately 500 000 abandoned hard rock mines exist in the United States with cleanup costs estimated as high as $54 billion (US House of Representatives, 2016). As environmental and social impacts become internalized through economic (EU, 2021;OECD, 1992), reputational (Farha et al., 2017), or other costs, including for mining (e.g., Carvalho, 2017; University of Victoria, 2019), business models and profit equations are evolving. Although financial viability is still foundational for any enterprise, the dominion model of doing business is somewhat shifting toward sustainability and stewardship (Bennett et al., 2018;Heuer, 2010).
Mining of primary metals is not sustainable per se, because the extracted ores are not available for future generations and can only be replaced in geological time. However, the produced metals are durable goods, which can be available for future recycling (i.e., as secondary metals). As societies approach stable, low-growth populations and economic systems, circular metal economies can emerge, reducing or eliminating the need for primary mining and its associated impacts (EU, 2020). Until then, new primary metals are required, and obtaining them in the least harmful way is ethically desirable and practically urgent.
Are deep-sea metals needed?
Some opponents of deep-sea mining argue this point, suggesting there is no need for the new metal source (e.g., FFI 2020, Greenpeace International, 2019;Harris, 2019;LaBossiere, 2018;Miller, Brigden, et al., 2021). Three common suggestions to eliminate the need are reduction in overall metal consumption, development of new battery technologies, and/or increased recycling. All are aspirational objectives, but they are currently insufficient to meet projected mid-term metal and environmental needs.
The first contention is that perhaps metal consumption can be reduced. However, a substantial rise in metal demand in the next 20 years appears highly likely, driven by global trends and government targets (e.g., Campagnol et al., 2017). In a projected scenario for constraining temperature rise by 2050 to 2°C, base metal demands for electric vehicle (EV) batteries could increase 11-fold beyond today's levels (Watari et al., 2020;World Bank, 2017); or even more, as Xu et al. (2020) estimated a 17-19 fold increase for cobalt and a 28-31 fold increase for nickel from 2020 to 2050 if lithium nickel-manganese-cobalt (NMC) oxide batteries continue to dominate. Still, Teske et al. (2016) asserted that land-based metal resources are more than adequate to support a transition toward a 100% renewable energy supply, even assuming aggressive growth rates and ambitious energy standards. However, adequate in-the-ground resources do not equate to an economically viable project pipeline. Multiple factors such as ore grades; the costs of obtaining, transporting, and refining ores; and available extraction and processing technologies determine the extent to which resources in the ground can be economically mined. Based on project pipeline and investment levels compared with expected demand, shortages of cobalt and/or class 1 nickel are expected as soon as 2025 (Azevado et al., 2020;Campagnol et al., 2018;Desai, 2019). Indeed, the Scientific and Technical Committee (STAP) of the Global Environmental Facility has recently added Oceanic Minerals to its agenda for technical critical elements (Ali & Katima, 2020).
The second contention is that battery manufacturers might innovate away from heavy reliance on these critical metals. Indeed, although much of the global EV industry relies on NMC-type batteries, innovations are underway. Whether and how quickly advances in batteries might obviate the need for seabed metals has substantial material significance, but also ethical consequences related to the types and amounts of environmental or social harm that result. Efforts are underway to reduce or eliminate cobalt, largely to reduce corporate exposure to unregulated mining in the Democratic Republic of Congo (DRC) that employs children (Lichner, 2020). For instance, batteries for Ford Motor Company's 2022 all-electric F-150 pickup truck will reduce cobalt and manganese proportions to 5% each, although increasing nickel content to 90% ("NMC nine-half-half"; Halvorson, 2020). In addition, cobalt- and nickel-free battery formulations are being investigated (Grey & Hall, 2020), with lithium-iron-phosphate (LFP) batteries a viable solution for low-cost, low-range EVs (Ali & Katima, 2020), although with lower energy density and poorer low-temperature performance (Randall, 2020; Rudisuela, 2020). Lithium-iron-phosphate batteries make up 14.3% of the EV battery market as of 2021 (Els, 2021), with Tesla equipping them in some Model 3 EVs for Chinese markets (Els, 2021), and China's BYD, the second-largest EV producer, using LFP for its entire fleet (Mining.com, 2021). Lithium-iron-phosphate batteries may complement NMC for differing product requirements; meanwhile, manufacturing lines to build LFP and nickel-based batteries at high volume are under development. Once they are in place, if a more favorable battery chemistry is invented and brought to market, large-scale replacement of NMC batteries would likely require a decadal time scale. Twenty-five years has been a typical investment cycle for major refurbishments (IEA, 2020), and generational improvements in heavy industry have historically taken several decades (Pae & Lehmann, 2003). Supply-chain development, production, and fleet-wide deployment of a new battery technology (or fuel, such as hydrogen) that eliminates the need for nodule-derived metals might proceed somewhat faster, but it would likely still require a decade or more (Grey & Hall, 2020).
If LFP or other chemistries cannot displace nickel-based batteries, then the third contention remains: Perhaps recycling could eliminate the need for seabed metals. However, recycling cannot meet current battery-metal demand, let alone mid-term future battery-metal demand in a growing market, because there is insufficient metal in circulation or in landfills to be reclaimed (Ali et al., 2017;Ali & Katima, 2020;Herrington, 2021;World Bank, 2020a). As one example, in 2020 recycled material supplied only 37% of the nickel for its major current use, stainless steel production (International Nickel Study Group [INSG], 2021). Moreover, many of today's EV batteries are constructed in ways that hinder recycling (Morse, 2021), and repurposing used EV batteries for stationary energy storage, although environmentally beneficial, could also slow the rate of recycling (Xu et al., 2020). Because metal economies are global, with numerous consumers and producers and intricate supply chains (e.g., Melin et al., 2021), creating a circular economy is very complex, and doing it quickly would require cooperation at unprecedented speed and scale (Ghisellini et al., 2015).
Because demand reduction, technology innovation, and/ or circular economies cannot quickly meet metal demands for the renewable energy transition, primary metals will be extracted in the interim. These marginal metal quantities can come from terrestrial mines and/or the deep sea. In their demand-reduction argument, Teske et al. (2016) did not evaluate the incremental environmental, social, and economic effects of increased terrestrial mining should deepsea metals not be used. Nickel mining relies increasingly on laterites underlying biodiverse tropical rainforests. Hein et al. (2020) stated that use of deep-ocean metals might help to reduce the pressure for deforestation in ecologically sensitive ecosystems; this is particularly important when forests are needed to fulfill recent climate treaties and UN SDGs. Environmental, social, and economic effects of terrestrial mining are further exacerbated by declining ore grades of copper and nickel, with greater ore required for the same metal output, so a greater life-cycle footprint to produce the same metal (see e.g., dynamic life-cycle analyses by van der Voet et al., 2018). Amid a shrinking time frame to mitigate multiple global crises, considering potential roles and impacts of both terrestrial and deep-sea metals is critical.
Ethical advantages of polymetallic nodule metals
We continue our discussion with an exploration of inherent ethical advantages that polymetallic nodules may provide. Subsequently, we discuss their ethical objections.
Direct harm to humans largely avoided. The CCZ's remote location, hundreds of miles from any human habitation, avoids direct harm to communities often encountered in terrestrial mining. Such avoided impacts include pollution of air or freshwater, drawdown of freshwater resources, impacts on human health, desecration or taking of ancestral lands, and reduction or elimination of large, often toxic, tailings ponds, as nodules' high ore grades produce less waste and contain lower levels of heavy elements (e.g., Paulikas et al., 2021;Sommerfeld et al., 2018).
Indirect harm to indigenous peoples can be reduced. Sections of UNCLOS may support heightened rights of adjacent coastal states and indigenous peoples if their coastal communities value and depend on highly migratory species culturally, socially, and economically, including for their food security, and when the life histories of such species span entire oceans and encounter threats and pressures beyond the control of any one entity (Dunn 2017). Impacts on food species such as tuna might be largely avoided by discharging riser water below 1000 m (van der Grient & Drazen, 2021), but near-surface impacts typical of ship operations could affect them and nonfood species, including noise (Erbe et al., 2019;Jones, 2019;Weilgart, 2018), lights (Miller et al., 2017), exhaust emissions, wastewater discharge, and the possibility of oil spills. Very slow speeds of the collector machine and production ship (~0.5 kt) strongly reduce the likelihood of striking marine mammals, turtles, or other large animals; although relatively slow speeds (<15 kt) are expected for ore transport ships, even lower speeds might be preferable depending on season, location, and known presence of such animals (e.g., Rockwood et al., 2017).
Indigenous peoples are guaranteed opportunities for input to ABNJ resource issues, both as humans (Hunter et al., 2018) and because their traditional knowledge can broaden the diversity of perspectives and solutions for ABNJ resource governance (Dunn et al., 2017; see also Tilot et al., 2021). Additionally, the UN Declaration on the Rights of Indigenous Peoples (UN, 2007) codifies the standard for states to obtain "free, prior and informed consent" from indigenous peoples before initiating or approving projects with potential harmful impacts on their traditional lands, territories, and resources, although notably, it does not provide a right for groups' direct consultation with the ISA, other than through the state.
Child labor avoided. Employment of children below the legal working ages of 15 years or 18 years (International Labour Organization [ILO], 1973) is a major problem globally, particularly in Africa and Asia. UNESCO reported that, in 2012, approximately 40 000 children aged 7-17 worked in cobalt mines in the Katanga region of DRC, making up approximately one-third of the total number of workers (Walther, 2012) and exposing them to numerous hazards and potential health consequences (Broom, 2019;ILO, 2019;O'Driscoll, 2017). Cobalt from deep-sea nodules will not solve the global problem of child labor, but it can avoid that abuse, because there is no place for children to work in the deep-sea mining supply chain.
Reduced opportunities for armed conflict. Resources and commodities with inelastic demand curves, including battery metals, have played important roles in shaping territorial war incentives (Acemoglu et al., 2012;Chin & Hari, 2020). National interests may differ, for example, on whether to permit exploitation or on the details of royalty sharing, and sediment plumes from mining in one country's contract area could drift into another's. However, any such disagreements would more likely be solved diplomatically, not militarily.
Public ownership. UNCLOS (Part XI, Sec. 2, Art. 136, 137) designates ABNJ resources and rights thereto as the "common heritage of mankind" (not particular states), giving all people rights to participate in resource and benefit decisions through existing channels or emerging ones (e.g., decisions governing how marine genetic resources [MGR] and other biodiversity beyond national jurisdiction [BBNJ] will be used; Collins et al., 2020; Young & Friedman, 2018).

Broadly shared economic benefits. On 9 September 2020, the ISA requested proposals for creating a "Seabed Sustainability Fund" (or Global Fund) as an instrument for channeling some or all financial benefits from seabed mineral exploitation into "programmes, projects and activities consistent with the status of seabed minerals as the common heritage of mankind" (ISA, 2020). UNCLOS (Part XI, Sec. 2, Art. 140) specifies that ABNJ activities be done for the benefit of mankind as a whole, "…taking into particular consideration the interests and needs of developing States and of peoples who have not attained full independence or other self-governing status recognized by the United Nations…." The necessary payment regime for distributing financial and other economic benefits is not yet complete. Feichtner (2019a, 2019b) and Van Nijen et al. (2019) summarized the history of the legal and philosophic framework for equitable distribution of financial and other economic benefits and progress on ISA's payment regime negotiations. Types and rates of royalties and/or profit shares to sponsoring countries and mining contractors, shares to ISA, equity of payments to countries, and reparations for losses to countries with existing mines, among others, remain contentious (e.g., African Group, 2019). Formulae under consideration generally performed similarly (Kirchain et al., 2019): payouts calculate each country's population as a percentage of the world's population, weighted to redistribute income from higher-income states to developing countries.
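To make the population-weighted idea concrete, here is a minimal Python sketch of one such payout rule. The income-group weights and the three-country example are our own illustrative assumptions; they do not reproduce any specific formula under ISA consideration.

```python
# Illustrative population-weighted distribution of a royalty fund.
# The income-group weights below are our own assumption, not an ISA formula.
INCOME_WEIGHT = {"high": 0.5, "upper-middle": 1.0, "lower-middle": 1.5, "low": 2.0}

def payouts(fund: float, countries: dict) -> dict:
    """Split `fund` by population share, tilted toward lower-income states.

    `countries` maps name -> (population, income group).
    """
    scores = {name: pop * INCOME_WEIGHT[group]
              for name, (pop, group) in countries.items()}
    total = sum(scores.values())
    return {name: fund * score / total for name, score in scores.items()}

# Hypothetical three-country example with a $100M annual fund:
example = {"A": (50e6, "high"), "B": (100e6, "lower-middle"), "C": (20e6, "low")}
for name, share in payouts(100e6, example).items():
    print(f"{name}: ${share / 1e6:.1f}M")  # A: $11.6M, B: $69.8M, C: $18.6M
```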
Reparations illustrate an ethical dimension of deep-sea mining that is absent on land; creation of a new terrestrial mine entails no obligation to consider negative economic impacts of the new production on other mining companies or countries. In contrast, the definition of Area resources as the common heritage of humankind provides a unique basis for such reparations. As can occur with any new mineral supply, deep-sea metals could reduce metal prices relative to the counterfactual, including for those mined from terrestrial ores. This could cause economic losses in countries economically dependent on terrestrial mining. A report to the ISA (Lapteva et al., 2020, see Table 8.2) identified 13 countries in which mining makes up a large share of export revenues and/or of GDP, including Zambia, DRC, Eritrea, Chile, Lao People's Democratic Republic, Mongolia, and Peru (copper); Madagascar and Zimbabwe (nickel); DRC (cobalt); Gabon (manganese); Mauritania, Namibia, and Papua New Guinea (cumulative). Whether reparations could make up for such losses remains to be seen.
Potential for lower environmental impacts. Depending on the processes selected, nickel, manganese, cobalt, and copper from nodules could have a lower environmental impact than obtaining those same metals from land ores, including up to 70% less global warming emissions to make one billion EV batteries and up to 94% less sequestered carbon loss (Paulikas, Katona, Ilves, & Ali, 2020), substantially reduced waste streams (Paulikas et al., 2021), up to 90% less freshwater used, fewer toxic exposures, and fewer injuries and fatalities to miners and inhabitants of surrounding communities (Hein et al., 2020;Paulikas, Katona, Ilves, Stone, et al., 2020).
If society's objective is to obtain metals (or other materials) to support the green transition with the lowest possible negative impact, then deep-sea mining is not a yes-or-no question, but rather a broader consideration of choosing the best overall strategy for sustainability. "Best" can be viewed from a broad, global standpoint: What will be best for ecosystems and species on land and at sea, along with what will be best for the atmosphere, freshwater cycle, human health, and economic health of nations and their societies-in short, a planetary perspective. Accomplishing this equitably will require representative input from all stakeholders-a tall order in any case, but especially when crucial decisions are time limited.
Ethical objections to deep-sea nodule collection
Most opposition to deep-sea mineral extraction focuses on the large area to be disturbed, impacts on biodiversity, and possible impacts on broader oceanic or atmospheric processes. We evaluate each of these in turn.
Large size of area. As one scenario to provide intuition about impact scale, we consider the seabed area that would be needed to supply metals for a global automotive fleet of one billion Tesla 3-type EVs with NMC-811 (75 kWh) batteries. This total area is large, ~432 000 km² over a 30-year period, in part because nodules are collected at the seabed surface, a two-dimensional problem. By comparison, an estimated 156 000 km² of land would be affected for metals to build those batteries, including deforestation of 66 000 km² (Paulikas, Katona, Ilves, Stone, et al., 2020). The billion-EV seabed area is ~36% of the CCZ area contracted for exploration (1.2 million km²); ~10% of the entire CCZ (4.5 million km²); 2% of the North Pacific Ocean's abyssal seabed; 0.2% of the global abyssal seabed; and equivalent to ~1% of the world's agricultural land (World Bank, 2020b) or ~9% of the seabed annually trawled by industrial fishers globally (Sala et al., 2021).
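The comparative percentages above are straightforward ratios. The following minimal Python sketch reproduces them from the quoted figures; the agricultural-land and annually trawled seabed areas are back-calculated from the ~1% and ~9% figures and are therefore assumptions rather than numbers quoted in this paragraph.

```python
# Seabed area for a one-billion-EV battery fleet, compared with reference
# areas quoted in the text; all values in square kilometres.
NODULE_AREA = 432_000  # collected over a 30-year period

REFERENCES = {
    "CCZ area contracted for exploration": 1_200_000,
    "entire CCZ": 4_500_000,
    "world agricultural land": 48_000_000,  # implied by the ~1% figure (assumption)
    "seabed trawled annually": 4_900_000,   # implied by the ~9% figure (assumption)
}

for name, area in REFERENCES.items():
    print(f"{NODULE_AREA / area:.1%} of {name}")
# 36.0% of CCZ area contracted for exploration
# 9.6% of entire CCZ
# 0.9% of world agricultural land
# 8.8% of seabed trawled annually
```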
The remaining risk perspectives are not unique to deepsea mineral extraction and are ethically relevant considerations for any mineral-exploitation activity, but they command heightened interest because nodule exploitation may begin in the near future.
Recovery time. Nodule exploitation will indeed harm deep-sea organisms and the abyssal seabed. Recovery time will be very long, and recovered systems will probably differ from predisturbance baselines in species composition, diversity, and population densities. We suggest that recovery for most groups would proceed at a slow pace comparable to that after disturbance in terrestrial habitats, and in similarly nonlinear fashion, with different species and ecosystem functions (e.g., carbon fixation, nitrogen fixation, nitrification, denitrification, and mineralization) recovering at different rates. As is typical for recovery after disturbance, widely distributed species that are actively mobile or transported by currents or wind may pioneer areas within years or a few decades after disturbance ceases, with serial addition and turnover of species as well as recovery of community functions continuing for hundreds of years or more (Haynes, 2014).
Considering the ocean first, responses to a few short (days), small-scale (≤tens of km) experimental commercial and scientific disturbance studies are available for the abyssal seabed, although results are confounded by major methodological differences. At seven sites in the Pacific, multiple surveys assessed faunal recovery over periods of up to 26 years. Almost all exhibited some recovery in faunal density and diversity for meiofauna and mobile megafauna, often within one year, but very few faunal groups returned to baseline or control conditions after two decades, and different faunal and functional groups responded differently. At the best studied site, the DISturbance and reCOLonization (DISCOL) experiment in the Peru Basin, the loss of the surface sediment layer caused long-term (i.e., beyond several decades) reduction in microbial activities, organic matter turnover, nitrogen cycling, and microbial growth rates. Because microbial communities form the very basis of the abyssobenthic food web, all fauna directly or indirectly dependent on microbial biomass production would take longer to recover than microbial communities themselves (Vonnahme et al., 2020). Processing of fresh phytodetritus by bacteria, nematodes, and holothurians in DISCOL plow tracks had not fully recovered after 26 years compared with reference sites (Stratmann et al., 2018). In general, mobile and smaller organisms tended to have greater potential for recovery of both population density and diversity, but high variance in recovery rates among taxa obscured any general pattern of recovery or successional stages (Gollner et al., 2017). Recovery times for all organisms and functions would be even longer in the CCZ, where sedimentation rates are much lower than in the Peru Basin, and certainly with the prolonged disturbance of full-scale industrial nodule exploitation.
Large sessile fauna exhibited no recovery during the 26 years since the DISCOL disturbance, and it is clear that nodule-dependent megafauna would suffer most harm and recover only very slowly. Fauna attached to any collected nodules would die. Similar specimens may survive on nodules that remain uncollected within a contract zone or elsewhere in a designated Area of Particular Environmental Interest (APEI), reference zone, or undesignated portion of the CCZ, which makes proper design of APEIs and ecological studies of different CCZ regions critical. To the extent that organisms capable of bioturbation recovered from disturbance, bioturbation could in time uncover buried nodules (Dutkiewicz et al., 2020). Radiochemical studies of nodules from North Pacific sites near the CCZ indicated average rollover rates of 1000 to 100 000 years (Hun & Teh-Lung, 1984), so some buried or partially buried nodules could become available for recolonization far sooner than the million years or so needed to form new ones. This might be more likely in areas with greater bioturbation depths, such as portions of the eastern CCZ, where bioturbation is usually limited to the upper 7 cm but reached 13 cm at one site (Volz et al., 2018). In any case, it seems likely that time scales of many millennia will be needed for recovery of nodule-dependent organisms, although initial recolonizations would begin sooner. By comparison, centuries or millennia are also required for recovery of forest ecosystems disrupted by terrestrial mining. For example, ~60-year-old secondary forests in Amazonian Brazil contained just over 41% of the average carbon density and 56% of the tree diversity of the nearest primary forests (Elias et al., 2020). Furthermore, after cutting in the Atlantic Rain Forest, an estimated 100-300 years were needed for animal-dispersed species, nonpioneer species, and understory species to reach levels found in mature forests. However, regaining pre-impact levels of endemism would need 1000-4000 years (Liebsch et al., 2008).
Species extinction. The risk of species extinctions is problematic (e.g., Heffernon, 2019). As summarized in Jones et al. (2021), it is hypothesized that the CCZ benthos includes both species with widespread distributions and many rare species. Extinctions could occur in species endemic to very restricted distributions within exploited zones, for example, in meiofauna such as nematodes (Macheriotou et al., 2020). Such extinctions may be very difficult to confirm because of the effort needed to ground-truth baseline distributions with sufficient resolution to demonstrate the lack of living individuals of rare species. If high percentages of nodules are removed, nodule-obligate megafauna may also face the highest risks of extinction; such fauna contributed approximately 50% of all morphotypes observed in the UK-1 exploration contract area, eastern CCZ (Amon et al., 2016).
The ISA has set aside nine APEIs totaling ~1.4 million km² (31% of the CCZ), intended to "Protect biodiversity and ecosystem structure and function by a system of representative seafloor areas closed to mining activities" (ISA, 2011). In addition, portions of each contract area would remain undisturbed, including areas with lower nodule cover, topographies too steep to mine, and ISA-required set-aside preservation reference zones (PRZ) within the contracted areas. Impact reference zones (IRZ), also required by the ISA, will further aid in measuring, managing, and preventing impacts, including species harm, within each contractor zone.
The ISA's APEI system and other set-asides should reduce the likelihood of extinctions, but it is uncertain whether this will be enough to prevent them. Not all APEIs have been surveyed in sufficient detail to determine whether they are adequate to fulfill their proposed purpose, and some may not be. Surveys of APEI-6, for example, found a lack of large nodules and the habitats they create, thereby differing from areas specifically targeted for mining activities (Jones et al., 2021). This led the authors to suggest that additional APEIs, and/or other management activities beyond the APEI network alone, could be needed to fulfill the ISA mandate.
Harm to global systems. The utilitarian planetary perspective leads one to consider whether polymetallic nodule collection can adversely affect systems at a planetary level. For instance, concern has been expressed that deep-sea mining may contribute to climate change by releasing organic carbon from sediments (e.g., Chin & Hari, 2020; Greenpeace International, 2019). However, Atwood et al. (2020) stated that carbon in deep-sea sediments along the continental slope, abyssal basin, and hadal zones may be more resistant to disturbances than coastal continental shelf sediments; and that even if it were remineralized, this would not influence atmospheric CO₂ in the near future because deep-sea carbon cycling works on millennial time scales. A second systems concern is ocean acidification.
Ocean uptake of anthropogenic CO₂ has already begun dissolving sedimentary CaCO₃, which normally neutralizes excess CO₂ and prevents runaway acidification, in sediments of the deep Atlantic Ocean (Sulpis et al., 2018). However, the impact from deep-sea mining would be dwarfed by ocean uptake of CO₂ from the atmosphere: CCZ nodules needed for one billion batteries would displace 5.83 × 10⁸ g of sequestered carbon and disrupt sequestration of an additional 2.44 × 10⁸ g, for a total of 8.27 × 10⁸ g over a ~30-year period (Paulikas et al., 2021), seven orders of magnitude less than the 2.5 ± 0.4 PgC/year (~2.5 × 10¹⁵ g) annually absorbed by the ocean (Watson et al., 2020). A third concern is pollution by toxic metals (Chin & Hari, 2020; Christiansen et al., 2020; Greenpeace International, 2019). Yet, currently, there is no evidence that dissolved metals would be released along with the sediments and fines (Muñoz-Royo et al., 2021). Paul et al. (2021) evaluated the risk of toxicity from dissolved copper released from pore water by deep-sea mining as negligible; they also called for further research on different size fractions of copper, co-release of several metals, and variations of pH.
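The orders-of-magnitude comparison above can be illustrated with a minimal sketch using only the quoted figures; the computed gap is about 6.5 orders of magnitude, consistent with the authors' rounded "seven":

```python
import math

# Carbon figures quoted above (grams), from Paulikas et al. (2021)
# and Watson et al. (2020).
displaced = 5.83e8            # sequestered carbon displaced over ~30 years
disrupted = 2.44e8            # additional sequestration disrupted over ~30 years
total = displaced + disrupted            # 8.27e8 g
ocean_uptake_per_year = 2.5e15           # ~2.5 PgC absorbed by the ocean annually

print(f"Total nodule-related carbon: {total:.2e} g")
print(f"Orders of magnitude below one year of ocean uptake: "
      f"{math.log10(ocean_uptake_per_year / total):.1f}")   # ~6.5
```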
Other utilitarian concerns. Local utilitarian concerns may arise, particularly regarding interference with fishing or bioprospecting. Estimated annual fish catches (mainly tuna) in the CCZ within 200 km of nodule contract areas are 35 000-89 000 metric tons, with approximately 10% each taken by Ecuador, Mexico, Panama, and Spain, and 15% by the United States (van der Grient & Drazen, 2021). Substantial negative impacts from nodule collection may be unlikely, given the depth of nodule collection, if riser-water discharge occurs deeper than 1000 m (van der Grient & Drazen, 2021), but impacts should be monitored by countries highly dependent on those fisheries; stakeholders are encouraged to express concerns during the ISA's Environmental Impact Assessment (EIA) process. Extensive bioprospecting opportunities are also likely to remain in the region, given the exploited area may be only ~10% of the CCZ or 2% of the abyssal seabed of the North Pacific Ocean. Moreover, because contractors biosample at greater density than bioprospectors can often afford, surveying for seabed metals and genetic resources together could have a positive impact on this utilitarian concern by reducing costs, improving economic viability, and allowing more comprehensive assessment of environmental issues (Royal Society, 2017). Metal contractors can also make archived samples available to bioprospectors, providing more material at lower cost than might otherwise be possible.
Intrinsic rights and rights of nature. Besides harm to groups (i.e., nature, species, populations), some animal-rights supporters advocate the intrinsic rights of individual animals (and sometimes plants) to exist and not be harmed. Intrinsic rights refer to an organism's worth independent of how others, including humans, use it (de Vere et al., 2018; Francis, 2015). Increased sentience (variously defined as the ability to sense pain; "the ability to feel, perceive, or be conscious, or to experience subjectivity" (Bekoff, 2013); or "[an animal] which has feelings and such animals may have some ability to evaluate the actions of others in relation to it and third parties; remember some of its own actions and their consequences; assess risk; and have some degree of awareness" (International Whaling Commission [IWC], 2011)) magnifies the importance of intrinsic rights. Whether ecosystems themselves might have moral rights or legal standing is discussed by Dasgupta (2021).
Ethical concerns based on existence rights of nature (Harvard Law Review, 2016; Surma, 2021) and intrinsic rights of organisms or species, although well intentioned, can become problematic if one considers only a narrow ecosystem such as the seabed. This is because competing rights of nature become relevant once the planet as a whole is considered. If CCZ nodules are not used, the footprint and impacts of terrestrial mining will increasingly affect nature on land as terrestrial mining expands to meet rising battery-metal (or other green transition) demands. Falling terrestrial ore grades imply that existing mines will either need to excavate more ore, with the consequent increases in greenhouse gases, wastes, and toxins, or find new higher-grade ores in virgin territory, or both. Laterites that underlie tropical forests and grasslands are the most promising new source of high-grade nickel ore. Mining them brings the potential for increased conflict with indigenous cultures and with the high biodiversity of those regions.
In the CCZ, some animals (e.g., marine mammals) possess the higher "IWC" rank of sentience, and perhaps some others possess the lower "de Vere" rank; but most multicellular animals likely feel some version of pain. Animals at risk in some terrestrial habitats where battery metals are mined include many more species of "higher rank" sentience, including some that may qualify for personhood, such as orangutans, gorillas, elephants, and whales (Grant, 2020; Nowlan, 2019; Staker, 2017). An ethical rights-of-nature-based approach to CCZ nodule collection includes generating awareness of the unavoidable harm or pain being caused, while also weighing the pressing reasons for it, including the harm it can avoid to humans, plants, animals, and other organisms on land.
Enablement of future undesirable action (slippery slope). Permitting nodule exploitation in the CCZ could spur more of it, or other types of deep-sea mineral exploitation (or other uses), in the Area (Ramirez-Llodra et al., 2011) or elsewhere. At the same time, opposing it based on a slippery-slope argument risks ignoring investigation of benefits it could produce. Counterintuitively, a decision by the ISA not to permit nodule collection in the Area could spread environmental and social impacts elsewhere, with reduced opportunity for international oversight, if countries respond by mining within their exclusive economic zones. Economic supply-demand forces would provide some limit to expanding mineral extraction initiatives, and each project would require thorough evaluation on its own. Any projects in territorial waters should be governed by state regulations and procedures that are no less effective than those required by the ISA for projects in the Area. If nodule exploitation is approved, the ISA should judge any further proposals for Area projects on their own merits, but with increased attention to cumulative effects, and applying ethical lessons from the nodules case when appropriate.
Practical opportunities for ethical choices
Numerous opportunities exist for ethical input in the nodule-metals industry to reduce harm, accelerate emergent opportunities for environmental or social gains, and even have positive spillover effects on other industries. Figure 3 summarizes a full range of ethical opportunities by category. Figure 4 shows specific opportunities for each value-chain step.
Broadly, ethical results of deep-seabed mineral exploitation can manifest through:
• Strategic objective setting, leading to long-term strategic plans guided by ethical principles
• Specific engineering and operational decisions in offshore collection and onshore processing
• Stakeholder commitments, and incorporation of stakeholder viewpoints
• Democratization of ethical governance through ISA processes and individual contractor actions
• Investment and alignment around circular-economy futures and recycling
• Deepening ethical opportunities manifestly present with CCZ nodule exploitation

Strategic objective setting. As Billett et al. (2019) stated, "Unless contractors include ecosystem services, and the costs associated with their loss or impairment, as part of their decision-taking, and as part of an ethical approach to ensuring the health of the oceans for the Common Heritage of Mankind, it is unlikely that resource and engineering managers will be stimulated to devise technical solutions to reduce environmental harm." In addition to the ISA's explicit biodiversity-preservation objectives, incorporating heartfelt engagement with the environment into system-design processes, for instance, as an explicit component of cost-benefit analyses, can lead to more successful solutions. Ethics and environmental impact management can also be set as a highest-level objective, directly driving subsystem requirements, to help ensure that engineering optimizations are not made in a vacuum and do not cause unintended environmental problems or harm (e.g., Melin et al., 2021). The Environmentally Responsible Company/Entity Ethic section of the International Mining and Minerals Society's Code of Conduct (IMMS, 2011) is a good start, but it could be strengthened.
Engineering and operational design decisions. Cuvelier et al. (2018) summarized options available for mitigating harm during offshore collection and afterward. Their potential effectiveness can be estimated after each contractor completes ISA-required, on-site tests of reduced-scale collection systems, and more accurately known after deployment of full-scale systems, about 2026 or later (Shukman, 2021). Among other objectives will be designing collection robots that minimize harmful lighting and noise and sediment compaction and disruption, and that direct the elevated sediments in directions or patterns that reduce sediment flow to the riser system and deposition thickness on the seabed. Operation of collectors could also be informed by real-time feedback from video monitors. Minimizing sediment disturbance depth will be a priority to limit the number of organisms directly harmed, reduce benthic plumes, and reduce the sediment content of riser water. Collection patterns and practices can be designed to minimize collateral damage in ways analogous to forestry initiatives, such as reduced impact logging (RIL; Bicknell et al., 2014) and RIL-C for climate (The Nature Conservancy [TNC], 2019). Riser systems can be engineered to minimize the temperature difference between slightly warmed deep-sea water and ambient water at its point of discharge. Fine filtration or centrifugation of water on the collection vessel can capture metal-containing nodule fragments and reduce the sediment load in discharge water. Discharge depth, whether in the water column or at the bottom, can be chosen carefully to minimize harm to inhabitants of the seabed and overlying water column, which has until recently received less attention (Christiansen et al., 2020; Drazen et al., 2020; Robison, 2009). Even if the sediment content of riser streams is minimized by reducing sediment entrainment and optimizing on-board filtration, discharge plumes will contain large quantities of small particles that can spread for hundreds of kilometers or more during the year or more they require to sink (Muñoz-Royo et al., 2021). The important unknowns are (1) how long it will take for dilution of plume sediments to background levels (~20 μg/L in the CCZ); and (2) over what area or volume sediment loads will exceed tolerable levels that do not clog feeding or respiratory structures, compromise neutral buoyancy of jelly organisms, or jeopardize visual range and acuity in organisms that depend on vision for communication, reproduction, or predation (Drazen et al., 2020; Robison, 2009). Dilution of sediment load to background level (10 µg/L) reportedly occurred within 1 km for flocculated benthic plumes produced during experimental mining of cobalt-rich crust on a seamount 300 nm SSW of the Canary Islands (Spearman et al., 2020). Based on flocculation and the higher levels of background sediment present in the CCZ, those authors expressed confidence that the area over which plume turbidity exceeds natural turbidity would be similarly limited, but such data will not be available until reduced-scale collection system tests are completed. The utility and feasibility of actions to restore habitats or populations (e.g., distribution of artificial substrata for colonization, larval seeding, transplantation of organisms) are not yet known. Collected nodule ores would be transported by ship to a port, then by ground transport as needed, to reach onshore processing plants and refineries.
FIGURE 4 Smart nodule collection: opportunities for ethical influence from cradle to grave. This paper specifically focuses on metal production, which has a "cradle" (seabed) to "gate" (refined metal) scope, but a circular economy encompasses processes from "cradle" to "grave." This figure includes the entire cradle-to-grave process, adding in gate-to-grave steps of product manufacture, use, and disposal to illustrate opportunities for ethical influence along the entire life cycle. Photo credits: Columns 1, 2, 4: Bjarke Ingels Group, Copenhagen. Column 3: The Metals Company, Vancouver. Column 5: General Motors. Column 6: Volkswagen.

Transport will generally use fossil fuels and emit gaseous and particulate pollutants; all modes have ethical opportunities for future upgrades by incorporating renewable energy. Preferential use of ships seems beneficial; at approximately 0.2 MJ/ton-km, shipping is currently approximately 1.5 and 13 times more energy efficient than rail or truck, respectively, with associated CO₂ emissions of approximately 0.14 kg CO₂e/ton-km (Wakeland et al., 2012; see also Fenhann, 2017), also proportionately lower, although particulate emissions are much higher as a result of combustion of less refined diesel fuel. Environmental impacts also decrease with shorter transport distance, slower speeds, and use of newer carriers. Battery-powered ferries, coastal freighters, and coastal tankers are operating or under construction (Crider, 2021; Hockenos, 2018). Renewably powering ocean-going vessels is harder to achieve, but Maersk (2021) is developing large container ships operable with carbon-neutral methanol or biofuel; and hydrogen or ammonia fuels could further decarbonize ocean shipping (Timperley, 2020). Some trains already operate with overhead grid power, and new versions will incorporate rechargeable batteries (Halvorson, 2020). Battery-operated trucks are on the horizon. Operation of EVs of all sorts will be most environmentally advantageous if they are recharged by renewably powered electric grids.
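To make the energy-intensity comparison concrete, the following hedged sketch derives rail and truck figures from the shipping baseline and the 1.5x and 13x multiples quoted above; the derived values are illustrative approximations, not figures taken directly from Wakeland et al. (2012):

```python
# Freight energy intensity, derived from the approximate figures quoted above:
# shipping ~0.2 MJ/ton-km, with rail ~1.5x and trucking ~13x less efficient.
ship_mj_per_tkm = 0.2
modes = {
    "ship": ship_mj_per_tkm,
    "rail": ship_mj_per_tkm * 1.5,   # ~0.3 MJ/ton-km
    "truck": ship_mj_per_tkm * 13,   # ~2.6 MJ/ton-km
}
for mode, intensity in modes.items():
    print(f"{mode:>5}: ~{intensity:.1f} MJ/ton-km")
```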
Stakeholder commitments. Actors are called to make commitments to stakeholders that uphold sustainable and equitable principles (Figure 3), including ongoing funding of scientific research, real-time monitoring of impacts, and a commitment to cease operation if serious harm is caused. In alignment with their strategic objectives, contractors can add environmental, social, and governance (ESG) commitments to their bylaws; at a minimum, they can commit to transparency in sustainability reporting. All parties up and down the value chain can commit increased attention to the ethical implications of their decisions.
Democratization of ethical governance. Multiple opportunities for governance democratization present themselves. The ISA is charged with representing its member states; members can push to set ambitious impact standards and hold contractors accountable, and to set high environmental standards in the EIA process. Contractors can offer scientific and stakeholder engagement with real-time adaptive management systems and create open discussion forums. Stakeholders can hold contractors accountable for ESG practices and encourage best-practice transparency. Contractors employing ethical practices could also gain opportunities in the marketplace through third-party certifications, as has occurred in the seafood industry (Marine Stewardship Council, Aquaculture Stewardship Council), forest products industry (Forest Stewardship Council), environmental management systems (International Organization for Standardization [ISO 14001]), and others. Those bodies drew their power from the ethical framework shared by many consumers who, in turn, encouraged decisions based on sustainability and long-term health of a resource and its consumers and workers. Whether there will be a callout for a "best practice label" for metals remains to be seen.

Circular economy. Also uncertain is whether participants in the CCZ nodule industry and its supply chains might undertake collaborative actions toward sustainability that could support such a label, perhaps by striving to create circular economies at micro scale, emphasizing cleaner production and energy conservation, sharing information, and always prioritizing environmental health. Many opportunities exist for larger scale, longer term actions and policies by industry, governments, and international organizations to hasten development of circular economies (e.g., Haugan et al., 2020; Söderholm & Ekvall, 2020).
However, if new metal supplies from nodules (or other sources) do ease supply-chain concerns for nickel and cobalt, then pressure to invest in upgrades in recycling technology or other supply-chain improvements could decrease. Deep-sea mineral players could counter this effect by prioritizing working with manufacturers to invest in circular metal economies, using a portion of profits to promote recycling research, and committing within their bylaws to end-to-end ownership of the eventual recycling and reuse of all metals they mine.
Deepening inherent opportunities. Some opportunities are particularly available to a new industry with no infrastructure to retrofit, potentially able to take advantage of global momentum for sustainability and ESG reporting (WEF, 2020) in its design, and to benefit from lessons learned from the terrestrial mining industry. In such ways, CCZ nodules create the potential for unique benefits in the nascent industry. The ISA can enforce strict environmental mandates in its regulations, raising the bar far higher than typically seen in terrestrial mining. Individual contractors can also set an example of clean onshore processing, due to nodules' inherently high ore grades and low levels of heavy elements; new metallurgical processing plants can be optimized for waste-free processing and refining of nodule ores (Sommerfeld et al., 2018). Ore transport by ship allows greater optionality in the location of processing facilities, greater ethical choice in the use of renewable sources, low land-ecosystem impact, and proximity to end markets to ensure byproducts are not stockpiled or turned into waste streams. Setting this environmental example with nodule processing could force similar competitive progress for terrestrial mining as well.
In concluding, we emphasize that the opportunities listed in Figures 3 and 4, as well as discussed in the text, represent situations in which ethical input could improve decisions. The various stakeholders involved in CCZ nodule exploitation may not all capitalize on such opportunities in the same way or to the same degree. Yet, taken as a whole, participants in the new industry have a novel chance to demonstrate ethical leadership. In doing so, actors including regulating authorities, states with vested interests in deep-sea mineral extraction, mining supply-chain actors, and communities are called to take ethical action. If these ethical opportunities are taken, they may help improve overall outcomes for biodiversity, ecosystems, and the entire planet.
CONCLUSION
On 25 June 2021, Nauru exercised its right to invoke the two-year rule, pursuant to the Annex to the 1994 Implementing Agreement. In doing so, Nauru triggered a firm deadline for the ISA to adopt rules, regulations, and procedures for approving work plans for exploitation in the CCZ (ISA, 2021). Four days later, a response came from the Deep-Ocean Stewardship Initiative (DOSI), a global union of interdisciplinary experts who pool research, skills, and expertise to advise on sustainable deep-ocean governance and resource management. They wrote to the ISA that two years is not sufficient to understand deep-sea mining's potential impacts on species and ecosystems (DOSI, 2021), and that this ran counter to the precautionary approach required by the ISA (see Jaeckel, 2017).
On the other hand, Article 3 of UNESCO (2017) provides conflicting guidance: "Precautionary approach: Where there are threats of serious or irreversible harm, a lack of full scientific certainty should not be used as a reason for postponing cost-effective measures to anticipate, prevent, or minimize the cause of climate change and mitigate its adverse effects." This leaves stakeholders to question which is the greater threat: commencing nodule exploitation before impacts are completely understood? Or risking a long delay in nodule-metals delivery that could compromise terrestrial species and habitats, and possibly lead to metal shortages that delay transitions to a renewably powered economy?
On one hand, a clear, shared ethic has emerged and persisted in recent years, prioritizing environmental protection and sustainable production of the very materials required to construct the global green transition. In 2019, Amnesty International launched an "ethical batteries campaign," calling for action by governments, industry, innovators, investors, and consumers to create ethical and sustainable batteries free of human rights abuses, conflict minerals, and climate harm within five years (Amnesty International, 2019; Church & Crawford, 2018). Yet other aspects of the shared ethic remain less clear. Some corporations and organizations prefer inaction until risks are understood and alternatives are exhausted, including Google, BMW, Volvo, Samsung SDI (Reuters, 2021), and the IUCN (2021). The IUCN resolution's exhaustive requirements before commencement of the nodule industry go far beyond impact risk assessments, to include implementation of the "polluter pays principle," reduction in primary metal demand, transformation to a circular economy, responsible terrestrial mining practices with public consultation mechanisms, informed consent of potentially affected communities, and reformation of the ISA for greater transparency, accountability, inclusion, and effectiveness. Accomplishing these laudable objectives may be improbable within the proposed <10-year time frame, the same critical period during which use of nodule metals might help mitigate crises of climate, biodiversity, water, and other nested emergencies.
Furthermore, the IUCN (2021) resolution seems to ignore impacts of accelerated terrestrial mining on the 4885 species of animals and 5740 species of plants included on its Red List, for which "Mining" and "Quarrying" are shown as threats, despite the IUCN having passed another resolution that 30% of Earth's surface be designated as "protected areas" to halt and reverse the loss of wildlife. Calls to halt deep-sea mining also seemingly fail to acknowledge the large role that deep-sea metal contractors play in sponsoring the research needed for the rigorous impact assessments called for in moratoria.
It is for these reasons that an ethical approach to deep-sea mining demands broad and comprehensive consideration. Can the question "Is polymetallic nodule collection an ethical choice?" be answered by mapping the nexus of competing objectives and needs of many stakeholders, and by holding ethics as a top priority across the value chain? Doing so requires an inclusive and ethically purposeful effort to amass factual scientific information, acknowledge the risks of commencing the industry with incomplete scientific understanding, and acknowledge the risks of a moratorium shifting the environmental burden to terrestrial mining and introducing nickel and cobalt supply-chain risks.
An ethical approach to deep-sea mining would have the industry see itself playing a critical role in the global quest for sustainability, as set out in UN (2015), CBD (2020), and related plans. Academic scientists can evaluate existing evidence of favorable climate, waste, water, and human health advantages that nodule mining might provide and encourage contractors to prioritize environmental best practices (see, e.g., Hein et al., 2020; Paulikas, Katona, Ilves, & Ali, 2020; Paulikas, Katona, Ilves, Stone, et al., 2020; Paulikas et al., 2021). The industry and its stakeholders must honestly and transparently acknowledge the harm the industry inflicts, estimate its extent, and do everything reasonably possible to minimize it. The terrestrial mining industry is encouraged to do the same. Voices championing preservation of the ocean should be balanced by attention to harm to rainforests and other terrestrial ecosystems affected by the decision. Those defending rights of nature would also seek that balance.
As Hein et al. (2020) stated, the growth of the deep-ocean mining industry offers an opportunity to develop green technologies and policies as the industry develops, while also initiating new strategies to mitigate its environmental effects. All stakeholders would need to take responsibility for shaping the new industry, while recognizing its interconnectedness to the global economy and environment, as well as humanity's interconnectedness with nature. Bennett et al. (2017) advocated the development of a comprehensive, broadly accepted code of conduct to ensure marine conservation processes and actions are fair, just, and accountable, as well as ecologically effective. Many of the social concerns those authors noted do not exist in the CCZ per se, but developing an agreed-upon code of conduct shared by members of the CCZ value chain (and periodically updating it as needed) would contribute to the credibility of conservation efforts, both within the Area and beyond it.
It is also worth internalizing, both in our individual consciousnesses and in corporate strategic planning and institutional visioning, a new paradigm: that primary extraction of any metal ores from Earth should be a time-limited industry. Achieving this would require widespread anticipation, encouragement, and participation in one of the greatest challenges humanity faces: the creation of sustainable circular economies that will allow people and nature to live in greater harmony. Such harmony would not be pain free, as humans may still need to fish, plow, practice forestry and animal husbandry, and cull some species to protect others (Barkham, 2020). While granting the assertion by Griffin et al. (2020) that compassion should not be the sole basis for conservation, we still may hope that such empathy, along with care and respect even for species less likely to kindle emotions in humans (Miralles et al., 2019; Tonino, 2020), may help guide our actions in the deep sea and elsewhere.
Given the variety of moral and ethical backgrounds, vested and financial interests, and problem-solving approaches represented among the many polymetallic nodule stakeholders, compromises will be needed. The best hope may be for policies, decisions, regulations, and agreements that produce the fewest negative impacts on air, water, land, sea, people, nature, and species, while providing the most broadly equitable suite of benefits across those categories. That formula, a Utilitarian Approach, would by definition include the Rights Approach, Fairness (Justice) Approach, Common Good Approach, and, if each stakeholder acted with the integrity, fairness, generosity, and tolerance of which we humans are capable, the Virtue Approach.
The day before we submitted the revised manuscript for this paper, Claudet et al. (2021) published "Transformational opportunities for an equitable ocean commons" online. Without mentioning ethics, those authors nevertheless framed pragmatic goals in ethical terms: an equitable future, equity for people and nature, expansion beyond anthropocentric notions of equity and rights in ABNJ to explicitly encompass the natural world and its components, intrinsic value of the ocean, rights of nature, and the ocean as a rights-bearing entity. Expanding such considerations to all aspects of the green transition, including where and how metals are sourced, will produce a more beneficial future for all.
ACKNOWLEDGMENT
We thank guest editors Guy Gilron and Samantha Smith for editorial suggestions on preliminary versions of the manuscript for this paper and three anonymous reviewers for critique of a submitted draft. Their suggestions substantially improved the final product.
CONFLICT OF INTEREST
S. K. and D. P. are independent consultants to The Metals Company (TMC), formerly DeepGreen Metals, and were paid for their time researching and writing this paper. G. S. S. is an employee of TMC. G. S. S. and D. P. are TMC shareholders. TMC did not review or constrain this paper in any way.
DATA AVAILABILITY STATEMENT
This paper was prepared using information published in the literature cited. No primary data were generated.
"year": 2021,
"sha1": "2233d25cef19d33807964ae040057ddcdbf8f4d7",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/ieam.4554",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "08d6b423da84965fc4a0357067e5b942e1a2b871",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Corporate Governance, Strategic Choices and Performance of Financial Institutions in Kenya
It has been argued that corporate governance plays a critical role in determining the strategic direction of corporations. This is achieved through formulation of strategic choices that utilize firms' internal resources to align corporations to the external environment for optimal performance. In this study, we sought to determine the influence of corporate governance and strategic choices on the performance of financial institutions in Kenya. A cross-sectional descriptive research design was adopted, and primary data were collected from top executives of 108 financial institutions. We analyzed the data using regression analysis. Results indicate that corporate governance and strategic choices significantly influence organizational performance. Further, the study revealed partial mediation by strategic choices of the relationship between corporate governance and organizational performance. It was concluded that besides corporate governance being a key determinant of performance in organizations, adoption of appropriate strategic choices greatly enhances that performance. The results are important to organizational leaders, policy makers, investors and all corporate stakeholders in determining optimal strategies for organizational posterity.
Introduction
The ultimate growth and success of corporations are largely determined by the strategic choices adopted. Actualization of these strategic choices typically depends on the governance framework embraced by organizational leaders. As such, corporate governance lays the foundation for optimal utilization of firms' resources through formulation of strategies that align with the environment (Daily, Dalton, & Cannella, 2003). These strategies are largely sanctioned by boards of directors, making them a key pillar of corporate governance (Kiel & Nicholson, 2003). Further, codes of corporate governance grant directors the formal authority to ratify management's initiatives, evaluate their performance, and determine corporations' purpose, ethics and strategic direction (OECD, 2004; CMA, 2015). Intrinsically, as directors engage in strategic management processes, each board member's perception and interpretation of strategic issues facing corporations subsequently affects the strategic choices championed (Hambrick, 2007). Ultimately, these strategic choices impact firms' performance and organizational value.
Strategic choices have been highlighted as the vehicles through which organizations align to the environment, thus enhancing performance. Scholars have long recognized that firms' survival and success depend on both environmental forces and strategic choices (Child, 1972; Judge et al., 2015). The alternatives chosen depend on a variety of contextual influences arising from past events, present circumstances, and perspectives of the future (March, 1991). Strategic choices are the optimal objectives that a firm adopts to pursue value maximization. They are also viewed as the espousal of intended courses of action by an organization, in consideration of available resources and required commitment (Van den Steen, 2013). Thus, strategic choices enhance clarity of generic strategy and of an organization's strategic intent, thereby leading to high performance (Parnell, 2013). Supporting generic strategies are specific primary or secondary strategies that are either internally or externally related. This is achieved in organizations through linking firms' mission and vision to the strategic choices made. As such, good governance is a key determinant of the strategies adopted by firms and their implementation. Through good governance, corporate strategies are aligned to organizations' internal processes and prevailing external environmental forces. Another key role played by good governance is matching of corporate strategic choices for optimal utilization of firms' resources (Daily et al., 2003). It is argued that boards of directors are taking a more central oversight role in running organizations. As a result, a board's performance is directly linked to the firm's performance (Pettigrew, 1992). Thus, researchers and practitioners have sought better understanding of the processes and behaviors involved in effective board performance. Some of the considerations sought include board demographics, composition and skills (Forbes & Milliken, 1999). Despite all the evidence that links corporate governance to strategic choices for enhanced performance, inconclusive and inconsistent conceptual and empirical findings have been recorded. On one hand, boards of directors have been viewed as responsible for directing corporations' strategic direction for value maximization (Holderness, 2003; Carpenter & Westphal, 2001; Pugliese et al., 2009). A contrary perspective views board members as passive in strategy making and subject to management and CEO manipulation (Lipton & Lorsch, 1992). Further, while Heracleous (2001) argued that the board influences strategic choices and their implementation, consequently affecting firm performance, Essen, Oosterhout and Carney (2011) recorded no association between firms' governance stance and strategic choices, on the one hand, and performance on the other. These inconsistencies led to the need for further interrogation of how these variables interact, hence the impetus for this study. In this study, corporate governance was viewed through its various manifestations, which include the code of corporate governance, board skills, independence, diversity, size and board committees. Strategic choices comprised alliances, mergers, acquisitions, diversifications, divestments, adoption of information technology, innovation and product development. Further, performance was analyzed using the six perspectives of the sustainable balanced scorecard (SBSC): financial, customer focus, internal processes, growth and development, social issues and environmental consciousness.
Literature Review and Conceptual Hypothesis
In this study, industrial organization economics (IOE) theory was adopted for conceptualizing the linkage between corporate governance, strategic choices and organizational performance. We found IOE theory to be most suitable because of its structure-conduct-performance (S-C-P) paradigm, as suggested by Bain (1956) and Porter (1981). The theory emphasizes that industry structure determines firm conduct, which in turn determines performance (Scherer, 1980; Conner, 1991). Conduct is viewed as a firm's choice of key strategies, which are vital economic dimensions for performance (Porter, 1981). Structure provides stability in the economic and technical environment in which firms compete. Further, the structure determines the conduct to be adopted. This includes the information communication technology (ICT) embraced, degree of product differentiation, level of integration and barriers to entry (Porter, 1981; Scherer, 1980). In the current study, key strategies adopted by financial institutions to enhance performance include strategic alliances, mergers and acquisitions, product differentiation, and adoption of information technology for transacting, which significantly reduces operational costs and enhances innovativeness. The respective industry structure and conduct set the regulatory framework to be adopted by all organizations.
Further, they determine the governance mechanisms to be adhered to. This is achieved through formulation of codes of corporate governance at firm, industry and sector levels. In this study, IOE theory is further complemented by stakeholder theory. The theory recognizes that firms operate within an environment composed of different interest groups aside from the immediate owners, with diverse interests (Harrison & Wicks, 2013). Thus, there is a need to take all their interests into consideration while making corporate strategic decisions (Freeman, 1984; Lawal, 2012). Further, organizations are expected to expand their fiduciary duty to the local community and the environment in which they operate (Freeman, 1984). Thus, stakeholder theory provides a mechanism for connecting ethics and strategy. Therefore, firms that diligently seek to serve the interests of a broad group of stakeholders create more value over time, leading to high performance (Freeman, 1984; Harrison & Wicks, 2013). This study views the role of organizational leaders as that of making optimal strategic choices that maximize firm value for all stakeholders.
Strategic choices are viewed as the optimal objectives that a firm adopts to pursue value maximization. They are also viewed as the espousal of intended courses of action by an organization, in consideration of available resources, required commitment, persistence, irreversibility and presence of uncertainty (Van den Steen, 2013). Objectives are recognized as strategic when they represent matters of importance to an organization, particularly those bearing upon its ability to prosper in a competitive environment or where there is a need to maintain credibility (Child, 1997). Strategic choices are also regarded as the goals and plans that an organization sets to adapt and align with the internal and external environment. They can also be viewed as the outcome of intent and analysis of the options available, in reflection of their feasibility, prudence, consensus and acceptability (Gellerman & Potter, 1996).
Further, organizations exist to create value for stakeholders to posterity. This is achieved through accomplished corporate governance structures and practices. Structures identify the distribution of rights and responsibilities among various corporate stakeholders (Aguilera & Jackson, 2003). In addition, corporate governance practices involve board operations such as appointment, functioning, compensation and directing corporations' strategic direction (CMA, 2015; OECD, 2004). Subsequently, as directors engage in strategic management processes, each board member's perception and interpretation of strategic issues facing the organization affects the strategic choices made (Hambrick, 2007). As such, various attributes of the board permeate firms' core strategic decisions. Once these strategic decisions are actualized, they dictate the level of performance and the overall value of the firm. It is therefore important to consider individual board members' attributes at appointment to ensure the mix of competences and diversity required. Some of these attributes include board composition, board independence, board size, busy directors serving on multiple boards and board members' individual characteristics (Dewji & Miller, 2013).
The discussion on board involvement in strategy has been fuelled by a combination of contextual factors, alternate theoretical perspectives, and inconclusive empirical results. Machuki and Aosa (2011) found organizations' performance to be influenced by the strategic behavior adopted in response to the external environment. As such, organizational effectiveness depends, in part, on achieving a match between control strategies and the strategic context of the firm (Hoskisson, 1987). Holderness (2003) argued that boards are responsible for developing firms' nexus of contracts, thereby aligning the actions and choices of managers with the interests of shareholders. Moreover, boards of directors are argued to be legally responsible for the strategy of firms. This is due to their leadership position in directing firms' strategic direction, hence influencing the outcomes (Carpenter & Westphal, 2001; Pugliese et al., 2009).
On the contrary, boards are perceived to be passive in firms' strategy and subject to CEO and executive manipulation (Lipton & Lorsch, 1992). Furthermore, anecdotal evidence suggests that boards might destroy value when they become involved in strategy, due to their distance from day-to-day firm operations (Jensen, 1993). In addition, the presence of information asymmetries, and the need for boards to remain independent, contributes to making them inert in firms' strategy making (Hendry & Kiel, 2004). Further, it is argued that boards' participation in strategic decisions would make them co-responsible, thus jeopardizing the required distance between board members and managers (Boyd, 1995). It emerges that there is no consensus on whether boards of directors do or should execute their strategic decision roles effectively, thus leading to the need for further interrogation. Further, limited literature exists elaborating how corporate governance influences firm performance. It is argued that strategic choices align organizations' mission and vision to the operating environmental forces. Yet, the intervening effect of strategic choices on the relationship between corporate governance and firm performance remains unsettled. This led to the question: do strategic choices significantly intervene in the relationship between corporate governance and organizational performance in Kenya's financial institutions? Thus, the objective of this study was to establish the effect of strategic choices on the relationship between corporate governance and organizational performance. Strategic choices were viewed as the mechanisms through which boards of directors influence organizational performance. As such, strategic choices would intervene in the relationship between corporate governance and organizational performance. This objective was represented by the hypothesis below:

H1: Strategic choices significantly intervene in the relationship between corporate governance and organizational performance.
Methods
This study used a cross-sectional descriptive research design, necessitated by the need to describe the variables' interactions between and across the four categories of financial institutions (Zohrabi, 2013). This design attempts to define and describe study subjects by classifying them into various categories and relating the variables' interactions. In this study, financial institutions were categorized into four subcategories: banks, microfinance institutions (MFIs), insurance companies and deposit-taking SACCOs. The ideal sample size was determined in consideration of data homogeneity, the level of precision required and the desired degree of confidence. Thus, we used Israel's (1992) formula, n = N / (1 + N e²), where n = sample size, N = the population size and e = the error term of 0.05 (95 percent confidence level). From a population of 271 financial institutions, a sample of 162 was established to be ideal for the study. Subsequently, the identified sample size was distributed across the financial institutions as follows: banks 40, MFIs 12, insurance companies 55 and deposit-taking SACCOs 55.
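As a minimal check, Israel's formula with the stated population and error term reproduces the sample size adopted:

```python
import math

# Israel's (1992) sample-size formula: n = N / (1 + N * e^2)
N = 271    # population of financial institutions
e = 0.05   # error term (95 percent confidence level)

n = N / (1 + N * e**2)
print(math.ceil(n))   # 162, the sample size used in the study
```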
We developed a semi-structured questionnaire for collecting primary data. The questionnaire was organized into four main sections that included organizational demographics, corporate governance, strategic choices and organizational performance. The questionnaire was issued to one top executive in each sampled financial institution. Responses were received from 108 financial institutions, as analyzed in Table 1. The three variables of the study were corporate governance, strategic choices and organizational performance.
Corporate governance was operationalized along six dimensions that included the code of corporate governance, board skills, independence, size, committees and diversity. The second variable entailed key strategic choices adopted by financial institutions, such as strategic alliances, mergers and acquisitions, diversification, divestment, ICT adoption, product development and innovation. The sustainable balanced scorecard (SBSC) operationalized organizational performance along the six perspectives of financial, customer focus, internal business processes, learning and growth, social equity, and environmental consciousness. Composite indices of each variable were used for regression modelling.
The analysis was undertaken using Baron and Kenny's (1986) four-step regression model. The first step involved regressing organizational performance on corporate governance. The second step entailed regressing strategic choices on corporate governance. In the third step, organizational performance was regressed on strategic choices. Finally, organizational performance was regressed on both corporate governance and strategic choices. These steps are summarized below.
Figure 1. Path Analysis
Path A depicts the direct relationship between corporate governance and organizational performance. This relationship was found to be significant. Path B shows the interaction between corporate governance and strategic choices. In path C, the joint effect of corporate governance and strategic choices on organizational performance is outlined.
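For illustration, a hedged Python sketch of the four-step procedure is shown below using statsmodels; the data frame and coefficients are synthetic stand-ins, since the study's composite indices are not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for the three composite indices (hypothetical values).
rng = np.random.default_rng(0)
cg = rng.normal(size=108)                          # corporate governance
sc = 0.5 * cg + rng.normal(size=108)               # strategic choices
perf = 0.3 * cg + 0.4 * sc + rng.normal(size=108)  # organizational performance
df = pd.DataFrame({"cg": cg, "sc": sc, "perf": perf})

# Baron and Kenny's (1986) four steps:
step1 = smf.ols("perf ~ cg", data=df).fit()        # X -> Y must be significant
step2 = smf.ols("sc ~ cg", data=df).fit()          # X -> M must be significant
step3 = smf.ols("perf ~ sc", data=df).fit()        # M -> Y must be significant
step4 = smf.ols("perf ~ cg + sc", data=df).fit()   # X and M -> Y jointly

print(step4.summary().tables[1])
```

Partial mediation, as reported in this study, corresponds to the case where the coefficient on corporate governance shrinks in step 4 yet both predictors remain significant; full mediation would render it nonsignificant.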
Results
Out of a sample size of 162 financial institutions, 108 responded with analyzable data, translating to a 67% response rate. Data were collected from the top executives of financial institutions, including CEOs, company secretaries and directors. We found the financial institutions to be leanly staffed, with 58% having below 250 permanent employees. Only about 10 percent had above 1000 permanent staff. Further, the data were found to be highly reliable, as demonstrated by Cronbach's alpha values of between 0.78 and 0.92. Results also confirmed data validity through factor analysis.
The results obtained from the stepwise regression analysis are presented in Tables 2(a) through 2(d). Intervention step 1 involved regressing organizational performance on corporate governance; the results are presented in Table 2(a). The results obtained in step 3 (R² = 0.261, p ≤ 0.05, F statistic = 30.341), as presented in Table 2(c), indicate that the relationship between strategic choices and organizational performance was statistically significant. In this model, strategic choices explained 26.1 percent of the variation in organizational performance. The p value of 0.000 and F statistic of 30.341 depict a robust and significant model explaining the relationship between the variables. Consequently, the analysis proceeded to step four (4).
In the final step (4), the dependent variable (organizational performance) was regressed on both the independent variable (corporate governance) and the intervening variable (strategic choices). The results presented in Table 2(d) demonstrate that there was a statistically significant intervention by strategic choices in the relationship between corporate governance and organizational performance. Further, the results indicate that both the independent variable (model 1) and the intervening variable (model 2) were statistically significant, implying partial mediation. Inspection of the model summary in Table 2(d) demonstrates a significant change in R square (ΔR² = 0.152), from 21.1 percent to 36.3 percent, revealing evidence of mediation. The beta coefficient of 0.401 (β = 0.401) implies that for every 1 percent change in strategic choices, there was a variation of 0.401 percent in organizational performance. The F statistic of 20.806 and p value of 0.000 (below the 0.05 threshold) affirm a strong, statistically significant model. Thus, the hypothesis was supported.
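As a quick arithmetic check on the reported change in explained variance (figures taken from the text above):

```python
# Change in R-squared between model 1 (corporate governance only) and
# model 2 (corporate governance plus strategic choices), as reported above.
r2_model1 = 0.211
r2_model2 = 0.363
print(f"Delta R^2 = {r2_model2 - r2_model1:.3f}")  # 0.152, i.e., 15.2 percentage points
```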
Additional tests for intervention were done using two panels of Pearson correlation matrices. The first panel involved testing the relationship between corporate governance and strategic choices. In this panel, corporate governance was the predictor variable while strategic choices were the outcome variable. In panel two, strategic choices became the predictor variable while organizational performance was the outcome variable. The results are presented in Tables 3(a) and 3(b), where both correlation matrices were positive. The panel 2 correlation matrix tested the relationship between strategic choices and organizational performance.
The results are presented in Table 3(b), revealing a significant positive correlation between the two variables. The Pearson correlation value of 0.511 indicates a strong relationship between strategic choices and organizational performance. The model was tested at the 0.05 level of significance. The results of the two correlation matrices were both significant, supporting the intervening effect of strategic choices in the relationship between corporate governance and organizational performance.
Discussion
Corporate governance plays a critical role in determining the strategic direction of corporations. Boards of directors are responsible for formulating and sanctioning firms' key strategies (Carpenter & Westphal, 2001). As such, scholars and practitioners have generally acknowledged the importance of adequate board control and independence in effectively executing strategic decision-making roles (Jensen & Zajac, 2004). The current study sought to examine the intervening effect of strategic choices on the relationship between corporate governance and organizational performance. This was achieved by analyzing the extent to which financial institutions in Kenya adopted various strategic choices. These included mergers, acquisitions, strategic alliances, diversifications, divestiture, innovation, technology adoption, and product or service development.
The results revealed a significant partial mediation by strategic choices of the relationship between corporate governance and organizational performance. This suggests that corporate governance influences firm performance with strategic choices acting as a conduit. This finding is consistent with other studies that have shown a strong relationship between corporate governance, strategic choices, and organizational performance. Heracleous (2001) argued that the board influences strategic choices and their implementation, consequently affecting firm performance; that study, however, cautioned against excessive regulation of corporate governance, which could make adoption and implementation too restrictive and impractical. On the contrary, Essen, Oosterhout and Carney (2011) found no statistically significant mediation by strategic choices of the relationship between governance and organizational performance.
The findings of the current study point towards two key issues. First, the statistically significant partial mediation of strategic choices on the relationship between corporate governance and organizational performance implies that the adoption of corporate governance by firms enhances their performance even without any form of strategic planning. This further underscores the importance of corporate governance in corporations, which extends beyond defining an organization's strategic direction to areas such as enhanced disclosure, risk mitigation, and resource acquisition. In addition, corporate governance brings about efficiency, business ethics, and corporate citizenship, all of which enhance value in organizations.
Secondly, the study highlights the importance of strategic choices in determining firm performance. It demonstrates that by formulating and adopting optimal strategic options, organizations can greatly enhance their wealth and value. The study accentuates the important role of the board of directors in formulating and sanctioning the strategic direction of corporations. The board of directors is viewed as the linkage between an organization's financiers and those who use the capital to create value. Therefore, the most effective way of achieving optimal performance is through formulating and sanctioning optimal strategic choices. This study offers valuable insights to policy makers, regulators, investors, and the management of financial institutions, highlighting that good governance, coupled with the adoption of key strategies, can lead to optimal performance in financial institutions.
Conclusion
The findings of this study revealed strategic choices to be a key conduit through which corporate governance, via the board of directors, influences performance in organizations. In the study, a significant variation in the performance of financial institutions was explained by both corporate governance and strategic choices. The study suggests that one of the key roles of successful boards is to set out the strategic direction of corporations; this should be entrenched in the appointment, codes of corporate governance, and performance evaluation of boards. Further, the findings suggest that corporations should evaluate their internal and external environments when formulating strategic choices. This means that one-size-fits-all strategies would not optimize performance in all corporations, even within the same industry.
The study provides important insights to industry players and investors. The analysis of the adoption of corporate governance and strategic choices in the financial sector is a key indicator of critical areas to improve and strengths to maintain. Further, the study points to areas that need greater emphasis by the various industries, and it informs future strategic choices that financial institutions can adopt, such as strategic alliances, mergers, and acquisitions with the most compatible players within the industry and sector. The study recommends replication in other sectors such as manufacturing, mining, and public benefit organizations (PBOs). Further research is recommended on the interaction of the variables over time using a longitudinal methodology, in particular to study the influence of the implementation of strategic choices on firm performance.
Table 1. Sample distribution and response rate

Table 2(a). Step 1 of intervening effect of strategic choices on corporate governance and organizational performance. Results indicate that 20.2 percent (R² = 0.202) of the variation in organizational performance is explained by corporate governance. The model was statistically significant and robust, with an F value of 20.003 and a p value of 0.000 (p < 0.05). Moreover, the beta coefficient predicted a corresponding variation in organizational performance for every 1 percent change in corporate governance, thus confirming a strong relationship in the first model.

Table 2(b). Step 2 of intervening effect of strategic choices on corporate governance and organizational performance. In step 2, corporate governance was regressed on strategic choices. Results indicate that 5.1 percent (R² = 0.051) of the variation in strategic choices was explained by corporate governance. The model was significant, with an F statistic of 4.455 and a p value of 0.038 (p < 0.05), depicting a robust model. In the third step, strategic choices were regressed on organizational performance; the results are presented in Table 2(c).

Table 2(c). Step 3 of intervening effect of strategic choices on corporate governance and organizational performance

Table 3(a). Results of correlation for panel 1. Table 3(a) presents the correlation matrix for corporate governance and strategic choices. Results indicate that there was a significant positive correlation between corporate governance and strategic choices. However, the correlation was weak, with a Pearson correlation value of 0.226. The significance level was 0.05 (p ≤ 0.05). | 2018-12-14T18:29:13.330Z | 2018-06-17T00:00:00.000 | {
"year": 2018,
"sha1": "2f72d9aebd07b00971e49c0a655180b7dc5aa0ae",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/ijbm/article/download/74520/42043",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2f72d9aebd07b00971e49c0a655180b7dc5aa0ae",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
229247700 | pes2o/s2orc | v3-fos-license | The development of sports industry in South Korea, 2009–2016
Abstract In this study, changes and trends in South Korea’s sports industry were examined to explore the effects of South Korea’s sports industry on the country’s economic growth. To this end, specific topics—such as the definition and characteristics of South Korea’s sports industry, the classification system for the country’s sports industry, and economic implications presented from the examination of the annual status of the country’s sports industry—were analyzed based on the 2009–2016 data that were deemed the output of a concrete analysis on the status of South Korea’s sports industry. As a result, the following conclusion was obtained: Over the review period, the number of business establishments, the number of employees, and domestic sales increased, but exports decreased in South Korea’s sports industry. This indicates the growth of domestic demand during the period. However, the future expansion of export sectors in the country’s sports industry is likely to play a pivotal role in growing its economy further.
PUBLIC INTEREST STATEMENT
In this study, changes and trends in South Korea's sports industry were examined to explore the effects of South Korea's sports industry on the country's economic growth. To this end, specific topics-such as the definition and characteristics of South Korea's sports industry, the classification system for the country's sports industry, and economic implications presented from the examination of the annual status of the country's sports industry-were analyzed based on the 2009-2016 data that were deemed the output of a concrete analysis on the status of South Korea's sports industry. As a result, the following conclusion was obtained: Over the review period, the number of business establishments, the number of employees, and domestic sales increased, but exports decreased in South Korea's sports industry. This indicates the growth of domestic demand during the period. However, the future expansion of export sectors in the country's sports industry is likely to play a pivotal role in growing its economy further.
Prologue
Germany, which was defeated in the Second World War, recovered from the ravages of war in a short period and achieved economic growth. This amazing event was described as "The Miracle on the Rhine" around the world.
Through the 1940s and 1950s, South Korea was an economically stricken country that had undergone Japanese colonial rule and suffered damage from the Korean War. Political and economic experts all over the world predicted that South Korea would not easily get out of its poverty.
However, South Koreans achieved remarkable economic growth that surprised the world by exhibiting their national character despite their difficult circumstances, and this achievement was praised as "The Miracle on the Han River" by being likened to "The Miracle on the Rhine." Of 85 countries that became independent after the Second World War, South Korea is the only country that has succeeded in industrialization and democratization in about 70 years. The country has also joined the ranks of advanced countries, having risen from the former status of an underdeveloped country and a developing country.
In addition, South Korea's gross domestic product (GDP), which indicates the economic status of a country and the living standards of its people, exceeded 1,400 billion USD (world's 11th) as of November 2016. The country's gross national product (GNP) per capita is approaching 30,000 USD, registering 27,340 USD (world's 28th) as of March 2016. 1 With this economic growth, South Korean society has recently taken an active interest in how the country's citizens can increase and effectively use their leisure time. Moreover, the South Korean government is seeking to prepare various national-level measures to help its citizens pursue and realize an enriched life.
In other words, citizens' quality of life is emerging as important in South Korea in accordance with the country's economic growth. Moreover, the country has entered the era of centenarians due to an increase in the average life expectancy, and it is also facing national problems, such as an aging population and low birthrates.
For this reason, social problems such as rapidly rising medical costs for the elderly and a shrinking workforce are emerging, and the government is actively carrying out national policies that encourage citizens to engage in sports activities, with a growing emphasis placed on the need for maintaining and improving public health.
In addition, South Korea has achieved a grand slam among the world's major sporting events by hosting the Summer and Winter Olympics, the World Cup, and the World Championship in Athletics, which are the world's top four international mega-sporting events.
The fact that South Korea hosted not only one but four of the world's largest mega-sporting events means that the country's status has been enhanced in the international community. At present, South Korea is the fifth country that has hosted all the world's top four mega-sporting events, following France, Germany, Italy, and Japan. 2

Based on a review of South Korea's past sports policies, the country initially carried out sports policies with a focus on training elite players, but South Koreans' perception of sports has changed since the successful hosting of various international mega-sporting events.
Accordingly, an amendment of the National Sports Promotion Act, which proposed the integration of the Korean Olympic Committee (the country's representative sports organization, centered on elite sports) and the Korea Council of Sport for All (focused on sports for all citizens), was drafted to establish an advanced sports system and approved in the National Assembly plenary session on 3 March 2015. Consequently, the two organizations were combined and reborn as the Korean Sport & Olympic Committee in March 2016.
As this shows, the successful hosting of various international mega-sporting events enabled South Korea to continue the scientific and systematic training of elite players. Moreover, changes in the perception of sports and sporting activities among South Koreans are generating significant interest in both sports for participation and sports for spectatorship (such as professional sports) in the present social setting, which is characterized by increased free time following economic growth and the government's active support of sports.
Ultimately, it is no exaggeration to say that sports-related industries are positioning themselves as a collective industrial medium that enhances national competitiveness and spurs national growth.
The sports industry creates added value through sports-related commodities and services. Specifically, it creates added value by producing and distributing tangible and intangible commodities and services, such as the goods, equipment, facilities, services, games, events, and lessons that are required for sporting activities. Moreover, in relation to the National Sports Promotion Act, the sports industry may encompass manufacturing, construction, and service businesses that support individuals to make good use of their leisure time through physical activities such as sports, games, and outdoor exercises, as well as sports information provision and sports event businesses that produce and distribute commodities and services to offer sports as passive entertainment. 3,4

In modern society, this sports industry has essential functions and implications for citizens' ability to pursue a healthy and high-quality cultural life. Moreover, because the promotion of the sports industry is becoming a basic common interest of South Korean society, the sports industry has a public interest-oriented characteristic. The sports industry has a social mission of contributing to the lives of citizens by supporting the development of sports in various directions, and fulfilling this task requires ties between schools, industries, and government agencies. At the same time, the sports industry itself should always carry out its projects from the viewpoint of consumers and enable sports to fully perform their various functions. In addition, sports share globally standardized technologies and rules, have an extensive market base as a worldwide common culture, and are emerging as important business players along with the rapid growth of information and communication technology. 5
In this study, South Korea was selected as the research subject because it is the only country that has succeeded in industrialization and democratization in only about 70 years among the countries that became independent after the Second World War, and because it is one of the few countries to rise to the ranks of an advanced country from the former status of an underdeveloped country and a developing country. In view of South Korea's remarkable economic growth in such a short period, which surprised the entire world, this study aimed to analyze changes and trends in South Korea's sports industry to explore the effects of South Korea's sports industry on the country's economic growth. To this end, the following detailed research tasks were established: first, it is aimed to examine the definition and characteristics of South Korea's sports industry; second, to examine the classification system for South Korea's sports industry; third, to analyze the economic aspect of South Korea's sports industry, which includes the number of business establishments, the number of employees, turnover, domestic sales, and exports that were presented by surveys of the annual status of the country's sports industry.
This study examines the actual status of changes and development trends in the sports industry to investigate how the Korean sports industry influences national economic growth. To this end, the study analyzed data extracted from the annual "Survey Report on Actual Condition of Sports Industry" issued by the Ministry of Culture, Sports and Tourism of Korea from 2016 to 2019, classified by year. Concretely, the study examined the definition and characteristics of the Korean sports industry, looked into its classification system, and analyzed the economic aspects of the number of businesses, employees, sales, domestic consumption, and exports suggested by the yearly surveys of the actual status of the Korean sports industry.
Understanding of sports industry in South Korea
According to Article 2 (2) of South Korea's Sports Industry Promotion Act, the sports industry creates added value through sports-related commodities and services. This sports industry is largely divided into the "sports goods business," which produces and consumes various items related to sports events; the "sports facility business," which involves the construction, lease, and management of stadiums; and the "sports service business," which covers professional sports, racing (bicycle, motorboat, and horse racing), sports marketing (agents, etc.), and sports-related information, education, game, and tourism businesses. 6

In addition, the sports industry has various characteristics as a collection of businesses that belong to different industrial categories in each field. 7 For example, the sports industry is characterized as an "industry that has a complex industrial classification structure," a "space and location-centered industry," an "industry that typically consumes time," an "industry that deals with final consumer goods and services," and an "industry that inspires people and enhances their health." 8

The sports industry is considered an important industry as an independent policy domain for each nation's economy due to its characteristics, such as being "a high value-added industry," having "infinite growth potential," holding "values as a media tool," and making a "contribution to public welfare." Concretely, the sports industry produces high value-added products centered on star players through international mega-sporting events and professional sports: by combining the productivity of star players' abilities with the unique values of sports, it offers informational value preferred by consumers and generates added value through sponsorship and player endorsement. When linked with existing industries such as manufacturing, services, and distribution, it can become a complex industry and grow more than other content industries; that is, it is an industry with endless growth potential that can create new markets through fusion and convergence with other industries, making it effective in creating added value and employment. In addition, sporting events are broadcast as important content across various kinds of media; many sporting events, arenas, and sports themselves hold value as media and are therefore used as important marketing tools by businesses. Furthermore, through participation in sports, the industry contributes to enhancing people's quality of life more than other industries. For these reasons, it is an important industry as an independent policy area for the national economy.
Classification of South Korea's sports industry
In January 2000, the Special Classification for the Sports Industry V1.0 was established in consideration of the characteristics of South Korea's sports industry, and it consisted of three large categories, 12 middle categories, and 23 small categories.
In June 2008, the Special Classification for the Sports Industry V2.0 was established as a modified version of the Special Classification for the Sports Industry V1.0 to meet the requirement for designation among Nationally Approved Statistics. This renewed version comprised four large categories, 15 middle categories, and 46 small categories. Here, "sports media" (sports broadcasting businesses and sports newspaper businesses) was added as a small category to reflect the current conditions of the sports industry.
In December 2012, the Special Classification for the Sports Industry V2.0 was amended. As a result, the Special Classification for the Sports Industry V3.0 has been applied since the 2012 survey on the actual status of South Korea's sports industry. This latest version consists of three large categories, seven middle categories, 20 small categories, and 65 smallest categories. Here, the large categories include sports facility, sports goods, and sports service businesses. The sports facility business is again divided into sports facility operation business and sports facility construction business. The sports goods business is divided into exercise and sporting event goods-manufacturing business and exercise and sporting event goods-distribution and lease business. The sports service business is divided into sporting event service business, sports information service business, sports education institution business, and other sports service business. 9
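Since the V3.0 classification is a strict hierarchy, it can be sketched as a nested mapping. The Python snippet below encodes only the categories explicitly named in the text above; it is an illustration of the structure, not the official classification (note that the text cites seven middle categories while eight sub-divisions are enumerated, and the 20 small and 65 smallest categories are omitted).

```python
# Illustrative encoding of the Special Classification for the Sports
# Industry V3.0: three large categories mapped to the sub-divisions
# named in the text. The text cites seven middle categories while eight
# sub-divisions are enumerated, so this mapping is a sketch of the
# hierarchy rather than an official encoding.
SPORTS_INDUSTRY_V3 = {
    "sports facility business": [
        "sports facility operation business",
        "sports facility construction business",
    ],
    "sports goods business": [
        "exercise and sporting event goods-manufacturing business",
        "exercise and sporting event goods-distribution and lease business",
    ],
    "sports service business": [
        "sporting event service business",
        "sports information service business",
        "sports education institution business",
        "other sports service business",
    ],
}

assert len(SPORTS_INDUSTRY_V3) == 3  # three large categories, per the text
for large, subs in SPORTS_INDUSTRY_V3.items():
    print(f"{large}: {len(subs)} sub-divisions")
```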
Examination of the status of South Korea's sports industry between 2009 and 2016
Regarding the surveys on the status of South Korea's sports industry, the data for up to the year 2008 were not officially surveyed and produced upon designation as Nationally Approved Statistics; rather, they were produced to raise the practicality of policy decision-making in the Ministry of Culture, Sports and Tourism, the office of primary responsibility for the sports industry. Accordingly, the data for the year 2009, which were surveyed in 2010, were the first produced based on the standards for Nationally Approved Statistics (No. 113021). For this reason, the present study analyzed the data derived from the annual surveys on the status of South Korea's sports industry, specifically the 2009-2016 data surveyed between 2010 and 2017, because these data were deemed the output of a concrete analysis of the actual conditions of South Korea's sports industry.

The first survey conducted under these standards, covering the status of the industry as of 2009 (surveyed in 2010), showed the following results: in the sports industry, there were 62,184 business establishments, and the total turnover was 33.456 trillion won. Of the turnover, domestic sales and exports amounted to 32.575 trillion won and 874 billion won, respectively. In addition, the total number of employees was 210,000, and operating profits were 4.994 trillion won. 10

The analysis of the status of South Korea's sports industry as of 2010 (surveyed in 2011) based on the old classification (the Special Classification V2.0) presented the following results: in the sports industry, there were 69,315 business establishments, and the total turnover was 34.482 trillion won. Of the turnover, domestic sales and exports accounted for 32.627 trillion won and 1.855 trillion won, respectively. In addition, the total number of employees was 234,000, and operating profits were 3.930 trillion won. 11

The analysis of the status of South Korea's sports industry as of 2011 (surveyed in 2012) based on the old classification (the Special Classification V2.0) showed the following results: in the sports industry, there were 69,027 business establishments, and the total turnover was 36.513 trillion won. Of the turnover, domestic sales and exports accounted for 35.234 trillion won and 1.279 trillion won, respectively. In addition, the total number of employees was 236,000. Of the turnover of 36.513 trillion won, operating expenses and operating profits accounted for 33.195 trillion won and 2.958 trillion won, respectively. 12

The analysis of the status of South Korea's sports industry as of 2012 (surveyed in 2013) based on the new classification (the Special Classification V3.0), in which 20 new business types were added, showed the following results: in the sports industry, there were 84,246 business establishments, and the total turnover was 57.479 trillion won. Of the turnover, domestic sales and exports amounted to 56.309 trillion won and 1.170 trillion won, respectively. In addition, the total number of employees was 342,000. Of the turnover of 57.479 trillion won, operating expenses were 56.309 trillion won, and operating profits were 4.203 trillion won at a profit ratio of 7.3%. 13

The analysis of the status of South Korea's sports industry as of 2013 (surveyed in 2014) based on the new classification (the Special Classification V3.0) produced the following results: in the sports industry, there were 9,493 business establishments, and the total turnover was 61.853 trillion won. Of the turnover, domestic sales and exports registered 59.978 trillion won and 1.875 trillion won, respectively. In addition, the total number of employees was 355,000. Of the turnover of 61.853 trillion won, operating expenses accounted for 54.471 trillion won and operating profits for 7.382 trillion won, at a profit ratio of 11.9%. 14

The analysis of the status of South Korea's sports industry as of 2014 (surveyed in 2015) based on the new classification (the Special Classification V3.0) showed the following results: in the sports industry, there were 92,293 business establishments, and the total turnover was 63.149 trillion won. Of the turnover, domestic sales and exports amounted to 61.654 trillion won and 1.494 trillion won, respectively. In addition, the total number of employees was 373,000. Of the turnover of 63.149 trillion won, operating expenses accounted for 57.304 trillion won, and operating profits were 5.845 trillion won at a profit ratio of 9.3%. 15

The analysis of the status of South Korea's sports industry as of 2015 (surveyed in 2016) based on the new classification (the Special Classification V3.0) showed the following results: in the sports industry, there were 93,350 business establishments, and the total turnover was 65.145 trillion won. Of the turnover, domestic sales and exports registered 64.135 trillion won and 1.314 trillion won, respectively. In addition, the total number of employees was 383,000. Of the turnover of 65.145 trillion won, operating expenses amounted to 59.233 trillion won, and operating profits amounted to 5.912 trillion won at a profit ratio of 9.1%. 16

The analysis of the status of South Korea's sports industry as of 2016 (surveyed in 2017) based on the new classification (the Special Classification V3.0) exhibited the following results: in the sports industry, there were 95,387 business establishments, and the total turnover was 68.432 trillion won. Of the turnover, domestic sales and exports amounted to 67.142 trillion won and 1.290 trillion won, respectively. In addition, the total number of employees was 398,000. Of the turnover of 68.432 trillion won, operating expenses were 62.218 trillion won, and operating profits were 6.214 trillion won at a profit ratio of 9.1%. 17
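The profit ratios quoted above are simply operating profits divided by total turnover. As a worked check, the sketch below recomputes the profit ratio and the export share of turnover for the years reported under the V3.0 classification, using only the figures quoted in the text (in trillion won).

```python
# Year: (turnover, domestic sales, exports, operating profits), trillion won,
# as reported in the survey figures quoted above (V3.0 classification years).
years = {
    2012: (57.479, 56.309, 1.170, 4.203),
    2013: (61.853, 59.978, 1.875, 7.382),
    2014: (63.149, 61.654, 1.494, 5.845),
    2015: (65.145, 64.135, 1.314, 5.912),
    2016: (68.432, 67.142, 1.290, 6.214),
}

for year, (turnover, domestic, exports, profit) in sorted(years.items()):
    profit_ratio = 100 * profit / turnover   # e.g. 2012: 4.203/57.479 = 7.3%
    export_share = 100 * exports / turnover  # exports as a share of turnover
    print(f"{year}: profit ratio {profit_ratio:.1f}%, "
          f"export share {export_share:.1f}%")
```

Running this reproduces the quoted profit ratios (7.3%, 11.9%, 9.3%, 9.1%, 9.1%) and shows the export share of turnover hovering around 2-3 percent, consistent with the domestic-demand-driven picture drawn in the conclusion.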
Conclusion
The purpose of this study was to examine changes and trends in South Korea's sports industry to explore the effects of South Korea's sports industry on the country's economic growth. As a result, the following conclusion is presented.
First, Article 2 (2) of South Korea's Sports Industry Promotion Act states that the sports industry creates added value through sports-related commodities and services. This sports industry is largely divided into "sports goods business," "sports facility business," and "sports service business." Moreover, the sports industry is an "industry that has a complex industrial classification structure," a "space and location-oriented industry," an "industry that typically consumes time," an "industry that deals with final consumer goods and services," and an "industry that inspires people and enhances their health." Second, in January 2000, the Special Classification for the Sports Industry V1.0 was established. This consisted of three large categories, 12 middle categories, and 23 small categories. Based on this initial version, in June 2008, the Special Classification for the Sports Industry V2.0 was introduced to meet the requirements for designation among Nationally Approved Statistics. This version comprised four large categories, 15 middle categories, and 46 small categories. In December 2012, the Special Classification for the Sports Industry V3.0 was established with three large categories, seven middle categories, 20 small categories, and 65 smallest categories. This latest version has been applied to date.
Third, according to the data derived from the surveys of the status of South Korea's sports industry, which were conducted based on the Nationally Approved Statistics, most areas, including the number of business establishments, the number of employees, turnover, and domestic sales, showed continued growth over the review period, whereas exports declined.

In conclusion, South Korea's sports industry is witnessing growth in such areas as the number of business establishments, turnover, and domestic sales, but it is seeing declines in exports, which indicates a much larger proportion of domestic demand over overseas demand. In other words, South Korea's sports market has been booming based on international mega-sporting events that are hosted in the country, domestic professional sports, and sports for all. However, most individuals around the world seek to enhance their health through exercise, and the technologies and rules for sports are globally standardized and identical. Therefore, South Korea's sports industry should expand its export sectors by considering that the sports industry has most individuals as its consumer base in the global market and benefits from connectivity with various other industries. In doing so, South Korea's sports industry is likely to play a pivotal role in accelerating the country's economic growth. | 2020-11-19T09:17:39.177Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "0c17efafe9127d9578b0053cb76d23b24ff3f17e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311886.2020.1840799",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "32889c2dfef9f7a6780f958e1a65843dc78ca8e5",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |