id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
139112454 | pes2o/s2orc | v3-fos-license | Surface texture and hardness of dental alloys processed by alternative technologies
Technological developments have led to the implementation of novel digitalized manufacturing methods for the production of metallic structures in prosthetic dentistry. These technologies can be classified as based on subtractive manufacturing, assisted by computer-aided design/computer-aided manufacturing (CAD/CAM) systems, or on additive manufacturing (AM), such as the recently developed laser-based methods. The aim of the study was to assess the surface texture and hardness of metallic structures for dental restorations obtained by alternative technologies: conventional casting (CST), computerized milling (MIL), and the AM powder bed fusion methods selective laser melting (SLM) and selective laser sintering (SLS). For the experimental analyses, metallic specimens made of Co-Cr dental alloys were prepared as indicated by the manufacturers. The specimen structure at the macro level was observed by an optical microscope and micro-hardness was measured on all substrates. Metallic frameworks obtained by AM are characterized by increased hardness, depending also on the surface processing. The formation of microstructural defects can be better controlled and avoided during the SLM and MIL processes. Application of powder bed fusion techniques, like SLS and SLM, is currently a challenge in dental alloys processing.
Introduction
Alternative manufacturing methods of dental restorations using base metal dental alloys are of ongoing interest, having a growing impact in the field of dental technology. Technological developments have led to the implementation of novel digitalized manufacturing methods for the production of metallic structures in prosthetic dentistry [1][2][3][4][5]. The potential for fabricating metallic dental components with a complex geometry directly from digital data using automated equipment and appropriate materials is very significant. These technologies can be classified as based on subtractive manufacturing, such as the milling of premanufactured materials assisted by computer-aided design/computer-aided manufacturing (CAD/CAM) systems [2,[5][6][7], or on additive manufacturing (AM), such as the recently developed laser-based methods. Although CAD/CAM has long been directly associated with the milling procedure in the dental literature, it should be mentioned that AM procedures are also classified as CAD/CAM technologies. Understanding the structural and microstructural defects present in metallic frameworks processed through AM techniques, as well as post-processing protocols such as heat treatment to reduce these defects, is important in order to transform the microstructures into those acceptable for practice [8].
Selective laser melting (SLM) and selective laser sintering (SLS) belong to powder bed fusion, in which thermal energy selectively fuses regions of a powder bed [9]. Direct metal printing methods can generally be categorized as laser-based, electron beam-based, arc-based, and ultrasonic welding-based. Laser-based metal AM methods are classified into selective laser sintering (SLS), selective laser melting (SLM), and laser metal deposition (LMD) [10]. SLS is a powder bed fusion technique in which a scanning laser is used to consolidate sequentially deposited layers of a metal powder [11]. Different types of lasers, including CO2, disk, Nd:YAG, and fiber lasers, are used. The principal consolidation mechanism is liquid-phase sintering involving partial melting and coalescence of the powder. SLM is a second powder bed fusion technique involving the consolidation of metal powders using powerful lasers. While the equipment setup and configuration and the processing methodology are similar in SLS and SLM, in SLM the powder is completely or nearly completely melted to produce a fully dense or nearly fully dense structure. SLM thus produces metal articles with a higher level of microstructural homogeneity compared with SLS.
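The degree of melting that separates SLS from SLM is commonly summarized by the volumetric energy density delivered to the powder bed. The study does not report its process parameters, so the values in this Python sketch are purely illustrative assumptions:

```python
# Volumetric energy density commonly used to compare powder bed fusion builds:
# E = P / (v * h * t)  [J/mm^3], where P = laser power (W), v = scan speed (mm/s),
# h = hatch spacing (mm), t = layer thickness (mm).

def energy_density(power_w: float, speed_mm_s: float,
                   hatch_mm: float, layer_mm: float) -> float:
    """Return volumetric energy density in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Illustrative (assumed) parameters for a fiber-laser Co-Cr build:
print(energy_density(power_w=100.0, speed_mm_s=600.0,
                     hatch_mm=0.09, layer_mm=0.03))  # ~61.7 J/mm^3
```

Higher energy densities push the process toward the full melting characteristic of SLM; lower ones leave the partially melted, sintered structure typical of SLS.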
Objective
The aim of the study was to assess the surface texture and hardness of metallic structures for dental restorations obtained by alternative technologies: conventional casting, computerized milling, and the AM powder bed fusion methods SLS and SLM.
Materials and methods
For the experimental analyses, metallic specimens made of Co-Cr dental alloys were prepared using traditional casting (CST), computerized milling (MIL), selective laser sintering (SLS) and selective laser melting (SLM), as indicated by the manufacturers. Round plates, 20 mm in diameter and 2 mm thick, were fabricated using the different technologies. The laser sintering and melting processes were set according to the recommendations for dental prostheses and metal processing, respectively, using a fibre laser for powder bed fusion. Stress-relief firing was conducted under argon by heating up to 450°C within 60 minutes and holding for 45 minutes. The specimens produced by the CAD/CAM technologies were not additionally prepared. Oxide-firing (at 950-980°C) was performed, and the metal surface was sandblasted again with fresh aluminium oxide (approx. 150 μm). All substrates were ground down to 2000-grit SiC paper and polished with universal polishing paste (Ivoclar Vivadent AG, Schaan, Principality of Liechtenstein). They were then cleaned in alcohol, rinsed in distilled water and dried with absorbent paper towels.
The specimen structure at the macro level was observed with a Leica DM500 optical microscope (Leica Microsystems, Wetzlar, Germany) in reflection mode. Microstructural defects were measured using an image processing program, ImageJ. Micro-hardness was measured on all substrates using a Mitutoyo SJ 201 device (Mitutoyo Corporation, Kanagawa, Japan). The hardness value was obtained as the mean of at least 5 indents for samples that were polished or sandblasted with fresh aluminium oxide of approx. 75 μm and approx. 200 μm.
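The paper does not specify the hardness scale; assuming Vickers-type micro-indentation, which is typical for dental Co-Cr alloys, the mean over repeated indents could be computed as in this sketch (the load and diagonal values are hypothetical):

```python
import statistics

def vickers_hardness(load_kgf: float, diag_mm: float) -> float:
    """Standard Vickers relation: HV = 1.8544 * F / d^2 (F in kgf, d in mm)."""
    return 1.8544 * load_kgf / diag_mm ** 2

# Hypothetical mean indent diagonals (mm) from five indents on one substrate,
# measured at an assumed 300 gf (0.3 kgf) load:
diagonals = [0.0412, 0.0405, 0.0420, 0.0398, 0.0415]
hv_values = [vickers_hardness(0.3, d) for d in diagonals]
print(f"mean HV = {statistics.mean(hv_values):.0f} "
      f"+/- {statistics.stdev(hv_values):.0f}")
```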
Results
Microstructural defects were observed by optical microscopy on well-polished surfaces of Co-Cr-Mo alloy specimens prepared by SLS, SLM, MIL, and CST (Fig. 1).
Regarding microstructure, the best results were obtained for the SLM and MIL samples. Isolated or interconnected voids with sizes up to 45 µm, originating from hard agglomerates or insufficient packing of powder granules prior to sintering, were observed inside the specimens prepared by SLS. The surface of the specimens fabricated by CST contains grooved holes resulting from incomplete melting, with dimensions between 20 and 350 µm. In the MIL samples, traces probably caused during the end-mill process were identified, together with rare small voids (up to 20 µm in size). The specimens obtained by SLM revealed the best microstructure, with the fewest voids, with dimensions up to 25 µm.
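Void dimensions such as those reported above are typically obtained by thresholding the micrographs and measuring the segmented regions. A minimal sketch of such a measurement with scikit-image follows; the pixel calibration and file name are assumptions, not values from the study:

```python
import numpy as np
from skimage import filters, measure

def void_diameters(gray_image: np.ndarray, um_per_px: float):
    """Segment dark voids in a polished-surface micrograph and return
    their equivalent circular diameters in micrometres."""
    # Voids appear darker than the alloy matrix; Otsu picks the cut-off.
    threshold = filters.threshold_otsu(gray_image)
    voids = gray_image < threshold
    labels = measure.label(voids)
    return [p.equivalent_diameter * um_per_px
            for p in measure.regionprops(labels)]

# Usage with an assumed calibration of 0.5 µm per pixel
# (load the image first, e.g. with skimage.io.imread):
# diameters = void_diameters(io.imread("slm_specimen.tif", as_gray=True), 0.5)
# print(max(diameters))  # largest void, e.g. ~25 µm for an SLM sample
```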
The hardness values recorded for specimens sandblasted with aluminium oxide of 75 μm were the highest for all samples, followed by those sandblasted with aluminium oxide of 200 μm, and lastly by the polished ones (Table 1). Thus, preparing the frameworks for ceramic veneering by sandblasting also increases their surface hardness.
Discussions
Given the large differences between the manufacturing processes, casting, which uses the complete melting and overheating of casting materials, the milling of a prefabricated metal block, and AM of a fine metallic powder, large differences in microstructural characteristics and hardness values can be anticipated [12]. AM allows more complex geometries to be obtained than subtractive methods do. The AM methods have only recently been introduced in dental technology, and studies addressing their clinical implications are necessary. By identifying differences in processing methods and material characteristics, future studies should aim at reducing their disadvantages [13,14]. Surface texture metrology can be used as a means of gaining insight into the physical phenomena taking place during the manufacturing process and into process variables, through examination of the surface features generated by the process. It thus becomes a powerful exploration tool, increasing knowledge of the process and ultimately allowing the creation of improved manufacturing processes [15][16][17]. Surface texture comprises the geometrical irregularities present at a surface; it does not include those geometrical irregularities contributing to the form or shape of the surface [18]. The diversity of AM processes as well as the large number of key process parameters that change from build to build makes the use of traditional means of process qualification less than satisfactory [19]. These new challenges require a more profound understanding of the AM technology and process, and will ultimately require the development of AM surface texture good practice guidance, specifications and standards [20].
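For illustration, the amplitude parameters most often used in such surface texture assessments, Ra (arithmetic mean deviation) and Rq (root-mean-square deviation), can be computed from a measured profile as in this sketch; the profile heights below are assumed, not measured data from the study:

```python
import math

def roughness(profile_um):
    """Ra and Rq amplitude parameters from profile heights (µm),
    taken relative to the mean line of the profile."""
    mean = sum(profile_um) / len(profile_um)
    deviations = [z - mean for z in profile_um]
    ra = sum(abs(d) for d in deviations) / len(deviations)
    rq = math.sqrt(sum(d * d for d in deviations) / len(deviations))
    return ra, rq

# Hypothetical profile heights (µm) sampled along a sandblasted surface:
profile = [1.2, -0.8, 0.5, -1.5, 2.1, -0.3, 0.9, -1.1]
ra, rq = roughness(profile)
print(f"Ra = {ra:.2f} µm, Rq = {rq:.2f} µm")
```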
Conclusions

1. The potential for fabricating metallic dental components directly from digital data represents an opportunity.

2. Metallic frameworks obtained by AM are characterized by increased hardness, depending also on the surface processing.

3. The formation of microstructural defects can be better controlled and avoided during the SLM and MIL processes.

4. Application of powder bed fusion techniques, like SLS and SLM, is currently a challenge in dental alloys processing. | 2019-04-30T13:03:11.561Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "542253053bed49d42bfa9b9f50a1c782f2ad874d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1742-6596/885/1/012004",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9ad55f8b352ec750cb653705b44cf250d1dbdee0",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
7049768 | pes2o/s2orc | v3-fos-license | ‘Forget me (not)?’ – Remembering Forget-Items Versus Un-Cued Items in Directed Forgetting
Humans need to be able to selectively control their memories. This capability is often investigated in directed forgetting (DF) paradigms. In item-method DF, individual items are presented and each is followed by either a forget- or remember-instruction. On a surprise test of all items, memory is then worse for to-be-forgotten items (TBF) compared to to-be-remembered items (TBR). This is thought to result mainly from selective rehearsal of TBR, although inhibitory mechanisms also appear to be recruited by this paradigm. Here, we investigate whether the mnemonic consequences of a forget instruction differ from the ones of incidental encoding, where items are presented without a specific memory instruction. Four experiments were conducted where un-cued items (UI) were interspersed and recognition performance was compared between TBR, TBF, and UI stimuli. Accuracy was encouraged via a performance-dependent monetary bonus. Experiments varied the number of items and their presentation speed and used either letter-cues or symbolic cues. Across all experiments, including perceptually fully counterbalanced variants, memory accuracy for TBF was reduced compared to TBR, but better than for UI. Moreover, participants made consistently fewer false alarms and used a very conservative response criterion when responding to TBF stimuli. Thus, the F-cue results in active processing and reduces false alarm rate, but this does not impair recognition memory beyond an un-cued baseline condition, where only incidental encoding occurs. Theoretical implications of these findings are discussed.
INTRODUCTION
Humans need to manage their cognitive resources in order to control their behavior. We are therefore able to ignore irrelevant stimuli and withhold pre-potent automatic responses to remain focused on a current task, although this is effortful and there are clear limits to human capacities for cognitive control (e.g., Botvinick et al., 2001). In episodic memory, as in other cognitive domains, there is constant need for selection to keep memory up-to-date with current demands. Both everyday life and scientific research demonstrate our ability to selectively encode and retrieve memory contents (Levy and Anderson, 2009). In school, as well as in legal or more mundane contexts, we might be presented with information that we are supposed to remember as important for the future. Still, every now and then this information might turn out to be unimportant, irrelevant, or even false after presentation and we may then be told to forget it. Scientifically, variants of the directed forgetting (DF) task provide a means to study selection and updating processes in memory (Golding and MacLeod, 1998). In list-method DF, participants are shown pairs of lists and after the first list of such a pair they are instructed to either remember all items on the previous list for future testing or to forget this list. Then, in both cases a second list is presented for further learning. At the end of the experiment, unexpectedly for the participant, items from both lists are tested. The between-list forget instruction typically results in poorer memory for list 1 items and better memory for list 2 items, whereas the reverse is true following the remember instruction. Because this pattern is only apparent in free recall, but not in recognition testing, retrieval inhibition has been a dominant account for the list-method DF effect (for review, see Anderson and Hanslmayr, 2014).
In item-method DF, individual items are immediately followed by an instruction. To-be-remembered items (TBR) are followed by a 'remember' (R) cue while to-be-forgotten items (TBF) are followed by a 'forget' (F) cue. Later, memory is tested for all items, regardless of their initial instruction. This typically leads to a DF effect, better memory for TBR than for TBF. The effect is apparent both in recall and recognition (Basden and Basden, 1996) and has been shown for a variety of materials (Lehman et al., 2001;Hourihan and Taylor, 2006;Hauswald and Kissler, 2008;Hourihan et al., 2009;Quinlan et al., 2010;Nowicka et al., 2011;Zwissler et al., 2011).
Although originally thought to reflect repression in a Freudian sense (Weiner, 1968), item-method DF has been subsequently mainly attributed to selective rehearsal (Basden and Basden, 1998), assuming that TBR are rehearsed more than TBF: upon presentation, each item is held in a standby-like mode and its processing is postponed until the instruction appears. An R instruction then leads to further rehearsal, while an F instruction is supposed to terminate any further processing, leading to passive decay of the item's representation. As a consequence, only TBR are selectively encoded and therefore better remembered than TBF.
Recent evidence suggests that participants either consciously or unconsciously make use of quite elaborate strategies to facilitate forgetting. For instance, item-method DF has been shown to interact with the loudness illusion in memory (Foster and Sahakyan, 2012): this illusion refers to the observation that when items that vary in loudness are presented for learning, participants have the subjective impression of remembering loud items better than quiet ones, although objectively this is not the case (Rhodes and Castel, 2009). However, specifically in a situation where loud and quiet items are embedded in an item-method DF task, loud items really are recalled better than quiet ones. The same is not true for various control conditions, including ones that differently emphasize, via value assignment, the importance of remembering loud items, suggesting a specificity of the effect to a situation that requires intentional forgetting. Selectively rehearsing loud items, given an adequate opportunity, may be used as either an explicit or an implicit strategy to forget.
Somewhat reminiscent of the original repression account, recent behavioral evidence also demonstrates that active inhibitory processing is triggered by the forget cue in this paradigm (e.g., Fawcett and Taylor, 2010). Zacks and Hasher (1994) first proposed mechanisms of attentional inhibition to operate in item-method DF and a wealth of behavioral data now indicates that the instruction to forget in item-method DF amplifies effects of inhibition of return (IOR; Taylor, 2005; Taylor and Fawcett, 2011, 2012; Thompson and Taylor, 2015). Although originally thought to affect only motoric IOR magnitude (Taylor, 2005; Taylor and Fawcett, 2011, 2012), greater slowing of return to target location following the F-cue than following the R-cue has recently been demonstrated in both motoric and visual IOR (Thompson and Taylor, 2015). The greater IOR effect following the F-cue has also been shown to be due to genuine IOR magnification, rather than to facilitation of reorientation to the other side (Taylor and Fawcett, 2012). Together, these data are consistent with the interpretation that inhibition of spatial attention is increased by the forget instruction. This has led to the speculation that TBF items' memory representations along a spatial saliency map are rendered less accessible than those of the TBR items (Thompson and Taylor, 2015). However, interactions between DF patterns and attention mechanisms seem to be paradigm-specific: whereas there is evidence that attention withdraws from forget items and reduces the processing of other information that is presented in temporal or spatial proximity (Fawcett and Taylor, 2008; Taylor and Fawcett, 2012; Lee and Hsu, 2013), very recent data demonstrate that distractibility is not generally increased following a forget instruction. For instance, reaction times to interspersed attentional orienting probes are not affected by a preceding F-cue (Taylor and Hamm, 2015).
Therefore, inhibitory mechanisms seem to be invoked by the forget instruction, but effects are paradigm-specific rather than domain-general.
Neuroscientific studies indicate more frontal and less parietal activation in response to the F-than to the R-instruction (Paz-Caballero et al., 2004;Wylie et al., 2008;van Hooff and Ford, 2011;Rizio and Dennis, 2013) as well as a positive correlation between frontal brain activity and magnitude of the DF-effect (Hauswald et al., 2011) indirectly supporting the view that some form of active inhibition is at work in item-method DF.
Whereas inhibition of spatial attention has been convincingly demonstrated in item-method DF, the mnemonic consequences have been less clearly specified. For instance, in the clinical literature the Freudian suppression metaphor is still discussed (e.g., Cottencin et al., 2006). It is clear that TBF is associated with poorer memory than TBR and that the F-instruction induces active, in the case of spatial attention also inhibitory, processing. Still, the relationship between IOR reaction time and the memory DF effect is uncertain. Fawcett and Taylor (2010) found that for successfully forgotten TBF, IOR was bigger than it was for remembered TBF, suggesting a link between the processes involved. However, this association is not reported in Taylor and Fawcett (2011) or Thompson et al. (2014).
Thus, extant evidence demonstrates that people are indeed able to selectively encode some material while ignoring, perhaps even actively inhibiting, other material presented for the same period of time. However, a different line of evidence indicates that for instance thought suppression is often ineffective and can result in paradoxical effects (Wegner et al., 1987). Regarding DF, it has been shown that prolonging cue presentation results in better memory for TBF and TBR items alike (Lee et al., 2007;Bancroft et al., 2013). This contradicts the assumption that TBF items decay passively and is also difficult to reconcile with the idea of effective memory inhibition. As a consequence the question arises, how TBF and TBR compare to a condition where items are encoded only incidentally because they are not followed by a specific memory instruction. If prolonging cue presentation improves rather than impairs memory of TBF items, suggesting that active, but not inhibitory processing is induced by TBF, how will no cue at all or an unspecific cue compare? Evidence from the Think-No Think paradigm underscores the possibility of successful intentional memory suppression of paired-associates, even below a baseline level (Anderson and Green, 2001;Anderson et al., 2004). Similarly, automatic memory inhibition of some items below a given baseline has been shown for the retrieval-induced forgetting paradigm (Anderson et al., 1994).
A wealth of research on thought control mechanisms has demonstrated ironic processes when people try to suppress their thoughts (Wegner et al., 1987; Wegner, 1994, 1997; Wenzlaff and Wegner, 2000), although there are important differences between thought suppression and item-method DF paradigms. For instance, in ironic thought control the effect disappears when alternative thoughts are instructed. Still, by analogy, in item-method DF, any cue might initially re-orient participants to the preceding stimulus. If TBF cues were perceived as 'suppress' commands, the success and behavioral consequences of such suppression attempts might be uncertain (Wegner et al., 1987; Wegner, 1994, 1997; Wenzlaff and Wegner, 2000), although the presence of other items to which processing resources could be redirected may counteract any ironic processes.
Here, we addressed the status of forget items in item-method DF by introducing un-cued items (UI) into the paradigm. We tested whether memory for TBF is equally bad (selective rehearsal) or perhaps even worse (memory inhibition) than if no instruction were given and items were only incidentally encoded. The presence of UI may allow participants to redirect their processing resources to these items, further reducing TBF encoding. If, however, F-cues initiate re-alerting (or ironic monitoring as found in thought suppression research), TBF could still be actively processed and highlighted to a certain extent. In that case UI would be remembered worse than both TBR and also TBF.
As in several previous studies we use a recognition memory design with complex pictorial stimuli and similar paired distracters (Quinlan et al., 2010;Hauswald et al., 2011;Nowicka et al., 2011;Zwissler et al., 2011). This facilitates a separate analysis of recognition accuracy and response bias. We have been using picture stimuli in an effort to obtain more languageand culture-independent results and in order to be able to work with linguistically heterogeneous clinical populations (e.g., Zwissler et al., 2012;Baumann et al., 2013). So far, the basic mechanism of selective rehearsal has been shown to apply also to pictorial stimuli (Hourihan et al., 2009), but differences may exist precluding generalization of results to studies using word stimuli.
To increase motivation to show full effort on the final test, participants received a performance-dependent monetary bonus encouraging performance accuracy (see also MacLeod, 1999).
We expect differential processing of TBR, TBF, and UI items to be reflected in memory performance. Selective rehearsal should improve recognition accuracy for TBR over both TBF and UI. We test, whether memory accuracy differs between incidental encoding of UI and intentional forgetting as instructed for TBF. The different instructions also could affect participants' readiness to respond to an item given a similar amount of mnemonic information. This would be reflected in distinct response biases: Because strengthening an item's memory representation leads to a more conservative response bias (Hirshman, 1995), according to selective rehearsal, TBR items should be responded to most conservatively. If TBF cues prompt a distinct, potentially inhibitory effect on response criterion setting, response bias for TBF items should be most conservative.
To investigate the effect of implicit encoding in item method DF, the fate of TBF items is compared with both TBR and UI items. Four experiments were conducted: Experiment 1 presents a basic comparison of recognition memory for the three item types, Experiment 2 uses a longer item list, and Experiment 3 replaces the instructions by three symbolic cues, addressing the possibility that physical cue characteristics affect performance. Experiment 4 tests the effects of symbolic cues with a different item set.
EXPERIMENT 1
Method

Participants

Thirty-one students at the University of Konstanz, Germany (24 women; mean age = 21.67, SE = 0.44; range: 18-28 years) participated in return for course credit or 3 € basic compensation. They could earn an additional performance-dependent bonus. In all experiments, participants gave written informed consent and the research was conducted in accordance with the Declaration of Helsinki. The experiment was approved by the Ethics Committee of the University of Konstanz.
Stimuli
Seventy-five target-distracter pairs of images were used for memory testing. Pairs were thematically unique within the set and differed only in perceptual detail (see Figure 1 for examples), thus allowing for a separate analysis of hits and false alarms in response to the differently cued items. The images showed people, landscapes, animals, or social scenes. One member of each pair was assigned to each of two sets (A and B), image-set assignment was counterbalanced, and image-cue assignment was randomized. During learning, all set A images were presented in random order. During recognition, all images from both sets were shown at random, set B images serving as related lures.
Procedure: Learning Phase
Participants were told that they would be presented with pictures, some of which would be relevant to successful task performance and others not. Relevant pictures would be followed by either a 'remember it' (R) cue or by a 'forget it' (F) cue. Irrelevant pictures were not further instructed ('un-cued' ∼ U). The exact wording of the instruction was: "You will see a series of pictures. Some will be followed by a 'MMM' cue. Then it is important to remember the preceding picture for later testing. Some will be followed by a 'VVV' cue. Then it is important to forget the preceding picture. Some pictures will not be followed by a cue." Up front, there was no instruction on how to behave in response to items that were not followed by a cue. If participants asked what the purpose of the un-cued pictures was, they were told that these served to ensure stable time lags between the cued pictures in a subsequent imaging study. Then, all pictures from one set were shown in sequence, each for 2 s. Immediately after each picture either the F instruction symbolized by 'VVV' ('vergessen' ∼ 'forget'), the R instruction signaled by 'MMM' ('merken' ∼ 'remember') or a blank screen appeared for 2 s. Then, a fixation cross was presented for 1 s, after which the next picture was shown.
After learning, a break of 10 min took place during which participants were asked to perform a speeded attention endurance test (d2; Brickenkamp, 1994) to ensure that they did not further rehearse the material. This paper-pencil test requires participants to identify and mark target symbols embedded among similar distracter symbols.
Procedure: Recognition Phase
Before the recognition test, participants were told that they now should try to accurately recognize ALL initially presented images, regardless of their previous instruction and that they could earn 0.2 € for each correctly recognized picture, but would lose the same amount for false alarms, perfect performance resulting in a maximum of 15 € (75 × 0.2 €). Thereby, recognition accuracy was reinforced and guessing was discouraged.
During the test, a random sequence of the 75 old and 75 similar new pictures (thematically paired distracters) was administered. Each picture was shown for 300 ms and participants were asked to decide by button press whether they had seen it before. Presentation time for recognition was kept short to encourage spontaneous responses, but reaction time was not limited. After each response, a fixation cross was presented for 700 ms before the next picture appeared. Experimental material was presented on a laptop computer (Dell Latitude D830) using Presentation software (Neurobehavioral Systems Inc., Albany, NY, USA). Upon completion of the experiment, participants were paid and debriefed about the purpose of the study.
Statistical Analyses
Statistical calculations were performed with SPSS 20.0 (SPSS Inc., www.spss.com). Data were analyzed using repeated-measures ANOVAs with the within-factor cue (Forget, Remember, Un-cued). ANOVAs were calculated for hits and false alarms, as well as for discrimination accuracy and response bias. Post hoc comparisons were calculated with an alpha level of 0.05 using Fisher's Least Significant Differences test. If the sphericity assumption was violated, degrees of freedom were corrected according to Greenhouse-Geisser.

Results

Hits and False Alarms

Table 1 presents mean hit and false alarm rates in the 'remember,' 'forget,' and 'un-cued' conditions for the first experiment. A significant main effect was found on hits [F(2,60) = 20.78; p < 0.001; η²p = 0.41]. Post hoc comparisons showed that hit rate was highest for TBR, being significantly higher than for TBF (p < 0.01) and UI (p < 0.001). Hit rate for TBF was also higher than for UI (p < 0.01). Further, there was also a significant main effect for false alarms [F(2,60) = 7.91; p = 0.001; η²p = 0.21]. Post hoc comparisons showed that the false alarm rate was considerably higher for UI lures than for TBF lures (p < 0.01) and TBR lures (p < 0.01), the latter two not differing (p = 0.82).
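F-statistics like those just reported are the output of the pipeline described under Statistical Analyses. A minimal sketch of that pipeline in Python, using the pingouin package rather than SPSS (all column names and values are hypothetical; pairwise_tests is the function name in recent pingouin versions):

```python
import pandas as pd
import pingouin as pg

# Long-format data assumed: one row per participant x cue condition
# (values here are made up for illustration).
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "cue":     ["R", "F", "U"] * 3,
    "hits":    [0.85, 0.72, 0.60, 0.80, 0.70, 0.58, 0.88, 0.75, 0.62],
})

# Repeated-measures ANOVA; pingouin applies the Greenhouse-Geisser
# correction to the degrees of freedom when sphericity is violated.
aov = pg.rm_anova(data=df, dv="hits", within="cue",
                  subject="subject", correction=True)

# Uncorrected pairwise t-tests correspond to Fisher's LSD post hoc tests.
posthoc = pg.pairwise_tests(data=df, dv="hits", within="cue",
                            subject="subject", padjust="none")
print(aov)
print(posthoc)
```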
Discrimination Accuracy and Response Bias
Following Snodgrass and Corwin's (1988) two-high-threshold model, discrimination accuracy (Pr = hit rate − false alarm rate) and response bias [Br = false alarm rate/(1 − Pr)] were computed from the data, simultaneously taking into account hits and false alarms and resulting in separate measures of recognition accuracy and response bias in DF.¹ ANOVA confirmed significant differences in the discrimination of differently cued stimuli [F(2,60) = 28.69; p < 0.001; η²p = 0.49] and revealed that TBR were recognized significantly more accurately than both TBF (p < 0.01) and UI (p < 0.001). Crucially, Pr was significantly higher for TBF than for UI (p < 0.001). There was no significant instruction-dependent difference in response bias.

¹ In old/new recognition memory experiments, hits and false alarms need to be considered simultaneously to yield measures of memory accuracy on the one hand and response bias on the other, as participants' recognition data will be determined both by the actual memory strength for an item and their readiness to respond given a certain amount of mnemonic information. Several such models have been developed (see Snodgrass and Corwin, 1988). Here, we chose the two-high-threshold model. However, calculation of the d′ and C measures reveals equivalent results. For a discussion of the relative merits of different models of recognition memory, see, e.g., Bröder and Schütz (2009).
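The two measures defined above are straightforward to compute; a minimal Python sketch following the formulas, with made-up hit and false alarm rates:

```python
def two_high_threshold(hit_rate: float, fa_rate: float):
    """Snodgrass & Corwin (1988): discrimination accuracy Pr and bias Br.
    Br below 0.5 indicates a conservative criterion, above 0.5 a liberal one."""
    pr = hit_rate - fa_rate
    br = fa_rate / (1.0 - pr)
    return pr, br

# Hypothetical rates for the three cue conditions:
for cue, (h, fa) in {"TBR": (0.85, 0.20),
                     "TBF": (0.75, 0.12),
                     "UI":  (0.62, 0.25)}.items():
    pr, br = two_high_threshold(h, fa)
    print(f"{cue}: Pr = {pr:.2f}, Br = {br:.2f}")
```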
Discussion Experiment 1
Experiment 1 indicates that in item-method DF, presenting stimuli for incidental encoding with no specific instruction results in poorer memory accuracy than both a remember and a forget instruction. This is inconsistent with the notion of successful memory inhibition in item-method DF. As expected and in line with selective rehearsal, TBR were recognized more accurately than TBF or UI. Moreover, TBF were also recognized more accurately than UI, implying the possibility of ironic effects (Wegner et al., 1987; Wegner, 1994, 1997; Wenzlaff and Wegner, 2000). Results indicate that while selective rehearsal may account for TBR memory superiority, TBF seem to trigger active, non-inhibitory, memory processing that exceeds that of completely un-cued, incidentally encoded items. To further investigate this, a second experiment was conducted using more pictures and reducing picture presentation duration, thus increasing task difficulty. This addresses the possibility that, in spite of a monetary incentive to the contrary, participants somehow remembered list A items in association with their instruction and were guided by this on the recognition test.
EXPERIMENT 2

Method
The experimental methods mirrored the ones used in Experiment 1 with the following exceptions:
Stimuli
The stimulus set was expanded to 90 target-distracter pairs of similar pictures.
Procedure: Learning Phase
Presentation duration was reduced to one second.

Results

Hits and False Alarms

False alarms for TBF were significantly lower than for TBR (p < 0.01) and UI (p < 0.001). False alarm rate was also significantly lower for TBR than for UI (p < 0.05).
Discrimination Accuracy and Response Bias
Repeated measures ANOVA confirmed significant differences in the discrimination accuracy Pr of the differently cued stimuli [F(1.75, 69.82) = 33.71; p < 0.001; η²p = 0.46]. TBR were recognized significantly more accurately than both TBF and UI (both p < 0.001; see Figure 3A). Crucially, Pr was significantly higher for TBF than for UI (p < 0.001). There were also significant differences for the response bias Br [F(1.41, 56.47) = 18.05; p < 0.001; η²p = 0.31]. TBF response bias was significantly more conservative than TBR and UI (both p < 0.001, see Figure 3B). TBR showed a more liberal response bias than UI (p < 0.05).
Discussion Experiment 2
As in Experiment 1, higher recognition accuracy was found for TBR compared to TBF or UI items. Again, TBF were recognized more accurately than UI, overall confirming that selective rehearsal can account for the TBR advantage and that TBF induces active, albeit for recognition memory seemingly non-inhibitory, processing. As a new finding, in this longer version the instruction affected response bias: TBF were responded to more conservatively than TBR and UI, TBR being more liberal than UI. Also, it has to be noted that unlike in Experiment 1, the effect is now driven more by instruction-induced changes in false alarms than in hits, requiring further scrutiny. Possibly, because presentation time during learning was reduced, participants relied more on gist representation, bringing up the overall false alarm rate and increasing its contribution to the effects. Interestingly, false alarm rates were lower for TBF than for UI across both experiments, and in Experiment 2 also lower for TBF than for TBR. However, a possible limitation of both experiments is that in the UI condition only a blank screen was used, resulting in a perceptual difference from the other two conditions. Explicit processing cues may automatically induce reprocessing of the previously presented picture for both cued item types, as participants may need to refresh the cue-item association to initiate further active processing, thus causing superior memory for perceptually cued items in comparison with items for which no cue appears and that, after the initial rehearsal phase, are allowed to passively decay. On the other hand, the absence of a cue may also result in UI items being on average rehearsed a little longer until participants realize that there will be no cue. If so, the latter possibility should reduce differences between R, F, and U, whereas the former should enlarge them. To further examine the pattern of results and ensure that variation in perceptual input had no impact on the current results, a third experiment was performed using symbolic cues for all three conditions.
EXPERIMENT 3
Method

Experiment 3 resembled Experiment 2 with the following exceptions:
Participants
Twenty-seven students (14 women; mean age = 24.23, SE = 0.55; range: 19-32 years) from the University of Tübingen, Germany, participated. The experiment was approved by the Ethics Committee of the University of Tübingen.
Procedure: Learning Phase
The letter-cues were replaced by symbolic cues. A blue circle, a purple square and yellow triangle were randomly assigned to represent R, F, or U. Symbol-cue assignment was counterbalanced across participants. The basic procedure was identical to Experiment 2.
Results
Twenty-six data-sets were available for analysis as data from one participant were lost.
Hits and False Alarms
A significant main effect was observed for hits [F(2,50) = 22.51; p < 0.001; η²p = 0.47]. TBR hit rate was significantly higher than TBF and UI hit rates (p < 0.001, respectively), whereas TBF and UI did not differ (p = 0.80). A significant main effect for false alarms was also found [F(2,50) = 16.23; p < 0.001; η²p = 0.39]. Post hoc comparisons showed that the false alarm rate was significantly higher for UI than for TBR (p < 0.01) and TBF (p < 0.001), while TBR tended to be higher than TBF (p = 0.06). Mean hit and false-alarm rates are given in Table 3.
Discussion Experiment 3
Experiment 3 replicates findings from Experiments 1 and 2 regarding response accuracy. Furthermore, by introducing a third (symbolic) cue in addition to F and R, a potential weakness of the two previous experiments was addressed. Therefore, the pattern cannot be explained by differences in the physical features of the cues, or by the fact that F and R cues induced reprocessing, whereas UI did not. It rather has to be assumed that a negative instruction leads to a more accurate representation of the respective stimulus compared to no instruction at all, although both conditions are perceptually identical. Regarding materials, Experiment 3 is directly comparable with Experiment 2, and in both the accuracy effect is carried more by false alarms than by hit rate. In both these experiments, fewest false alarms are made for TBF items and effects on recognition bias are observed, with R stimuli being classified almost without bias, U stimuli slightly more conservatively, and F stimuli most conservatively. This difference from Experiment 1 may result from increasing task difficulty and participants' greater reliance on gist representation. Experiments 2 and 3 used more stimuli and a faster presentation rate, resulting in overall lower hit and higher false alarm rates. The response bias results depart from the commonly observed pattern that strengthening items leads to a more conservative response bias (e.g., Hirshman, 1995; Stretch and Wixted, 1998). The initial forget instruction may induce a subjective underrepresentation of the frequency of forget items on the test list (Strack and Förster, 1995; Hirshman and Henzler, 1998), reducing participants' readiness to respond to these items. If so, such a subjective underrepresentation appears not to be due to variations in perceptual input between Experiments 2 and 3, as the pattern was very similar and, if anything, one might expect items associated with less perceptual input (UI in Experiment 2) to be more prone to subjective underrepresentation. There might be a small perceptual effect, since in Experiment 2 the response bias for TBR is significantly higher than for UI and this difference disappears in Experiment 3. However, in terms of response bias, the comparison with TBF items is the same in both experiments. Still, in Experiments 1 and 2 the forget and remember conditions differed perceptually from the un-cued condition. Although so far this perceptual variation does not seem to impact the pattern of results in a major way, a fourth experiment was conducted to replicate the symbolic cue effect. In this fourth experiment some of the previous picture pairs were replaced with new pairs because several participants had indicated that they found some of the target-distracter pairs too similar and easily confusable (see Figure 1). If so, this would have added additional noise to the data, assuming that these pairs had been randomly distributed across the conditions as implemented by the random picture-condition assignment. However, if the distribution of these pairs had been uneven across conditions, this could even have affected the pattern of results.
EXPERIMENT 4
Experiment 4 recorded both behavioral and EEG data. EEG data will be fully reported elsewhere. Behaviorally, Experiment 4 resembled Experiment 3 with the following exceptions:
Method
Participants
Stimuli
Fifteen of the 90 image pairs were replaced (see Figure 5 for examples of replacement pairs).
Discussion Experiment 4
Regarding recognition accuracy, Experiment 4 replicates findings from Experiments 1-3. As in Experiment 1, this effect was mostly carried by hits. The data suggest that difficulty may play a role in whether the consistent accuracy effect is driven by differences in hits or false alarms, possibly reflecting the extent to which participants relied on gist representation. Although the list was longer in Experiment 4 than in Experiment 1, some of the most difficult item pairs were eliminated from Experiment 4, perhaps balancing out effects of list length. In Experiment 4, as in Experiments 2 and 3, the response bias was most lenient for TBR; however, unlike in Experiments 2 and 3, the response bias for UI was as conservative as for TBF. Across all experiments, false alarm rate was always lowest for TBF. No instruction-dependent difference in response bias was found in Experiment 1. Across the experiments, it appears that an instruction-dependent difference in recognition accuracy, with TBR being remembered more accurately than TBF and, crucially, TBF more accurately than UI, is a robust phenomenon in item-method DF, whereas effects on the recognition bias are more variable. To formally assess similarities and differences between the four experiments and underscore the statistical stability of findings, in a final step an across-experiment comparison was conducted for hits and false alarms as well as for discrimination accuracy Pr and recognition bias Br.
BETWEEN STUDIES COMPARISON
A mixed ANOVA with the between-factor Experiment and the within-factors Cue (TBR, TBF, UI) and Response Type (hits and false alarms), and two additional separate ANOVAs for discrimination accuracy Pr and recognition bias Br, again with the between-factor Experiment and the within-factor Cue (TBR, TBF, UI), were calculated for the data from all 122 participants.
GENERAL DISCUSSION
This series of experiments compared recognition memory for items encoded under remember and forget instructions with recognition memory for incidentally encoded items for which no explicit instruction was given. Across four experiments, discrimination accuracy was best for TBR and worst when no specific instruction was given, leaving items to be implicitly encoded. Relative to totally un-cued items, TBF were remembered more accurately, instead of equally well or worse than UI, and this held even when the conditions were fully perceptually matched. Better recognition of TBR than of TBF items is in line with the view that the item-method DF effect might be primarily due to 'selective rehearsal' of TBR. Still, selective rehearsal might not be fully able to account for why TBF were recognized better than UI. The forget instruction has been shown to induce inhibition in spatial attention using the IOR paradigm (Taylor, 2005; Taylor and Fawcett, 2011, 2012). However, it does not seem to impair recognition accuracy in the same way as active suppression has been shown to do in the Think-No Think paradigm (e.g., Anderson et al., 2004) or as automatic inhibition does in retrieval-induced forgetting (Anderson et al., 1994). An active memory suppression view of DF is sometimes also adopted in the clinical literature (e.g., Cottencin et al., 2006). Under such a memory suppression account, memory for TBF should be even worse than for incidentally encoded baseline items. Such a pattern might also have occurred had participants diverted capacities to UI items to distract themselves from TBF, as has been shown for item-method DF and the illusionary loudness effect (see Foster and Sahakyan, 2012). The baseline condition involved both mere presentation of UI (Experiments 1 and 2) and additional presentation of perceptually matched symbolic cues (Experiments 3 and 4). Upon testing, UI were consistently recognized less accurately than TBF. Whereas the experiments differed in the extent to which this was due to differences in hits or false alarms, hit rate was never higher for UI than for TBF and false alarm rate was never higher for TBF than for UI. In its traditional version, selective rehearsal can explain more accurate recognition of TBR compared to TBF and UI, but not more accurate recognition of TBF than UI. Conversely, active inhibition effects on recognition memory might predict worse recognition of TBF compared to both TBR and UI. Evidently, in the present experiments active processing of TBF, even with the intention to forget, did not reduce memory to the same extent as no processing instruction at all. Indeed, extending cue-processing time in this paradigm has been shown to improve rather than impair memory for TBF (Bancroft et al., 2013). Thus, whatever active processes occur in item-method DF, these do not necessarily induce successful memory inhibition compared to incidental encoding, although they do result in inhibitory phenomena in other domains (Fawcett and Taylor, 2008; Taylor and Fawcett, 2012; Lee and Hsu, 2013). Accordingly, frontal brain activations previously observed in this design (Hauswald et al., 2011; Nowicka et al., 2011; van Hooff and Ford, 2011; Rizio and Dennis, 2013) may result from either non-inhibitory processes within the frontal lobes, such as conflict monitoring (Silvetti et al., 2014) or attention orienting (Chun and Turk-Browne, 2007), or perhaps from unsuccessful inhibition attempts.
The latter view would be consistent with the operation of ironic monitoring (Wegner et al., 1987; Wegner, 1994, 1997; Wenzlaff and Wegner, 2000) as well as with findings from cognitive linguistics demonstrating the extra cognitive load of having to process negative statements (Kaup, 2001; Ferguson et al., 2008; Lüdtke et al., 2008). Overall, the forget cue may induce automatic reprocessing of the associated item, causing the present effect. The reality of the findings is underscored by the fact that participants were offered a monetary incentive for accurate performance.
Of course, there are ambiguities associated with leaving participants to their own devices in an experiment and presenting material that is not associated with any specific instruction. Behavioral data cannot fully answer the question of what participants actually do when receiving a UI versus a TBF instruction, although incidental encoding situations are quite natural and have been amply used in the literature (e.g., Craik and Lockhart, 1972; Hockley, 2008; Hockley et al., 2015). By some, UI might even be considered as stricter F cues. However, an explicit ignore instruction (as in variants of list-method DF) was never given here; UI were just not commented on. Also, participants could have been confused about the difference between TBF and UI items. We asked participants whether there were problems with the instruction and, at least on the self-report level, there was no indication of confusion. Moreover, data on effects of left prefrontal tDCS stimulation acquired in the context of Experiment 3 (Zwissler et al., 2014) show that for the R and the F conditions cathodal and anodal left prefrontal tDCS stimulation had antagonistic effects on false alarm rate. However, neither anodal nor cathodal tDCS affected the UI condition compared to the sham condition, whose data are reported here. This underscores that both F-cues and R-cues induce, albeit qualitatively different, active processes in the prefrontal cortex that are not activated when the perceptual symbol is not associated with a specific memory instruction, as for UI. Similarly, EEG data acquired in the context of Experiment 4 indicate qualitative processing differences between all three conditions. In particular, a previously identified frontal positivity, at the time suggested to indicate active inhibition (Hauswald et al., 2011), was larger for F than for UI and R items, the UI positivity being also larger than the frontal R positivity. By contrast, a parietal positivity indexing selective rehearsal was larger for R than for both F and UI items, F being again larger than UI. Both the tDCS and the EEG data are in line with the notion that the F-cue induces active but, regarding recognition memory, non-inhibitory processing which is qualitatively different from the type of processing induced by the R cue. Crucially, F cues result in less effective forgetting than a cue that does not explicitly specify a memory instruction.
It cannot be completely ruled out that participants did not follow the given instructions but instead rehearsed items independently of instruction across an entire set, especially in case of semantically interrelated stimuli (i.e., cars, humans, animals). However, due to the size and the thematic diversity of the image sets, a systematic distortion seems rather unlikely. Finally, in Experiments 3 and 4, we chose to resolve the physical difference between behaviorally relevant cues (R, F) and the irrelevant one (U) by assigning a symbol to each of them. This might raise the question whether a symbol carrying no meaning still qualifies as a non-existent cue. Results do not suggest a major difference between the first and the last experiment. Experimenters can never be quite sure what participants really do, even when they receive an explicit instruction, and the problem might be exacerbated when no instruction is given. On the other hand, free viewing and uninstructed processing is a very natural situation as much of the material that is encountered in everyday life is not associated with explicit instructions and sometimes arguably not even with an intrinsic goal. Therefore having a certain proportion of stimuli that is not associated with an explicit instruction would appear quite natural in many situations. Indeed, the data pattern suggests that across participants there was a systematic response to UI as well as to TBR or TBF. Free viewing has been used in various areas of perception (Junghöfer et al., 2001;Kissler et al., 2007) and memory (Potter and Levy, 1969;Potter, 1976) research. Present data incorporating a free viewing condition indicate that even under fully perceptually matched conditions discrimination accuracy for items not associated with a specific instruction is poorer than for items explicitly instructed to be forgotten and that only these are truly ignored and decay passively.
There were also effects on the response bias: It is notable that these depart from what would be expected under a typical TBR strengthening account. Typically, strengthening items leads to a more conservative response bias (Stretch and Wixted, 1998). Thus, TBR items should have been responded to more conservatively than the other item types, UI showing the most liberal response bias. Yet, TBF were by-and-large responded to most conservatively, which could be indicative of a separate effect of the forget instruction on how participants set their response criterion. Indeed, across experiments false alarm rate was always lowest for TBF. Effects on response bias are generally more apparent with higher false alarm rates and lower hit rates, as in Experiments 2 and 3, where TBF were responded to less readily than TBR and UI. That is, in spite of monetary incentive to the contrary, participants required more mnemonic evidence to make an 'old' decision to TBF than to the other item types. The initial forget instruction may induce a subjective underrepresentation of the frequency of forget items on the test list which would have reduced participants' willingness to endorse these items as old (Strack and Förster, 1995;Hirshman and Henzler, 1998). Perhaps this reflects one aspect of the inhibitory processing found to be induced by the forget instruction in other contexts. Such a bias may be beneficial in legal settings, resulting in a reduced tendency to misidentify look-alikes of an exonerated former suspect from a line-up. Unfortunately, for a mere bystander (UI in our context), misidentification tendencies might be higher at least under some circumstances. Further research will specify how different memory instructions interact with other experimental parameters in item-method DF.
Even where it occurs, a conservative response bias apparently cannot compensate for the initial alerting process. As a consequence, both TBR and TBF are remembered more accurately than UI. Several findings (e.g., Lee et al., 2007; Fawcett and Taylor, 2008, 2010) suggest that TBF are not instantaneously toned down during learning. Rather, even TBF benefit from longer post-cue intervals. Presumably, when a stimulus is presented, it is being held online to begin with. After onset of a 'meaningful' cue (i.e., R and F), both these stimulus types receive special attention. Only for UI does it seem that processing ceases after stimulus offset. This happens even when UI are followed by a perceptually equivalent symbolic cue to which no cognitive significance is assigned. The effect is seen in each individual experiment and underscored by the cross-experiment analysis, where it is seen for both hit rate and discrimination accuracy with no cross-experiment interaction. Still, visually the above experiments differ in the extent to which this effect is carried by hits versus false alarms. Future research will further specify the dynamics of the present phenomenon; tentatively, however, list length and overall target discrimination levels could be important factors.
The present results may appear surprising in view of experimental evidence of successful representational inhibition of target items compared to baseline in the Think-No Think paradigm (Anderson and Green, 2001;Anderson et al., 2004). In the Think-No Think paradigm and in item-method DF as in cognitive control in general, prefrontal structures have been shown to be involved (Mitchell et al., 2007;Wylie et al., 2008;Giuliano and Wicha, 2010). In DF, prefrontal cerebral activity during cue presentation differentiates intentionally forgotten from incidentally forgotten items (Wylie et al., 2008). Further research will resolve whether the F-instruction's paradoxical effect is solely due to a short-lived alerting elicited by the F cue. Incorporating un-cued baseline stimuli in neuroscientific studies of DF will aid interpretation of previously observed effects.
Of note, the present studies all used pictorial material and did not test free recall. An important extension of this work will concern the question whether similar results can be found with verbal material and in free recall. So far, data suggest that in item-method DF, pictorial and verbal materials behave in similar ways (Hourihan et al., 2009; Quinlan et al., 2010), but firm conclusions await further empirical tests. Also, the use of thematically matched pairs may have been problematic. As in some previous research (Zwissler et al., 2011, 2012), this approach had been used to facilitate scoring of hits and false alarms per item category. However, participants may have noticed that the material was organized in pairs and this may have biased their responses in unforeseeable ways. The most obvious possibility is that participants, on presentation of the second picture from such a thematic pair, realized that they had gotten the first one wrong because they had made a gist-based decision. While this may have helped them on the second decision, they could not undo the first response, and therefore the procedure enhanced noise in the data. Most likely, such noise would be distributed equally across all conditions. Still, there is the possibility that such effects interacted with instruction in hitherto unknown ways. For the current methods and materials, the current study raises the possibility that item-method DF could involve ironic processes. Initially, two operations may be required: one to remember TBR, which is a common task for students; the second to forget TBF, which is comparably unusual. As Wegner (1994) suggests, under mental load resources are drawn from the operating process and an ironic monitoring process takes over, interfering with thought control, or presently, with successful forgetting.
The present research demonstrates that item-method DF occurs only in comparison to a 'remember' instruction and not compared to giving no instruction at all. Thus, regarding recognition accuracy, the F-cue induces active, but not inhibitory processing. These results are in line with other findings demonstrating that humans have trouble processing negative information and have practical implications for educational and legal settings. | 2016-05-04T20:20:58.661Z | 2015-11-16T00:00:00.000 | {
"year": 2015,
"sha1": "4148c41b01a50ec9a3e7af504b511eb4e17e0f17",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01741/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "181a0955c237fc2396969ad90ac8b56ea9a1ac6c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
270140016 | pes2o/s2orc | v3-fos-license | Metaphorical Architecture of Tanean Lanjhang as a programming concept for Indonesia Islamic Science Park (IISP) – Madura, towards Sustainable Urban Tourism
The current trend in Indonesia involves extensive urban development across various sectors, including tourism. The province of East Java has plans to create an area that not only entertains but also educates and imparts cultural and Islamic values. This concept is known as the Indonesia Islamic Science Park (IISP), which is intended to be built on Madura Island. However, the Madurese community holds strong reservations towards modern infrastructure due to their cultural traditions. Therefore, the design concept for the Indonesia Islamic Science Park serves as an alternative approach to address these concerns. This research employs an intangible metaphorical approach that maps elements of the traditional Tanean Lanjhang house arrangement pattern into zones within the Indonesia Islamic Science Park. Data collection related to Tanean Lanjhang includes literature review and field surveys. The transformation process utilizes source-to-target mapping frameworks. This mapping is carried out to select source objects that can be explored for information to be directed towards the target domain (architecture). This data or information will be reduced into design criteria to be used in formulating the spatial program and zoning for the Indonesia Islamic Science Park area. Ultimately, a spatial program concept like this can become the hallmark of the Indonesia Islamic Science Park, offering a unique tourist destination that is not only reminiscent of the local culture but is also readily embraced by all members of the community.
Introduction
The Indonesia Islamic Science Park (IISP) represents a visionary initiative spearheaded by the government of East Java Province, envisioning something exceptional in this region. IISP is an ambitious project that converges elements of halal tourism, education, an Islamic center, and recreation. It aims to become more than just an ordinary tourist destination; it will be an international attraction promoting profound insights into the cultural and religious values of Islam while providing entertainment and educational experiences [1].
The location of IISP on Madura Island is a highly relevant choice, considering the unique culture and character of its inhabitants. Madura Island has long been known for its strong culture and deep-rooted traditions. This also includes occasional resistance among the Madurese population towards modern infrastructure development. This resistance can be understood as concern regarding its potential negative impacts on the integrity of their cultural and religious values. A clear example of this resistance was observed during the construction of the Suramadu Bridge connecting Madura Island to Surabaya [2]. The Madurese community, in general, tends to ensure that development and modernization do not erode the values and cultural identity they hold dear.
On the other hand, what sets Madura Island apart is its inhabitants' high level of commitment and devotion to cultural values and the Islamic religion in their daily lives. This culture is an integral part of Madurese life, making it a crucial element in planning IISP. The close interconnection between cultural values and the Islamic religion reflects their strong commitment to their identity and beliefs. Therefore, IISP must respond wisely to these dynamics to build a harmonious relationship with the Madurese community.
To ensure the success of IISP, it is imperative that this project is embraced by both the Madurese community and the wider public. The benefits are clear, with the potential to boost the economy of Madura Island, enhance understanding of Islamic values, and create new employment opportunities. However, to achieve optimal acceptance within the Madurese community, IISP must incorporate a concept deeply rooted in Madurese culture. This will ensure that the project not only preserves but also promotes the cultural heritage held dear by the local community [3]. Furthermore, in terms of architecture, the use of the "Tanean Lanjhang" theme would be a wise approach, honoring local traditions while offering opportunities for sustainable and positive economic development for Madura Island and East Java as a whole. In this way, IISP will become a place that celebrates culture and offers positive benefits to the Madurese community and all visitors. This research also offers a new way of creating spatial planning by utilizing traditional architecture from the Madurese community.
Tanean Lanjhang
Indonesia is a country that is rich in cultural diversity, especially in terms of traditional houses. Traditional houses in Indonesia reflect the ethnic diversity, traditions and geography of the country. Each region has a unique architectural style and construction materials that reflect the character of its people. Traditional houses are not only physical buildings, but also carry values, symbolism and functions in the lives of local people. One example is the Madura traditional house, namely Tanean Lanjhang. Tanean Lanjhang is a traditional Madura house with a unique layout concept. The Tanean Lanjhang order or hierarchy begins with the main house (Tongguh), which faces south, followed by the kobhung (family prayer room), Dapor and Kandhang [4]. Tanean itself means "yard" and Lanjhang means "long". Therefore, Tanean Lanjhang can be interpreted as a yard that extends from west to east, making this yard a shared area that can be used communally by the local community [5]. As time goes by, many of these traditional houses have undergone renovation or even reconstruction, but several areas still maintain their ancestors' cultural heritage.
In this research, the author discusses the layout pattern of the Tanean Lanjhang house. The Tanean Lanjhang arrangement pattern consists of certain parts that characterize it and differentiate it from other traditional houses. The first is the main house or "Tongghuh", the area where the parents live; if the family has a daughter who marries, a house is built immediately to the east of the main house, and this placement continues for each subsequent daughter, extending eastward in order of seniority starting from the eldest daughter of the family. Another part of Tanean Lanjhang is the "kobhung" or "langgar", the most important element and the axis of the Tanean Lanjhang traditional house; it stands at the westernmost end facing east, closing the row of buildings in the Tanean Lanjhang area, and can be used by the boys of the family to gather and rest. The remaining two parts are the "Dhapor" (kitchen) and the "Kandhang" (stable), located on the south side facing north. When the number of daughters is large and the available land is limited, the composition of Tanean Lanjhang can change: the order still runs from the west end and finishes in the east, while the Dhapor and Kandhang can be shifted to the area behind the houses or moved directly to the west or east of the kobhung, because their original sites are then needed for residences. This arrangement is summarized in Figure 1. The uniqueness of Tanean Lanjhang informs the concept of the Indonesia Islamic Science Park (IISP), combining the culture of Madura Island with the desires of the market and investors to provide recreational and educational public space; it is hoped that the local community and the general public will receive it well.
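Read operationally, the placement rules just described form a simple sequential scheme: the kobhung anchors the western end, the parents' house follows, and each married daughter's house is appended eastward by seniority, with the dhapor and kandhang on the southern side. A minimal Python sketch of this ordering logic (the function name and data layout are illustrative, not part of the source):

```python
def tanean_lanjhang_layout(married_daughters):
    """Return the ordering of a Tanean Lanjhang compound.

    `married_daughters` lists daughters' names eldest-first, since each
    marriage adds a house immediately east of the previous one.
    """
    # Kobhung (prayer room) anchors the westernmost end, facing east;
    # the parents' main house (tongghuh) comes next.
    west_to_east = ["kobhung", "tongghuh (parents)"]
    west_to_east += [f"house of {name}" for name in married_daughters]

    # Dhapor (kitchen) and kandhang (stable) sit on the south side,
    # facing north across the shared elongated yard (tanean).
    south_side = ["dhapor", "kandhang"]
    return {"west_to_east": west_to_east, "south_side": south_side}

print(tanean_lanjhang_layout(["eldest daughter", "second daughter"]))
```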
Method
In the field of architecture, there are various approaches, one of which is Metaphorical Architecture. Examining the word "metaphor", its origin comes from the Latin word "metaphora", which means "to carry", and the Greek word μεταφορά (metaphora), which means "to transfer", derived from μεταφέρω (metapherō, metapherein), meaning "to carry" or "to transfer" [6]. According to Larson in 1998, a metaphor is a figurative expression based on comparison [7]. Meanwhile, Metaphorical Architecture, as defined by Anthony C. Antoniades, a Greek-American architect and author, is a way to understand something by explaining one object using another object and attempting to see one object as something different [8].
According to Anthony C. Antoniades in his book titled "Poetics of Architecture: Theory of Design", metaphorical architecture is classified into three categories: Intangible Metaphor (Abstract Metaphor), Tangible Metaphor (Concrete Metaphor), and Combined Metaphor. Firstly, Intangible Metaphor (Abstract Metaphor) involves the emergence of metaphors within the concept and ideas, capturing the essence of objects or things being metaphorized. In the context of architecture, Intangible Metaphor refers to the use of abstract concepts, ideas, or values like individualism, nature, tradition, etc., to shape design elements within buildings. This allows architects to convey deeper meanings through physical elements in architectural design. Secondly, Tangible Metaphor pertains to architectural concepts referencing tangible, visually perceptible objects. Metaphors in this category can be visually manifested through architectural forms and materials. Lastly, Combined Metaphor refers to architectural designs that incorporate both concrete and abstract metaphors in their concepts, ideas, perceptions, and forms. In Combined Metaphor, the concept and visuals complement each other as fundamental elements, and visualization serves to achieve the desired foundational qualities in architectural design [9].
In the development of the concept for the Indonesia Islamic Science Park (IISP), an intangible architectural metaphor approach will be applied. The IISP design will metaphorically draw inspiration from the traditional Tanean Lanjhang house originating from Madura Island. This research discusses the stages of transformation from non-architectural sources of information to the architectural syntax target using the domain-to-domain transfer method [10]. Objects are identified by collecting data through literature review and field surveys. Subsequently, the selection of objects from the source domain is carried out and reduced towards the target domain through a transfer framework (Figure 2). The final outcome of this domain-to-domain transfer will result in design concepts and criteria for the spatial program and zoning of the Indonesia Islamic Science Park. The aim is for this concept to be embraced by the Madurese community and the general public.
Results and Discussion
The Indonesia Islamic Science Park is part of the development plan outlined in the KSN Gerbangkertosusila, East Java. This park is envisioned as an internationally recognized center for halal tourism in East Java. The application of the Tanean Lanjhang spatial pattern concept as the spatial program for the Indonesia Islamic Science Park aims to accommodate community acceptance. By implementing this concept, the zoning within the Indonesia Islamic Science Park attempts to follow the Tanean Lanjhang spatial pattern, which is closely associated with the local community, especially the Madura community.
From an architectural perspective, Tanean Lanjhang consists of four building elements and a long central courtyard. These building elements and the courtyard serve various functions and activities. Using an intangible metaphorical approach (aligning with the activities and functions within each zone), these four building elements and the courtyard are translated into the spatial program of the Indonesia Islamic Science Park. Additionally, by metaphorically translating the spatial pattern, another aspect that can be applied is the east-west orientation of Tanean Lanjhang, which can also be incorporated into the spatial program of the Indonesia Islamic Science Park.
Based on the intangible metaphorical architectural approach, considering the activities accommodated and the functions of the buildings, the resulting spatial program can be seen in Table 1. According to Table 1, the Langgar or mosque in Tanean Lanjhang becomes the Islamic zone in the Indonesia Islamic Science Park. This zone accommodates similar activities and features the same building, which is the central mosque of the Indonesia Islamic Science Park (Figure 3). This mosque also serves as one of the landmarks of the park. The traditional Madurese houses, known as Tongghuh, become the Identity zone [10]. This zone is chosen because Tongghuh (residential homes) possess unique characteristics in their wall ornaments, which are exclusive to Tanean Lanjhang. The Identity zone is located on the north side of the Indonesia Islamic Science Park. On the south side, there are two other zones: the Economic zone and the Tourism and Education zone. The Economic zone is derived from the metaphor of the dhapor building in Tanean Lanjhang, accommodating economic activities. Meanwhile, the Tourism and Education zone is a metaphor for the kandhang structure. Activities housed within the kandhang have the potential to serve as livestock tourism and educational facilities [11]. The concept of the space program and zoning is supported by an arterial road on the western side of the design site. This arterial road meets the requirements and is a one-way street. On the west side, the road is designed to serve as the entrance to the Identity zone and the Islamic zone. Subsequently, the exit road is located in the Islamic zone and the Economic zone. In addition to the arterial road, there are existing collector roads that serve as entrance and exit points for the Economic zone and the Tourism and Education zone (Figure 3). With this site layout (Figure 5) and the arrangement of the spatial program (Figure 4), it can be identified that the mosque is situated in the center, in accordance with the Tanean Lanjhang spatial pattern. To the east of the mosque, there is a green open space that extends to the eastern side.
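Viewed as the source-to-target transfer introduced in the Method section, the zoning above reduces to a one-to-one mapping from Tanean Lanjhang elements to IISP zones. A minimal sketch mirroring the pairings stated in the text and Table 1 (variable names are illustrative):

```python
# Source domain (Tanean Lanjhang element) -> target domain (IISP zone),
# following Table 1 and the paragraph above.
tanean_to_iisp = {
    "langgar / kobhung": "Islamic zone (central mosque, park landmark)",
    "tongghuh (houses)": "Identity zone (north side)",
    "dhapor (kitchen)": "Economic zone (south side)",
    "kandhang (stable)": "Tourism and Education zone (south side)",
    "tanean (long yard)": "green open space extending east of the mosque",
}

for source, target in tanean_to_iisp.items():
    print(f"{source:20} -> {target}")
```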
Conclusion
The development of the Indonesia Islamic Science Park area began with an observation of the Madurese community's strong resistance to developments that do not align with Madurese values and culture. It is essential to acknowledge that any development in the region that does not respect the values and traditions of the Madura people is likely to face rejection. Conversely, the development of the Indonesia Islamic Science Park offers numerous benefits and positive impacts that can be enjoyed by both the Madurese community and the general population.
Therefore, it becomes imperative to shape the program and spatial concepts in alignment with Madurese culture, making the park more readily accepted by the Madura community. The metaphorical architectural approach has been employed to translate the Tanean Lanjhang arrangement patterns into the Indonesia Islamic Science Park area.
Through this spatial program, the Indonesia Islamic Science Park can grow according to its predefined goals and plans while serving as a platform for the development of Madurese culture. It can
Figure 1. Tanean Lanjhang. Source: personal documents.
Figure 4. Indonesia Islamic Science Park Blockplan: Zoning Division. Source: personal documents.
Table 1. Results of Spatial Program Metaphors for the Indonesia Islamic Science Park based on the Tanean Lanjhang Spatial Pattern. | 2024-05-31T15:08:41.821Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "9e4c16084c3cd357facff1bfb9cfe2a865290310",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1351/1/012005/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fcbaee94d6c788ab786defe6718ccd843db9f1cb",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
44101199 | pes2o/s2orc | v3-fos-license | Efficacy of Supportive Therapy of Allergic Rhinitis by Stinging Nettle (Urtica dioica) root extract: a Randomized, Double-Blind, Placebo-Controlled Clinical Trial.
The aim of this study was to survey the exact benefit of this herb in the management of clinical and laboratory signs and symptoms of allergic rhinitis. In a randomized double-blind clinical trial, 74 patients with the signs and symptoms of allergic rhinitis and a positive skin prick test were selected and randomly divided into 2 groups who were given Urtica dioica (150-mg Urtidin® F.C. tablet) or placebo for one month. Their signs and symptoms, eosinophil percentage on nasal smear, serum IgE, and interleukin (IL-4, IL-5) and interferon-γ levels were recorded. Forty patients completed the trial. Based on the Sino-Nasal Outcome Test 22 (SNOT-22), a significant improvement in clinical symptom severity was observed in both groups (P < .001). Furthermore, a statistically significant reduction in mean nasal smear eosinophil count was observed after treatment with Nettle (P < .01). However, the mean IgE, IL-4 and IL-5 levels in the study group before and after treatment with Nettle saw no significant changes (P > .1). Intergroup pre- and post-treatment laboratory findings suggested that there was a significant difference in post-treatment changes of mean IFN-γ levels between the study and placebo groups (P = 0.017). Although the current study showed certain positive effects of Nettle in the management of allergic rhinitis on controlling the symptoms based on the SNOT-22, similar effects were demonstrated by placebo as well. We believe that our limitations underscore the need for larger, longer-term studies of Nettle for the treatment of allergic rhinitis.
Introduction
Allergic rhinitis is the most common allergic disorder in various regions, affecting 20 percent or more of the general population (1-3), and this figure continues to rise (4). Persistent allergic rhinitis is an IgE-mediated inflammatory disorder of the nasal mucosa characterized by continuous symptoms present for more than 4 days a week (on consecutive weeks): nasal congestion, rhinorrhea, sneezing, itching, pruritus of the conjunctiva, nasal mucosa and oropharynx, allergic shiners, and lacrimation, along with ocular symptoms and fatigue (5). The condition can be caused by environmental agents such as dust mites, mold, pollen, fungus spores, cockroaches, grass, animal dander, and feathers, and also by food sensitivities, structural abnormalities, metabolic conditions, or drugs (2, 6). Allergen avoidance is the essential component of allergy management but is not always practical for all patients (5). There is a wide range of over-the-counter (OTC) medications on the market for potential allergy management. However, many of these are accompanied by adverse side-effects like sedation, headache, dry mouth, drowsiness, impaired learning/memory, and cardiac arrhythmias, projecting the need for newer therapeutic strategies that could decrease the morbidity associated with allergic rhinitis and with already in-use drug regimens. On the other hand, various studies have demonstrated that rhinitis patients are tormented by repeated nose blowing, have a disrupted sleep pattern, are fatigued, and suffer from a significant decline in concentration, verbal learning, decision-making speed, and psychomotor speed, which in turn might lead to a considerable reduction in productivity at work, frequent absenteeism from work and school, and a significant decline in general quality of life (7-11). As the financial costs and the negative impact of allergic rhinitis on quality of life are of high importance, and as it is documented as a major risk factor for developing future asthma (12), an effective treatment would be extremely valuable.
In recent years, significant changes have occurred in the strategies of allergic rhinitis treatment (13). On the other hand, there has been a growing trend towards using herbal therapy for both medical and economic reasons (14). It has been reported that herbal therapy is frequently used for the treatment of allergic diseases in various parts of the world (15-17). Based on these facts, there is a need among general physicians, otolaryngologists and immunologists for more knowledge about herbal therapies (18), and a need for pharmaceutical scientists to further investigate and document the actual efficacy of such treatments.
Nettle (Urtica dioica L., family Urticaceae) is a well-known medicinal plant that has long been used worldwide in complementary and alternative medicine (19). It is native to Eurasia and is widely distributed throughout the temperate regions of the world, including the U.S. (20). Nettle, a perennial, temperate dioecious plant, prefers wet, nutrient-rich soil, lighted places and a hot to mild climate, and tends to grow in large patches (21). It has 2-4 cm long, oval, cordate, fleshy, drooping, opposite, sharply toothed leaves. The leaves and stems are covered with persistent stinging hairs. It produces inconspicuous small green-white flowers from May to September. The fruits of nettle are dry and single-seeded (21,22). In recent years, several studies have been conducted to examine and confirm the medicinal properties of Nettle and investigate the underlying biochemical mechanisms of such activities. Nevertheless, confirmatory clinical trials in humans are still needed. Nettle's extensive use in medicine, and the entry of its products (such as oral capsules and topical solutions, either alone or in combination with other herbs) into the pharmaceutical industry (23, 24), provoke the need for a precise knowledge of the herb's pharmacological properties. An overview of the different uses of this herb and the most relevant active ingredients is provided in Table 1.
Of particular interest to us are the reported antioxidant and anti-inflammatory properties of Nettle, which have been investigated and confirmed in several studies. The role of oxidants and oxidative stress in the pathophysiology of allergic rhinitis has been confirmed in several studies (25, 26). Mittman et al. (27) reported that while a freeze-dried extract of nettle leaves reduced allergy symptoms based on global assessment at the end of a double-blind clinical trial, only a small difference was observed between the herb and placebo on the daily response diaries. Given such controversial evidence, we face a lack of sufficient recent proof to support or refute the use of Urtica dioica in the treatment of allergic rhinitis. Herein, we aimed to survey the exact benefit of this herb in the management of clinical and laboratory signs and symptoms of allergic rhinitis.
Patients and Methods
This study was a randomized, double-blind clinical trial which evaluated the additive effect of Nettle on reducing the signs and symptoms of allergic rhinitis. In this study, 100 patients with allergic rhinitis who visited the Allergy and the Ear, Nose, and Throat (ENT) clinic of Qaem educational hospital, Mashhad, Iran, from June 2013 to April 2014 were assessed for eligibility. This clinical trial was registered at www.irct.ir under number IRCT2013102715177N1 and was approved by the ethics committee of Mashhad University of Medical Sciences (MUMS). All participants completed informed consent forms according to the requirements of the ethics committee within the hospital. Of the 74 patients who fully met the inclusion criteria, 40 patients completed the trial and underwent laboratory tests at both time points (Figure 1). Because the present work was a pilot study at the time of initiation, we chose the sample size based on the number of referral cases to our clinics and the available material for performing such a study. The inclusion criteria for this study were the clinical signs/symptoms of allergic rhinitis and a positive allergy skin test for at least one of the tested allergens (based on the studies that have investigated the prevalence of allergy and common allergens in this region, a total of 18 common regional allergens were tested and reported in 4 categories: tree, mixed grass, Salsola (weed), and Alternaria (mold)). Persistent allergic rhinitis was defined as related symptoms present for more than 4 days a week and on consecutive weeks, with the realization that patients usually suffer almost every day. For all these patients, oral antihistamines (H1 type) and also inhaled corticosteroids had been previously administered according to the standard treatment protocol of allergic rhinitis, but patients had experienced no or very little clinical response. Using a simple randomization method, the cases were divided into 2 groups (40 cases each); both groups also received the routine daily treatment for allergic rhinitis: Loratadine 10 mg (Shahid Ghazi Pharmaceutical Co., Tabriz, Iran) daily and nasal saline rinses (Tehran Chemie Pharmaceutical Co., Tehran, Iran) 3 times a day. Along with this therapy, a 1-month treatment course of Nettle was administered as 150-mg F.C.
tablets (Urtidin®, Barij Essence Pharmaceutical Co., Kashan, Iran) prescribed 4 times daily for the study group. The control group received placebo for the same duration. The placebo was prepared from excipients used as preservatives or carriers besides the main therapeutic components, matched for size, shape, and volume of contents, and manufactured by the same company. This experiment was approved by the Ethics Committee of Mashhad University of Medical Sciences; all patients were fully informed about the study protocol, and a signed informed consent was obtained from each of them. Patients with any other etiology for rhinitis, or with other underlying systemic diseases, and those taking any kind of herbal drugs with antioxidant effects were excluded from the study. The clinical symptom severity of the patients was evaluated by the standard Sino-Nasal Outcome Test (SNOT-22) questionnaire. A skin test for all the common regional aeroallergens (tree, mixed grass, Salsola (weed), and Alternaria (mold)) was also performed for each case to confirm the allergic basis of the disease. Laboratory tests were performed as previously described (28). The percentage of eosinophils on the nasal smear was then quantified by the use of a high-power field microscope (HPF 4); a 5-mL blood sample was collected from each patient at baseline and after 4 weeks (end of study period). Plasma was frozen and then stored for analysis at the end of the study. Total blood IgE level was measured through the use of enzyme-linked immunosorbent assay (ELISA) kits (manufactured by Pishtaz Teb Diagnostics, Tehran, Iran) both at baseline and at the end of the trial.
Besides, blood samples were obtained to investigate the concentration of interleukin (IL)-4 (the major Th2-inducing cytokine), IL-5 (an inciter of eosinophil accumulation), and interferon (IFN)-γ (the major Th1-inducing cytokine): at each time point, a 2-cc blood sample was sent to the Bu-Ali immunologic research center (Mashhad, Iran), where the lymphocytes were initially separated and cultured for 48 h under polyclonal stimulation by implementing the Ficoll method (Difco, Bacto Laboratories Pty Ltd, Liverpool, England). Once stimulated with the mitogen agent PMA (which increases the concentration of cytokines and facilitates their measurement), the supernatant was collected and stored at -80°C pending further analysis. At the end of the trial, cell supernatants were analyzed by ELISA assay to measure the cytokine concentrations as directed by the supplier (Sanquin Blood Supply Foundation, Amsterdam, the Netherlands). Cytokine data are expressed as the difference between the spontaneous culture and the control (pg/mL). After taking the first blood sample by a single physician, all patients were randomly and in a double-blind manner divided into the study and control groups. A simple randomization method was applied where consecutive patients were assigned to the study or control group intermittently. Group selection was by chance (coin flip). At the end of the trial, clinical and laboratory signs and symptoms were once again recorded for each patient. Data were then analyzed with SPSS software (version 19; SPSS, Inc, an IBM Company, Chicago, Illinois). The pre- and post-treatment changes in each group and between the 2 groups were compared using the paired samples t test and independent samples t test, respectively.
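As a rough sketch of the comparisons just described (paired samples t test for within-group pre/post changes, independent samples t test for the between-group comparison of changes), the snippet below uses hypothetical SNOT-22 scores; the trial itself used SPSS, and the array contents here are simulated for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient SNOT-22 scores (20 patients per group)
nettle_pre = rng.normal(50, 10, 20)
nettle_post = rng.normal(42, 10, 20)
placebo_pre = rng.normal(50, 10, 20)
placebo_post = rng.normal(45, 10, 20)

# Within-group pre/post comparison: paired samples t test
t_within, p_within = stats.ttest_rel(nettle_pre, nettle_post)

# Between-group comparison of pre-to-post changes:
# independent samples t test
nettle_change = nettle_post - nettle_pre
placebo_change = placebo_post - placebo_pre
t_between, p_between = stats.ttest_ind(nettle_change, placebo_change)

print(f"within-group:  t = {t_within:.2f}, p = {p_within:.3f}")
print(f"between-group: t = {t_between:.2f}, p = {p_between:.3f}")
```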
Results
We evaluated the clinical and laboratory findings of 40 allergic rhinitis patients divided into 2 groups, both before and after treatment with Nettle. There was no significant difference in terms of age and sex between the 2 groups; while there were 6 (30%) men and 14 (70%) women in the study group, there were 7 (35%) men and 13 (65%) women in the control group. Mean (±SD) age of the participants in the study and control groups was 23.98 (±10.72) years and 28.40 (±10.46) years, respectively.
The prevalence of clinical symptoms among all the studied cases is shown in Table 2. The most frequent symptoms were sneezing and nasal blockage, with frequency rates of 100 and 97.5 percent, respectively.
The paired samples t test and Wilcoxon signed rank test demonstrated that there was a significant decrease in symptom severity (based on SNOT-22) following treatment in the study group (P < .001). Furthermore, a statistically significant reduction in mean nasal smear eosinophil count was observed after treatment with Nettle (P < .01). However, the mean IgE, IL-4 and IL-5 levels in the study group before and after treatment with Nettle saw no significant changes (P > .1). Mean IFN-γ levels experienced a non-significant rise in the study group (P = .068).
Mean clinical symptom severity based on SNOT-22 before and after treatment showed a statistically significant decrease after treatment in the control group (P < .001). Pre- and post-treatment mean nasal smear eosinophil count, serum IgE, IL-4, and IL-5 levels in the control group also demonstrated no significant difference after treatment with placebo (P > .1). IFN-γ levels were significantly reduced following placebo treatment (P = .023).
Intergroup Pre- and Post-treatment Clinical and Laboratory Findings
The difference in pre- and post-treatment mean IFN-γ level with the 95% confidence interval was -0.75 ± 1.96 and -0.55 ± 1.27 μg/mL in the intervention and control groups, respectively. Therefore, intergroup pre- and post-treatment laboratory findings suggested that there was a significant difference in post-treatment changes of mean IFN-γ levels between the 2 groups (P = 0.017). Table 4 demonstrates that there was no significant difference in pre- and post-treatment changes of mean clinical symptom severity, nasal smear eosinophil count, serum IgE, IL-4 and IL-5 levels between the 2 groups (P = .25, P = .142, P = .494, P = .259, and P = .680, respectively).
Discussion
Ayers et al (29) have reported that adenine, nicotinamide, synephrine and osthole, found in Urtica dioica, have anti-inflammatory and antiallergenic properties. All these compounds were previously found to have significant anti-inflammatory effects. Interestingly, synephrine, an alkaloid, has long been used as a nasal decongestant (30) and is used in traditional Chinese medicine for the treatment of seasonal allergy and other inflammatory disorders (31). More recently, Urtica dioica extract was shown to exert in-vitro inhibition of several key inflammatory events that cause allergic rhinitis symptoms. These include: 1. antagonist and negative agonist activity against the histamine-1 (H1) receptor, which blocks histamine production and release; 2. inhibition of mast cell tryptase, hindering mast cell degranulation and the consequent release of a host of proinflammatory cytokines and chemokines that lead to the appearance of allergy symptoms; 3. inhibition of cyclooxygenase-1 (COX-1) and cyclooxygenase-2 (COX-2) (both key enzymes involved in the induction of many inflammation events associated with allergic rhinitis) and therefore prevention of prostaglandin formation; and 4. hematopoietic prostaglandin D2 synthase (HPGDS) inhibition, which specifically deters prostaglandin D2 production, a primary proinflammatory mediator in allergic rhinitis (32). However, no recent study had yet been performed on the impact of Nettle (with proven antioxidant and anti-inflammatory effects) in the treatment of allergic rhinitis. It should be mentioned that in numerous experiments, other herbal products with established antioxidant and anti-inflammatory effects have been studied in this disease, and a satisfactory result, mainly in relieving major clinical symptoms and enhancing the quality of life of such patients, has been achieved (28, 33). This motivated us to perform a randomized, double-blind clinical trial to investigate the efficacy of supportive therapy of allergic rhinitis by Urtidin® tablet.
We found that the dominant symptoms of allergic rhinitis recorded in our patients were similar to those of previous studies. These mainly included sneezing, nasal congestion and clear, watery rhinorrhea. Sleep pattern disorder was also widely seen in the patients. The severity of clinical symptoms according to the patients' sex showed no significant difference between the 2 sexes. This factor has not been separately included in previous studies.
Here, apart from assessing the conventional clinical symptoms for both diagnosis and evaluation of the degree of recovery, we evaluated laboratory signs related to allergic rhinitis for the first time. Elicitation of a Th2 response and a decrease in the Th1 response are the typical features of inflammatory processes like allergic diseases (34). The secretion of cytokines by Th2 cells leads to the production of specific IgE antibodies by B lymphocytes. IL-4 (the major Th2-inducing cytokine), which is a necessary signal to B lymphocytes, induces the synthesis of IgE antibodies by B cells. In addition, IL-5 induces the accumulation of eosinophils in tissue, which is the hallmark (but not the only cause) of allergic inflammation; the eosinophil is reported to be the major effector cell involved in chronic or perennial rhinitis.
In the current study, Nettle co-administered with other routine treatments of allergic rhinitis for 1 month led to a significant decrease in the severity of clinical symptoms (based on the SNOT-22). Furthermore, the nasal smear eosinophil count dropped significantly after treatment with Nettle. Saxena et al also reported a significant decrease in the total eosinophil count after treatment with an herbal mixture of Aller-7.
We observed improvement in clinical symptoms in the control group as well. However, in this group, IFN-γ, a Th1 cytokine, significantly declined, meaning that it could lead to the progression of allergic rhinitis after a while. In other words, the improvement in clinical symptoms in the group which received placebo could be temporary, as the decline in the Th1 response can further enhance the allergic symptoms. Previously, some studies have demonstrated the role of psychological factors in the alleviation or worsening of allergic conditions. Also, based on the predictable, short-term positive psychosomatic effect of placebo in any disease, and given that due to our limitations we could not follow the patients any longer, we cannot report on the effect of adding placebo to the routine treatment regimens of allergic rhinitis. This was not the primary aim of the study.
As mentioned earlier, serum IgE and cytokine levels had not been measured in any previous study that administered antioxidant compounds for the treatment of allergic rhinitis. In the current study, their post-treatment levels show that there was no significant difference in either group (except for IFN-γ). This, however, does not refute the efficacy of treatment. It has already been demonstrated that total serum IgE has low sensitivity (43.9%) as well as limited clinical value when evaluating a patient for allergic rhinitis. Because of limited facilities, we evaluated the immunologic condition of the patients by studying serum IgE and cytokine levels, whereas measuring such factors in the nasal discharge (although not yet supported by all clinicians) has a much higher specificity and accuracy in evaluating the severity of allergic conditions limited to the upper airway system. This is due to the fact that other simultaneous systemic inflammatory disorders can also affect serum IgE and cytokine levels. Furthermore, the difference in response rates could be due to remarkable differences in oxidative stress laboratory tests, which were not examined in this study because of our limitations, but we recommend their investigation in future research in this field.
With respect to safety, our results were consistent with past research in reporting preliminary evidence of no serious, deleterious adverse effects with the systemic administration of Nettle.
Although this experiment had an arguable outcome, several points should be kept in mind for future similar trials. Most importantly, we could not use Nettle alone for controlling the patients' symptoms; this was due to ethical considerations and the patients' probable dissatisfaction with being treated by a single experimental drug. Thus, Nettle was applied along with other routine treatments. Second, as our study was only 1 month in duration, it presented no direct evidence of any potentially much more positive effects associated with long-term usage. Third, our trial ultimately encompassed only 40 subjects, which resulted in insufficient statistical power to prove the insignificant changes. Fourth, the outcome of our study applies only to the preparation used: we conducted our research based on the assumption that the manufacturer's statements about active ingredients were accurate. However, in future research, it would be logical to consider testing for this prior to administration.
Conclusion
The current study shows certain positive effects of Nettle in the management of allergic rhinitis on controlling the symptoms based on the SNOT-22, and similar effects were demonstrated by placebo. Hence, the exact efficacy of Urtica dioica in this respect could not be determined in this study. We believe that our limitations underscore the need for larger, longer-term studies of different pharmaceutical dosage forms of Nettle for the treatment of allergic rhinitis. | 2018-06-21T12:41:03.892Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "fc96dff81354f08126c5c20be07ed6fa6da2059a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fc96dff81354f08126c5c20be07ed6fa6da2059a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266596812 | pes2o/s2orc | v3-fos-license | Fetal brain response to worsening acidosis: an experimental study in a fetal sheep model of umbilical cord occlusions
Perinatal anoxia remains an important public health problem as it can lead to hypoxic–ischaemic encephalopathy (HIE) and cause significant neonatal mortality and morbidity. The mechanisms of the fetal brain’s response to hypoxia are still unclear and current methods of in utero HIE prediction are not reliable. In this study, we directly analysed the brain response to hypoxia in fetal sheep using in utero EEG. Near-term fetal sheep were subjected to progressive hypoxia induced by repeated umbilical cord occlusions (UCO) at increasing frequency. EEG changes during and between UCO were analysed visually and quantitatively, and related with gasometric and haemodynamic data. EEG signal was suppressed during occlusions and progressively slowed between occlusions with the increasing severity of the occlusions. Per-occlusion EEG suppression correlated with per-occlusion bradycardia and increased blood pressure, whereas EEG slowing and amplitude decreases correlated with arterial hypotension and respiratory acidosis. The suppression of the EEG signal during cord occlusion, in parallel with cardiovascular adaptation could correspond to a rapid cerebral adaptation mechanism that may have a neuroprotective role. The progressive alteration of the signal with the severity of the occlusions would rather reflect the cerebral hypoperfusion due to the failure of the cardiovascular adaptation mechanisms.
and appears to be a poor predictor of ischemic brain damage and long-term outcome 6,23-26. Direct study of the fetal brain to analyse its response to anoxia in real time therefore seems crucial, in order to detect brain damage as early as possible.
Therefore, the objective of this study was to assess the fetal sheep cerebral response to progressive hypoxia using EEG visual and quantitative analysis in our model of UCO.
Ethics
The anesthesia, surgical and experimental protocols followed the recommendations of the National Institutes of Health guide for the care and use of laboratory animals (NIH Publications No. 8023, revised 1978) and the ARRIVE guidelines. The study was approved by the Animal Experimentation Ethics Committee of "Hauts-de-France" (CEEA #2016121312148878).
For EEG placement, we have recently developed a new EEG recording technique using electrodes placed in utero under the scalp of the foetus, providing a real-time recording of its brain activity 27. The fetal head was exposed through the hysterotomy. Four needle electrodes (MYWIRE 10, MAQUET, Germany) were fixed on both sides over the frontal and parietal cortex, 10 mm lateral to the sagittal suture and 5 mm anterior to the frontoparietal suture. A reference electrode and a ground electrode were placed over the occiput. Each electrode was placed under the cranial periosteum through a 5 mm skin incision, in contact with the bone to completely isolate it from the amniotic fluid. The electrode was attached to the scalp with suture (Vicryl 2/0 Rapide). An inflatable silicone occluder (OC VO-16HD-DOCXS Biomedical Products-Ukiah, California) was placed around the fetal umbilical cord.
The placental circulation was maintained intact and the fetus was placed back into the uterus after instillation of 500 mg of amoxicillin-clavulanic acid (Amoxicillin-clavulanic acid SANDOZ®, Sandoz, Levallois-Perret, France) into the amniotic fluid. The occluder, catheters, EEG and ECG wires connected to extenders were externalized on the right lateral flank of the sheep by subcutaneous tunnelling.
After the operation, the permeability of the fetal catheters was maintained by a daily injection of 10 IU/mL heparinised isotonic saline (Heparin CHOAY®5000IU, Sanofi-aventis France, Paris).A maternal intramuscular injection of 0.3 mL/10 kg of Buprenorphine (Bupaq®, Virbac, France) was performed 24 and 48 h after the operation to ensure postoperative analgesia.Antibiotic prophylaxis was performed by a daily injection of 1 mL/kg of Amoxicillin intramuscularly (Clamoxyl LA® 150 mg/mL, Zoetis, France) during the 3 days of postoperative rest.
Experimental procedure
The experimental protocol started on the fourth day after surgery. After a stability period of one hour, anoxia was induced by repeated umbilical cord occlusions of increasing frequency, as previously described 17,20,30. Three phases of one hour each were performed successively, with 1-min occlusions repeated every 5 min during phase A (mild occlusions), every 3 min during phase B (moderate occlusions) and every 2 min during phase C (severe occlusions). Every 20 min a 5-min period without occlusion was observed, called a "pause" (Fig. 1). The protocol was stopped if the arterial pH fell below 6.90.
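For clarity, the occlusion schedule can be written out programmatically. A sketch that generates the start times (in minutes) of the 1-min occlusions within one phase, assuming the 5-min pause occupies the last 5 min of each 20-min block and the occlusion clock keeps running through it (both assumptions are ours; only the periods and durations come from the text):

```python
def occlusion_starts(period_min, phase_min=60, pause_every=20, pause_len=5):
    """Start times (min) of 1-min occlusions within one 1-h phase.

    `period_min` is the repetition period: 5, 3 or 2 min for
    phases A, B and C respectively.
    """
    starts, t = [], 0
    while t < phase_min:
        # Skip occlusions falling inside the assumed pause window
        in_pause = t % pause_every >= pause_every - pause_len
        if not in_pause:
            starts.append(t)
        t += period_min
    return starts

for phase, period in zip("ABC", (5, 3, 2)):
    print(phase, occlusion_starts(period))
```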
Data acquisition
Fetal EEG signal and haemodynamic data were recorded continuously during the stability and occlusion phases. The EEG signal from each channel was recorded monopolar against the reference electrode using EEG System plus Evolution software (Micromed SAS, Macon, France), filtered between 0.5 and 70 Hz and digitised at a sampling rate of 256 Hz. The signal was analysed before the occlusions during the stability phase (1 h), per-occlusion during the last occlusion of each phase (1 min) and during the whole duration of each occlusion phase (1 h).
The fetal heart rate was recorded by the 4 precordial ECG electrodes and the arterial pressure monitored by the fetal arterial catheters connected to pressure sensors (Pressure Monitoring Kit, Baxter, France). These haemodynamic data were visualised on a multiparametric anaesthesia-resuscitation monitor (Merlin monitor, Hewlett Packard, Palo Alto, CA, USA) and recorded on computer media using Physiotrace® software. Heart rate (HR) and mean blood pressure (MBP) were analysed before the occlusions during the stability phase (1 h), per-occlusion during the last occlusion of each phase (1 min) and between the occlusions during the last pause of each phase (5 min).
Gasometric data (pH, pO2, pCO2, lactate) were measured from arterial blood samples using the I-STAT® micro-method (Abbott Laboratories, Abbott Park, IL, USA). Measures were taken in the stability phase and during the last pause of each occlusion phase.
EEG analysis
The entire EEG recordings were visually analysed for frequency, amplitude and lability by two examiners (LL and SN) trained in neonatal EEG analysis, using System Plus Evolution software (Micromed SAS, Macon, France). The EEG was displayed in bipolar montage on 30-s pages (filter 0.5-70 Hz, gain 70 µV).
The raw EEG signal was analysed quantitatively using three qEEG features available in the program "EEG Analyser" (Micromed®): the minimum Amplitude Index (min AI), the Burst Suppression Ratio (BSR) and the Spectral Edge Frequency (SEF). These features were measured on 2-s intervals on frontal and parietal channels in bipolar montage and then averaged over the two channels.
Min AI was calculated as the minimum peak of the EEG amplitude envelope (CFM) over the analysed interval (µV), after filtering the signal at 2-20 Hz. BSR was measured as the ratio of signal suppression, detected for an amplitude < 10 µV lasting > 500 ms, over the total length of the analysed interval (%). SEF was defined as the frequency (Hz) below which 95% of the power in the EEG exists in the analysed interval.
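A minimal sketch of the three features as defined above, for one 2-s single-channel interval sampled at 256 Hz; the Micromed implementation is proprietary, so this follows the textual definitions only, with the thresholds given in the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

FS = 256  # sampling rate (Hz), as in the recordings

def qeeg_features(x, fs=FS):
    """Compute (min AI, BSR, SEF) for one EEG interval `x` in µV."""
    # Min AI: minimum of the amplitude envelope after a 2-20 Hz band-pass
    b, a = butter(4, [2 / (fs / 2), 20 / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, x)))
    min_ai = envelope.min()

    # BSR: share of the interval spent suppressed, counting only runs
    # with amplitude < 10 µV that last longer than 500 ms
    suppressed = np.abs(x) < 10.0
    min_run = int(0.5 * fs)
    total = run = 0
    for flag in suppressed:
        if flag:
            run += 1
        else:
            if run >= min_run:
                total += run
            run = 0
    if run >= min_run:
        total += run
    bsr = 100.0 * total / len(x)

    # SEF: frequency below which 95% of the spectral power lies
    freqs, psd = welch(x, fs=fs, nperseg=len(x))
    cum = np.cumsum(psd)
    sef = freqs[np.searchsorted(cum, 0.95 * cum[-1])]
    return min_ai, bsr, sef

t = np.arange(2 * FS) / FS
demo = 20 * np.sin(2 * np.pi * 8 * t)  # 8 Hz, 20 µV test signal
print(qeeg_features(demo))
```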
Median values of each qEEG and haemodynamic feature were calculated during the stability phase (1 h pre-occlusion), during the last occlusion of each phase (1 min) to analyse the immediate response to anoxia, and during the whole duration of each phase (1 h of mild, moderate and severe occlusions) to analyse the progressive brain adaptation to repeated anoxia.
Statistical analyses
Gasometric, haemodynamic and qEEG data were described by the median (1st-3rd quartile) for each phase studied.
The difference between the stability phase and the three occlusion phases A, B and C was analysed by a Friedman test for repeated measures (significance level p ≤ 0.05). When the difference was significant, a post-hoc analysis comparing the different phases 2 by 2 was performed by a Wilcoxon test (significance threshold p ≤ 0.05).
The correlation between the qEEG and the different gasometric and haemodynamic markers was analysed by a Spearman test (significance level p ≤ 0.05).
These statistical analyses were performed with XlStat software for Microsoft Excel (AddInSoft, Paris, France) and with SPSS version 20.0 software (SPSS, IBM).
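The same testing scheme is easy to reproduce with SciPy. A sketch with hypothetical per-fetus medians across the four phases (the numbers are simulated for illustration only; the actual analyses were run in XlStat and SPSS as stated above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-fetus SEF medians (9 fetuses x 4 phases:
# stability, A, B, C), with a progressive slowing built in
sef = rng.normal([12.0, 11.5, 10.0, 8.5], 1.0, size=(9, 4))

# Repeated-measures comparison across the four phases
stat, p = stats.friedmanchisquare(*sef.T)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparison if the Friedman test is significant
if p <= 0.05:
    w, p_ab = stats.wilcoxon(sef[:, 1], sef[:, 2])  # phase A vs phase B
    print(f"Wilcoxon A vs B: W = {w:.1f}, p = {p_ab:.4f}")

# Correlation between a qEEG feature and a gasometric marker (e.g. pH)
ph = rng.normal([7.35, 7.30, 7.15, 7.00], 0.03, size=(9, 4))
rho, p_rho = stats.spearmanr(sef.ravel(), ph.ravel())
print(f"Spearman: rho = {rho:.2f}, p = {p_rho:.4f}")
```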
Results
Fourteen pregnant ewes were instrumented. Two fetal deaths occurred in utero at postoperative day 1, one delivery occurred at postoperative day 3, one occluder ruptured during the first phase of UCO, and one foetus was excluded because it presented severe cerebral lesions whose origin appeared to be prior to the protocol. A total of 9 fetuses were included for analysis.
Data evolution with progressive hypoxia
The pH decreased progressively during the experimental procedure (Table 1), reaching a severe acidosis (pH 6.98) in phase C. PCO2 and lactate increased between each phase (p < 0.001), while PO2 remained stable (p = 0.430). Heart rate during pauses between occlusions remained stable over the different phases (p = 0.736). A per-occlusion bradycardia was recorded from the very first occlusion in phase A (85 bpm versus 178 bpm) and then remained relatively stable throughout the three occlusion phases. Mean blood pressure measured during pauses between occlusions increased between the stability phase and phases B and C (p = 0.037). Per-occlusion mean blood pressure decreased significantly in phase C compared to A and B (p = 0.003).
EEG visual analysis showed a continuous, mild-amplitude and labile signal during the stability phase (pre-occlusion), as described in our previous publication 27.
During phase A (mild occlusions) there were few visual EEG changes during and between occlusions.
In phase C (severe occlusions), occlusions visually resulted in immediate EEG changes, with a rapid decrease in amplitude or even a complete suppression of activity, sometimes preceded by transient large waves (Fig. 2). The release of occlusion was followed by high-amplitude transient waves and then a rapid recovery of activity. Between occlusions, the EEG signal was characterised by low amplitudes and slower waves than in the previous phases.
In phase B, EEG changes varied between fetuses, but some modifications could be observed from the beginning of that phase, during and between occlusions (Figs. 3, 4).
Quantitative analysis (Table 2) during the stability phase (pre-occlusion) showed that the BSR median ranged from 0 to 7% except for one foetus where the median was 26%. The SEF ranged from 5.3 to 16.3 Hz and the minimum amplitude from 0.1 to 7.0 µV. The SEF values fluctuated during the stability phase (Fig. 4), corresponding to the physiological variations observed visually.
During occlusions, the BSR (ocBSR) increased with a peak depending on the severity of the occlusion (Fig. 5), with marked inter-individual variations (Table 2). Some fetuses presented peaks of BSR during occlusion from phase A, some only in phase B or C. Compared to stability data, the BSR increase was only significant in phase C (p = 0.018) (Table 2(2)). Per-occlusion SEF (ocSEF) decreased significantly in phase C (p = 0.008), and minimum AI (ocminAI) decreased in phase C as well but not significantly (p = 0.096).
During the length of each phase, BSR levels (wBSR) increased progressively between phases, especially between phases A and B (p = 0.029) (Table 2(1)). The SEF significantly decreased in phase B compared to phase A (p = 0.003). The amplitude also decreased with each severity phase, but not significantly (p = 0.066).
Per-occlusion and whole-phase decreases in SEF and amplitude correlated with the per-occlusion decrease of MBP (respectively p = 0.041 and p = 0.001 for the SEF; p = 0.003 and p = 0.004 for min AI) and with acidosis and increased PCO2. Only the amplitude decrease correlated with the lactate increase (p = 0.034).
BSR did not correlate with biological markers and no qEEG marker correlated with PO2.
Discussion
In this study we investigated in utero fetal EEG response to progressive hypoxia, caused by repeated UCO of increasing frequency. We observed that the EEG signal was suppressed during occlusions and progressively slowed between occlusions as the anoxia protocol progressed. Per-occlusion EEG suppression correlated with per-occlusion bradycardia and increased blood pressure, whereas EEG slowing and amplitude decreases correlated with arterial hypotension and respiratory acidosis.
Amplitude decrease or EEG signal suppression was visually identifiable from the mild occlusions phase in some fetuses and evident for severe occlusions. This signal suppression was quantitatively reflected by a peak of BSR (ratio of signal suppression) and a decrease in SEF (due to the decrease in frequency and amplitude of the signal), with statistically significant modification only during severe occlusions. Quantitative amplitude decreases were not significant, but the amplitude could have been affected by the large waves often occurring at the onset and at the end of the cord occlusion.
Alterations of the EEG background could be observed visually from the moderate occlusions phase, with a progressive slowing down of the signal, which no longer normalized between severe occlusions. This was quantified by the decrease of the whole-phase SEF, especially in the severe phase. The whole-phase amplitude also tended to decrease, but the difference was not significant. The increase in BSR from the moderate occlusion phase onwards was probably related to the closer timing of the occlusions, responsible for more intense and frequent concomitant signal suppression; the signal did not appear to be discontinuous between occlusions.
Our results are comparable with those obtained from ECOG recordings during experimental anoxia protocols in lamb foetuses. In prolonged anoxia protocols (bilateral carotid occlusion or complete cord occlusion lasting more than 5 min), EEG signal suppression was observed immediately or between 30 and 90 s after occlusion depending on the study 14,16,31, accompanied by a decrease in mean amplitude, SEF and total power 12,32-34. For example, in Pulgar's 2007 study in 120-day gestation fetuses, SEF decreased from a mean of 13.6 ± 1.6 Hz to 10.6 ± 0.8 Hz during 5-min total cord occlusions 35. Cerebral signal flattening and decreases in SEF concomitant with occlusions and dependent on their severity have also been described in studies using brief repeated cord occlusion protocols 15,22,36. In Frasch's 2011 study, using the same protocol as ours in fetuses of identical term, the mean ECOG amplitude went from 88 ± 13 µV in stability to 90 ± 27 µV per mild occlusion, 81 ± 21 µV per moderate occlusion and 60 ± 12 µV per severe occlusion. Similarly, the SEF went from 14.4 ± 0.4 Hz in stability to 14.1 ± 2.1 Hz per mild occlusion, 11.2 ± 0.9 Hz per moderate occlusion and 9.9 ± 1.1 Hz per severe occlusion. Only the severe per-occlusion values were significantly lower than the stability values, and recovery was rapid between occlusions 15. A progressive decrease in SEF was also found between occlusions of increasing severity in these studies 15,22,37.
At the end of severe occlusions, Frasch et al. report a pattern of EEG-ECG synchronisation with a decrease in amplitude and a peak in SEF concomitant with bradycardia and correlated with the onset of arterial hypotension, approximately 1 h before the onset of severe acidosis 15,22,38. The authors suggest that this pattern may be related to a relative increase in fast frequencies due to the predominance of the activity of inhibitory GABA interneurons capable of high-frequency oscillations 15. In our opinion, the SEF peak around 25 Hz could be directly related to the signal suppression. In fact, in a recent study analysing the different parameters of the BIS, Connor shows that when the EEG is perfectly isoelectric, the signal power is close to zero in all frequency bands; the SEF then becomes undefined and the algorithm returns an unusual value of 30 Hz 39. Nevertheless, we did not find a peak in SEF concomitant with severe occlusions in the quantitative analysis of our tracings.
Considering the signal amplitude, the results are controversial: De Haan et al. showed a progressive decrease in ECOG amplitude during occlusions (1ʹ/2.5ʹ in fetuses of 126 days gestation), whereas Frasch et al. found no modification or even an increase in the global amplitude with the severity of the occlusion phases. This last result could be linked to a disturbance in the sleep-wake cycles of the foetus. Indeed, some studies suggest that repeated occlusions may result in an increase in large-amplitude slow activity 15,31,40.
The increase in BSR during occlusions correlated with per-occlusion bradycardia and increased blood pressure between occlusions. These haemodynamic changes correspond to cardiovascular mechanisms of adaptation to anoxia previously described in the lamb foetus 41,42, allowing the redistribution of blood flow to the central organs and the initial maintenance of cerebral perfusion.
Therefore, per-occlusion EEG suppression could reflect a rapid active adaptive neuroprotective mechanism that participates with cardiovascular adaptation in the maintenance of cerebral energy requirements. Indeed, previous experimental animal studies suggest that during acute anoxia, synaptic transmission is rapidly inhibited via inhibitory neuromodulators 43-46. This extinction of brain activity seems to reduce the energy consumed by the potassium-sodium ionic transport that accompanies the generation of synaptic potentials, and thus the brain metabolism 43-46, although the precise mechanisms of these phenomena remain to be elucidated. In parallel, the cerebral autoregulatory response to acute hypoxia appears to depend on cardiovascular adaptation mechanisms, which would allow a central redistribution of blood flow through peripheral vasoconstriction and arterial hypertension, and thus the maintenance of cerebral perfusion in the first instance. Moreover, the intensity of this cerebral "shut down" seems directly related to the severity of anoxia since the EEG modification is more pronounced during severe occlusions. Interestingly, the different fetuses did not adapt in the same way, since this per-occlusion modification was found more or less early in the experimental procedure.
Table 3. Correlation between qEEG and gasometric/haemodynamic data. HR heart rate, MBP mean blood pressure, calculated during the pause (p) and during the occlusions (oc); minAI minimum amplitude index (µV), BSR burst suppression ratio (%), SEF spectral edge frequency (Hz), calculated during the occlusion (oc) and during the whole phases (w). Spearman's correlation coefficient (r) and p-values. In bold: significant values (p < 0.05).
In contrast, the progressive alteration of the EEG background (slowing and a tendency for the amplitude to decrease) occurring from the moderate occlusions phase onwards was correlated with acidosis and arterial hypotension. This signal alteration could therefore be associated with the failure of cardiovascular adaptation mechanisms. It has been shown that tissue hypoxia leads to progressive acidosis and then cardiovascular failure, with impaired cardiac function and loss of the initial peripheral vasoconstriction resulting in arterial hypotension and then cerebral hypoperfusion 47-51. Yumoto et al. report a decrease in myocardial contractility as soon as the pH drops below 7.20 50.
During 4-min occlusions repeated every 90 min in lamb fetuses, Kaneko et al. show an increase in cerebral blood flow and perfusion pressure during the first occlusions, whereas the pressure increases less or even decreases at the end of the occlusion when occlusions are repeated 16. The global signal alteration could therefore reflect a more intense cerebral adaptive response due to cerebral hypoperfusion, or already correspond to an anoxic cerebral depolarisation. This would indicate that the autoregulation threshold has been overwhelmed and that cerebral hypoperfusion is responsible for acute anoxic-ischemic brain damage. Lotgering et al. show a loss of cerebral autoregulation from 4 min of total cord occlusion in fetuses close to term, in relation to arterial hypotension 13. We can suspect that the same phenomenon occurs when short intermittent occlusions are continued over time, particularly when the occlusions are very frequent, as in the severe phase of our protocol. De Haan et al. reported a decrease in ECOG signal amplitude in parallel with an increase in cortical impedance reflecting a cytotoxic oedema that persisted at the beginning of the recovery phase 37.
The global decrease in amplitude and SEF correlated with acidosis, mainly respiratory. The direct association between fetal EEG signal suppression and severe acidosis has been reported previously in the fetus 52. The EEG signal alteration described here could be directly related to a deleterious effect of acidosis on the brain. Conversely, some recent studies have shown that hypercapnia decreases neuronal excitability with a neuroprotective effect 53-55.
Beyond the severity and mechanism of the anoxic-ischemic insult, the constitution of ischemic brain lesions thus appears to depend on the adaptive response of the fetus and its capacity to maintain cerebral energy requirements in the acute phase. We noted inter-individual variability in cerebral responses to occlusion, also reported in previous studies 15,37,56. During prolonged carotid occlusions (30 min) in Fraser's study, some fetuses showed only a slight decrease in ECOG amplitude and SEF while others showed profound signal suppression during the occlusion 32. This inter-individual variability in the cerebral response appears to be primarily related to variable fetal cardiovascular adaptability 15,37,56. Previous mild chronic hypoxia appears to be particularly deleterious to the capacity to adapt to a superimposed acute anoxia. More rapid and profound acidosis, earlier arterial hypotension and greater suppression of EEG activity have been observed in the lamb fetus, as well as reduced carotid flow and cerebral oxygen delivery, a greater increase in cortical impedance and greater neuronal loss 15,35,36,57. The "reservoir" of adaptive capacity of each fetus therefore depends not only on the cause and the mechanism of acute anoxia but also on many factors such as fetal maturation, fetal weight, multiple pregnancy, prior chronic hypoxia, aerobic reserve or maternal temperature 58. Finally, the challenge is not only to recognise fetuses exposed to acute hypoxia during delivery but also to detect those whose adaptive capacities become insufficient. For this purpose, the fetal EEG seems to be particularly informative, allowing continuous monitoring of fetal brain activity.
In contrast to previous studies in fetal sheep, we visually analysed the raw EEG in addition to the quantitative analysis of the signal. Visual analysis guides the interpretation of the quantitative analyses by relating them to a physiological basis. This avoids the misinterpretation that can occur when quantitative results are analysed blindly. For example, as explained above, an increase in SEF above 25 Hz may reflect signal suppression with an isoelectric tracing 39, but without visual analysis of the raw EEG this increase may be interpreted as a simple acceleration in signal frequency. For this study we used quantitative EEG markers that have been shown to correlate with post-anoxia EEG in humans 59. BSR is of particular interest and, to the best of our knowledge, had never been used before in fetal sheep EEG studies. This marker appears to be more sensitive than the amplitude in reflecting periods of signal suppression, as its value is less affected by the intermittent presence of artefactual large slow waves occurring during the occasional movements of the ewe.
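To make the definition of this marker concrete, the fragment below shows one simple way a BSR could be computed from a single-channel trace; the amplitude threshold and the minimum suppression duration are illustrative assumptions, not the parameters used in this study.

import numpy as np

def burst_suppression_ratio(eeg, fs, amp_thresh=10.0, min_supp_s=0.5):
    # Percentage of the epoch spent in suppression.  A sample counts as
    # suppressed when the rectified signal stays below amp_thresh (µV)
    # for at least min_supp_s seconds; shorter dips are ignored.
    below = np.abs(eeg) < amp_thresh
    min_len = int(min_supp_s * fs)
    suppressed = np.zeros(len(eeg), dtype=bool)
    run_start = None
    for i, flag in enumerate(np.append(below, False)):
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_len:
                suppressed[run_start:i] = True
            run_start = None
    return 100.0 * suppressed.mean()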
Limitations inherent to experimental studies include the limited number of subjects and the numerous artefacts due to the technical procedures. We aimed to use a protocol that mimics the different phases of labour in humans. Moreover, we do not have objective markers of fetal brain lesions, such as those that could be provided by pathological analysis.
Conclusion
In this study, the direct cerebral response of lamb fetuses to progressive anoxia was analysed using a recently developed in utero EEG recording technique. Visual analysis of the EEG was first performed to guide and interpret the quantitative analysis of the tracings. We hypothesize that EEG signal suppression during cord occlusion could reflect a cerebral adaptation mechanism that may have a neuroprotective role. The progressive alteration of the EEG signal with increasing occlusion frequency would correspond to the failure of cardiovascular adaptation mechanisms and the onset of neurological repercussions of cerebral hypoperfusion. These qEEG markers could be used to predict in real time the failure of the fetal adaptive mechanisms to anoxia-ischemia and thus the risk of HIE. These preliminary results need to be confirmed by further studies, and an intrapartum EEG recording technique in the human fetus remains to be developed before a practically usable fetal brain function monitoring tool can be devised.
Figure 1. Experimental protocol. At day 4 after surgery: progressive anoxia induced by repeated umbilical cord occlusions of increasing frequency.
Table 1. Gasometric and haemodynamic data according to stability and occlusion phases (N = 9). Median values (1st-3rd quartile) calculated pre-occlusion in the stability phase (S); between occlusions during the last pause of each phase (A, B, C) for gasometric data, pause heart rate (pHR) and pause mean blood pressure (pMBP); per-occlusion during the last occlusion of each phase for per-occlusion heart rate (ocHR) and per-occlusion mean blood pressure (ocMBP). The p-values are calculated by a Friedman test between the stability phase and the three occlusion phases. Wilcoxon test: x significant values versus Stability; * significant values versus Phase A; + significant values versus Phase B (p < 0.05). Significant values are in bold.
Figure 2. Example of fetal sheep EEG tracing during cord occlusion. A diminution of the signal amplitude can be seen during this UCO in phase B (1 min of occlusion every 3 min); 90-s epoch. The red bar marks the occlusion. Transverse bipolar montage with derivations 1-2 corresponding to left and right frontal electrodes and 3-4 to left and right parietal electrodes.
Figure 3. Two examples of fetal sheep EEG tracings before and during cord occlusions. Extracts pre-occlusion in the stability phase (S) and between occlusions in the mild phase (A), moderate phase (B) and severe phase (C); 30-s epochs. EEG signal alteration can be seen in both fetuses from the moderate occlusions phase onwards, with slowing and a decrease in amplitude. Transverse bipolar montage with derivations 1-2 corresponding to left and right frontal electrodes and 3-4 to left and right parietal electrodes.
Figure 4. Example of qEEG curves in a fetus in the stability phase and during the protocol of progressive hypoxia. (1) Evolution of BSR (%), SEF (Hz) and minAI (µV) values in the stability phase (S), mild occlusions phase (A), moderate occlusions phase (B) and severe occlusions phase (C), of 1 h each, in fetal sheep 3. O = total cord occlusion (1 min). (2) 30-s EEG extracts in the stability phase and at the end of a mild and a severe occlusion.
Table 2. qEEG data according to stability and occlusion phases (N = 9). Median values (1st-3rd quartile) calculated in the stability phase (S) and during each whole occlusion phase (1); per-occlusion during the last occlusion of each phase (2). minAI minimum amplitude index (µV), BSR burst suppression ratio (%), SEF spectral edge frequency (Hz), calculated during the occlusion (occ) and globally during the entire phases (global). The p-values are calculated by a Friedman test between the stability phase and the three occlusion phases. Wilcoxon test: x significant values versus stability; * significant values versus Phase A; + significant values versus Phase B (p < 0.05). Significant values are in bold.
"year": 2023,
"sha1": "94e57340caf696218c83573d2d7d32208e0467c4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "37bd391714a48d032dd5c30c96a1c903429ede1a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Isolating quantum coherence using coherent multi-dimensional spectroscopy with spectrally shaped pulses
We demonstrate how spectral shaping in coherent multidimensional spectroscopy can isolate specific signal pathways and directly access quantitative details. By selectively exciting pathways involving a coherent superposition of exciton states we are able to identify, isolate and analyse weak coherent coupling between spatially separated excitons in an asymmetric double quantum well. Analysis of the isolated signal elucidates details of the coherent interactions between the spatially separated excitons. With a dynamic range exceeding 10⁴ in electric field amplitude, this approach facilitates quantitative comparisons of different signal pathways and a comprehensive description of the electronic states and their interactions.
Introduction
Coherent multi-dimensional spectroscopy (CMDS) for electronic transitions, much like equivalent techniques in infra-red (IR) [1] and nuclear magnetic resonance (NMR) spectroscopy, utilises multiple pulses that excite and probe the sample during different time periods to quantify excited state dynamics and interactions between states. In multi-dimensional NMR, this type of information facilitates complete structure determination of complex molecules, such as proteins [2]. CMDS for electronic transitions, being technically more challenging, is over 30 years of development behind multi-dimensional NMR and some way from being able to achieve an equivalent level of detail. Nonetheless, third order CMDS experiments have been used to explore energy transfer and relaxation dynamics and more recently to identify coherent coupling between excited states in light-harvesting complexes [3,4], conjugated polymers [5] and well-separated semiconductor nanostructures [6].
In these experiments three phase-locked pulses generate a signal with phase and amplitude that is measured by a heterodyne detection scheme and is proportional to the third-order susceptibility of the sample. Varying the delays between pulses results in three time periods (t₁, t₂, t₃) and three corresponding frequency domains (ω₁, ω₂, ω₃), as described in Section II. To analyse these data, 2D spectra that correlate the absorption energy (ħω₁) and the emission energy (ħω₃) for different values of t₂ are typically presented.
Coherent coupling between spatially separated systems has long been explored as a necessary requirement for quantum information and cryptography [7]. Recent discoveries suggest such phenomena appear in a much wider range of processes, including light-harvesting in photosynthesis [3,4,8]. These discoveries have been facilitated by developments in CMDS for electronic transitions [9,10,11,12]. Coherent coupling can be identified in such experiments in the form of a coherent superposition of states, which leads to peaks in 2D spectra with phase that oscillates as a function of t₂. Alternatively, Fourier transforming the data with respect to t₂ shifts these features along ħω₂ by an amount equal to the energy difference between the coupled states [13,14,15,16,17]. In simple systems these coherence pathways can thus be separated from other signal pathways that involve population relaxation, energy transfer, ground state bleach and excited state absorption. In complex systems, however (e.g. light-harvesting complexes from photosynthetic organisms), numerous states and spectral broadening lead to overlapping peaks that can be difficult or even impossible to identify and/or separate. Additionally, for systems where many-body effects are important (e.g. semiconductor nanostructures), excitation of transitions at one energy can alter the signal detected despite playing no direct role in its generation [10], which can further complicate the interpretation.
The origin of these limitations is the same broad spectral bandwidth that makes 2D spectroscopy so useful. On the one hand, the ability to explore multiple pathways simultaneously can speed up data acquisition, and the analysis of 2D peak shapes can provide more information than is otherwise accessible. On the other hand, if the many different pathways cannot be separated these advantages are lost. Several important and useful approaches to separate different pathways in broadband experiments have been established [18,19,20,11,21], yet there often remain contributions that cannot be isolated, which can lead to difficulties and uncertainty in the analysis. In such cases a CMDS experiment that can further select specific pathways would prove useful.
Two-colour four-wave mixing experiments that selectively excite and probe specific coherence pathways have recently shown some advantages over broadband CMDS [22,23,24,25]. Similarly, Wright et al. [26] have developed 'Multiresonant Coherent Multidimensional Spectroscopy', which varies the wavelength of relatively narrow-band pulses to identify coherence pathways. What has been lacking, however, is the phase stability between pulses that allows coherent multi-dimensional spectra to be obtained, and with it the ability to analyse peak shapes and fully correlate the relative contributions from different pathways.
We have combined the selectivity achieved in these multi-wavelength approaches with the phase stability required for CMDS, allowing us to perform both broadband and pathway-selective experiments that can be quantitatively compared. We utilise this pathway-selective CMDS (PS-CMDS) experiment to reveal and explore coherent coupling between excitons localised to semiconductor quantum wells (QWs) separated by a 6 nm barrier, as depicted in Fig. 1. The different widths of the two QWs lead to different transition energies and the possibility of downhill energy and/or charge transfer between wells [27,28,29]. This type of system has been explored extensively for potential device applications [30] and as a tunable template to explore fundamental energy transfer processes [15,31]. When the barrier between wells is low and/or narrow, substantial coupling between the wells leads to hybridised wavefunctions and a significant role for coherent quantum effects in energy and charge transfer. For high, wide barriers, where the electron and hole wavefunctions are localised to single QWs separated by large distances, there is no coupling between excitons. In the intermediate regime, where excitons are well-localised to a single well but close enough that dipole interactions can induce coupling, the role and nature of quantum coupling between wells is less clear. We utilise the PS-CMDS technique described here to provide insight into these fundamental coherent interactions.
Materials and methods
The asymmetric double quantum well sample used in this study consists of two GaAs QWs 5.7 nm and 8 nm wide separated by a 6 nm wide Al0.35Ga0.65As barrier, as shown in Fig. 1. This sample was grown by Metal-Organic Chemical Vapour Deposition (MOCVD) and throughout the experiments was cooled to 20 K in a closed-cycle circulating cryostat. In order to determine the extent of localisation, the wavefunctions of both electrons and holes were calculated by solving the one-dimensional Schrödinger equation for the relevant potential profile [15]. The calculated wavefunctions are shown in Fig. 1(c). Based on these wavefunctions we determine the probability of finding electrons and holes in each well by splitting the wavefunctions at the centre of the barrier and integrating the square of the wavefunction on either side. These probabilities are shown in Table 1 and indicate that each of the wavefunctions is well-localised to one of the QWs. The experiments reported here utilise a CMDS apparatus based on a pulse shaper that is used to delay and compress each of the beams and independently shape their spectral amplitudes. This approach to performing CMDS with a pulse shaper was pioneered by Nelson et al. [17] and has the advantage of being intrinsically phase-stable since all pulses are incident on the same optics. We extend this approach to facilitate spectral shaping and the selective excitation of coherence pathways. The precise and known phase relation between the different spectral components, inherent in the initial femtosecond pulses, then allows the generation of 2D and 3D spectra that include only the selected pathway/s. We utilised a Titanium:Sapphire oscillator to produce transform-limited ∼45 fs pulses centred at 785 nm (as confirmed by FROG and X-FROG) at a repetition rate of 97 MHz. The CMDS experimental apparatus utilised two spatial light modulators (Boulder Nonlinear 512 nematic SLMs) in an arrangement similar to Turner et al. [17]. The first SLM is used as a Fourier beam shaper to split the incident beam into four beams in a boxcars geometry (three for exciting the third-order polarization in the sample and one local oscillator (LO) which overlaps with the signal for heterodyne detection). These are relayed through a 4F imaging system to a pulse shaper based on the second SLM. Each beam is spectrally dispersed horizontally and separated from the other beams vertically on the SLM. A spectral phase is applied to each beam independently to compensate for any chirp and apply the specified delay (a linear phase gradient in frequency corresponds to a shift in the time domain). With the signal detected in the direction given by −k₁ + k₂ + k₃, where kᵢ is the wavevector of pulse i, the delay between pulse-1 and -2 is labelled t₁, the delay between pulse-2 and -3 is labelled t₂ and the time between the third pulse and the signal is labelled t₃.
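The correspondence between a linear spectral phase gradient and a time-domain shift can be checked numerically. The fragment below is a minimal NumPy illustration; the pulse duration, grid and 300 fs delay are illustrative values, not the experimental parameters.

import numpy as np

n = 4096
dt = 1e-15                                  # 1 fs sampling
t = (np.arange(n) - n // 2) * dt
pulse = np.exp(-(t / 20e-15) ** 2)          # Gaussian envelope, tens of fs wide

freq = np.fft.fftfreq(n, d=dt)              # Hz, ordered to match np.fft.fft
spectrum = np.fft.fft(pulse)

tau = 300e-15                               # desired delay
delayed = np.fft.ifft(spectrum * np.exp(-2j * np.pi * freq * tau))

# the delayed envelope peaks ~300 fs after the original one
print(t[np.argmax(np.abs(pulse))], t[np.argmax(np.abs(delayed))])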
In addition to the temporal pulse shaping, a vertical grating is applied to the SLM to diffract the beams down. This allows the time-delayed beams to be separated from any replica pulses and picked off from the incident beams. Varying the depth of the vertical grating also facilitates amplitude control of each beam [32]. This spectrally resolved amplitude control then enables spectral shaping.
The delayed beams are then imaged to the sample, where they overlap and excite a third-order polarization that radiates in momentum-conserving directions. At the sample position each of the three excitation beams has an average power of ≤ 2.8 mW and is focussed to a 150 µm diameter spot. The incident photon density is 6.7 × 10¹¹ cm⁻² per pulse, which will lead to a coherent response primarily in the χ(3) regime [31]. The four-wave mixing signal detected is collinear with the local oscillator and focussed into a spectrometer, where spectral interferometry allows the amplitude and phase of the signal to be determined. An eight-step phase cycling procedure is used to minimise noise and scatter from the excitation beams and maximise the signal. Further details of the experimental configuration and operation can be found in the Supplemental Material.
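Spectral interferograms of this kind are typically inverted by Fourier-transform spectral interferometry. The outline below sketches the standard procedure in Python; the interface, the sideband mask and the flat-LO-phase assumption are our own illustrative simplifications rather than a description of the authors' analysis code.

import numpy as np

def ftsi(interferogram, lo_spectrum, sideband_mask):
    # Transform the frequency-domain interferogram to the pseudo-time
    # domain, keep the sideband produced by the signal-LO delay, transform
    # back, and divide out the LO field.  The result is the complex signal
    # field (up to a linear phase from the signal-LO delay), assuming a
    # spectrally flat LO phase.
    pseudo_time = np.fft.ifft(interferogram)
    cross_term = np.fft.fft(np.where(sideband_mask, pseudo_time, 0.0))
    lo_field = np.sqrt(np.maximum(lo_spectrum, 1e-30))
    return cross_term / lo_field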
To generate a 3D spectrum, the delay t₁ was scanned in 10 fs steps from 0 to 2000 fs for fixed values of the delay t₂, which was varied in 15 fs steps from 0 to 900 fs. For all of the data presented here co-linearly polarized pulses were used and only the absolute value of the rephasing contribution (pulse-1 arriving first) is shown. A rotating frame of reference was used, with the carrier frequency set to 795 nm. This ensures that the phase at 795 nm does not change as the delays are varied and reduces the sampling requirements for complete determination of the electric fields. From the spectral interferograms the amplitude and phase are determined and the data Fourier transformed with respect to t₁ and t₂.
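As a sketch of this last step, the fragment below shows how the 3D spectrum could be assembled from the complex signal sampled on the (t₁, t₂) grid described above; the array shapes follow the stated scan, while the variable names and the empty placeholder array are our own.

import numpy as np

n1, n2, n3 = 201, 61, 512           # t1: 0-2000 fs / 10 fs; t2: 0-900 fs / 15 fs
signal = np.zeros((n1, n2, n3), dtype=complex)   # S(t1, t2, w3), from experiment

# Fourier transform along t1 and t2 gives S(w1, w2, w3); the third axis is
# already a frequency axis because the signal is spectrally resolved.
spectrum_3d = np.fft.fftshift(np.fft.fftn(signal, axes=(0, 1)), axes=(0, 1))
w1 = np.fft.fftshift(np.fft.fftfreq(n1, d=10e-15))   # Hz, in the rotating frame
w2 = np.fft.fftshift(np.fft.fftfreq(n2, d=15e-15))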
In the coherence pathway specific experiment, spectral amplitude shaping was used to tailor the excitation spectrum of the first two pulses so that they were centred on different transitions with very little spectral overlap. The spectral amplitude masks used are shown in Fig. 2(a) and were chosen to give spectral amplitudes that were close to Gaussian, as shown in Fig. 2(b). The flat spectral phase leads to transform-limited pulses and the approximately Gaussian spectra ensure good temporal profiles, as shown in Fig. 2(c). The average powers of these pulses are then reduced to 1.0 mW and 0.3 mW for the first two pulses, respectively. For the third excitation beam and the local oscillator the full laser spectrum was used. This pulse scheme drives only processes that involve coherent superpositions between states excited by the first two pulses (except where pulse-3 is overlapped with the other two pulses).
[Fig. 2 caption fragment: ... and resultant spectra of the two shaped pulses (red and blue) together with the spectrum from the QW sample. (c) Temporal profile of narrowed and un-narrowed spectra as calculated by a Fourier transform of the spectra in (b) assuming a flat spectral phase. Plots are normalized and offset for clarity.]
A major benefit of this experimental setup is flexibility. The pulse sequence utilised here is one of many possible combinations which can be designed to excite any given pathway. The excitation spectra can also be tailored to match the temporal and spectral requirements of the sample. The intrinsic ability to perform a series of such pathway-selective CMDS experiments, alongside broadband CMDS, with no changes to the optical setup allows quantitative comparisons that can facilitate precise and detailed understanding of interactions in complex systems.
Broadband CMDS
Broadband CMDS was performed with the full spectral bandwidth shown in Fig. 2(a) for each pulse. The absolute value spectra for the rephasing contribution only are shown in Fig. 3. The 2D spectrum at t₂ = 0 in Fig. 3(a) shows contributions from pathways involving each of the four excitons indicated by the horizontal and vertical lines and identified in Fig. 1(b). This 2D spectrum is dominated by the NWhh diagonal peak due to a combination of the laser spectrum and the oscillator strength of this transition. Cross-peaks corresponding to heavy-hole–light-hole interactions in the same well are present both above and below the diagonal for each well. Below-diagonal cross-peaks indicating interactions between the NWhh exciton and both wide-well excitons are also present. These cross-peaks may combine contributions from both population (e.g. population relaxation, energy transfer, ground state bleach or excited state absorption) and coherence pathways, making it difficult to ascribe their origin from such a 2D spectrum.
The 3D spectrum separates these contributions, as shown in Fig. 3(b). As discussed above, the presence of cross-peaks that are shifted along ħω₂ in the 3D spectrum by amounts equal to the energy differences between the coupled exciton states is indicative of coherent coupling. In Fig. 3(b) the majority of the signal is at ħω₂ = 0 and therefore due to population pathways. Coherences involving heavy-hole and light-hole excitons localised to the same well are the next strongest contributions, indicative of the expected strong coupling between these. Two further peaks corresponding to coherences involving the NWhh exciton and the two WW excitons can also be resolved. These inter-well cross-peaks are, however, almost three orders of magnitude weaker than the strongest peaks and as a result sit on top of a large noisy background. Previous work has identified this type of coherence signal by examining different types of 2D spectra correlating ω₂ and either ω₁ or ω₃ [33]. In the present case it is not possible to identify these coherence peaks in the projections, and it is only because they can be isolated in the full 3D spectrum that they can be identified. Furthermore, the 3D spectrum allows the peaks to be isolated and 3D peak shapes to be analysed, as described in Section 3.3. In the projections, however, different pathways can contribute and overlap, limiting the ability to fully analyse the peak shapes. This is particularly the case for projections onto the (ω₁, ω₃) plane, where the coherence signal is typically swamped by competing signal pathways, making the significant information that can be extracted from the coherence peak shape in this projection inaccessible.
[Figure caption fragment: The 3D spectrum (c) confirms that these peaks arise entirely from inter-well coherence pathways. The separation of the four peaks in three dimensions and enhanced signal to noise allows further quantitative and peak shape analysis.]
Figure 3(c) shows the inter-well coherence peaks in isolation, but due to the poor signal to noise little analysis beyond identifying their presence is possible. In contrast, the intra-well coherences, which are much stronger and well above the noise, demonstrate peak shapes that are elongated in the diagonal direction, indicating correlated inhomogeneous broadening, as will be discussed in Section 3.3.
Coherence-specific PS-CMDS
To further examine the inter-well coherence peaks, the pathways that lead to these signals were selectively excited using the pulse sequence shown in Fig. 4(b). With the first pulse resonant only with NW excitons and the second resonant only with WW excitons, all population and single-well coherence pathways should be excluded.
A 2D spectrum using this pulse sequence is shown in Fig. 4(a). By comparison with the broadband 2D spectrum (Fig. 3(a)) it can be seen that all single-well processes are suppressed and the only signal is in the region of the inter-well cross-peaks. There are four inter-well peaks present: the two identified in Fig. 3 and two additional peaks at (NWlh, WWhh) and (NWlh, WWlh). The 3D spectrum in Fig. 4(c) shows each of the peaks to be well-resolved and confirms that these four peaks are all due to inter-well coherent superpositions. This is in stark contrast to the 3D spectrum in Fig. 3, where only two coherences are identified from noisy peaks that are not well separated from other signal pathways and background noise. The absence of signal at ħω₂ = 0 further confirms that the coherent superposition pathways are indeed being excited in isolation.
Selectively exciting the coherence pathways has not only identified two additional coherent signals (and hence inter-well coupling between two additional pairs of excitons), but also enhanced the signal to noise. This allows further detailed analysis of the shape, location and magnitude of each peak, as discussed in the following sections.
In these experiments the noise level varies as a function of ω₃ and is proportional to the total signal at each ħω₃ value. This is because the different peaks emitting at the same energy are not separately measured, but are separated by Fourier transforms. Slight variations in the spectral interferograms between different steps of the phase cycling, likely caused by vibrations of the cryostat, are the major noise source. This noise is then amplified at ħω₃ values with strong signal. In the PS-CMDS results there are no strong diagonal peaks to add noise to the cross-peaks, leading to the much cleaner signal observed.
Peak shape analysis
Peak shape analysis of 2D peaks has become one of the strengths of CMDS. Such analysis can be used to separate homogeneous and inhomogeneous broadening, reveal correlated and uncorrelated inhomogeneous broadening or spectral diffusion, and identify contributions from different many-body effects. The ability to separate different quantum pathways, as described here, allows broader application of these analysis tools. Extension to 3D peak shape analysis, as we will show, adds further utility. Figure 5 shows 2D spectra obtained by selecting specific peaks in the 3D spectrum and integrating the peak along ω₂. These spectra correlate ħω₁ and ħω₃ in much the same way as standard 2D spectroscopy, allowing many of the same peak shape analysis tools to be applied.
The spectrum for the NWhh diagonal peak centred at ω₂ = 0 is shown in Fig. 5(a). The analysis of this is exactly as for standard 2D spectra: this peak is elongated along the diagonal, indicative of inhomogeneous broadening, allowing the homogeneous linewidth of 1.7 ± 0.2 meV to be measured in the presence of an inhomogeneous linewidth of 4.7 ± 0.2 meV.
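A minimal version of such a linewidth extraction is sketched below: a Gaussian is fitted to a 1D cut through a peak and its FWHM returned. The function names and starting-guess heuristics are our own, and real peak shapes (which mix homogeneous and inhomogeneous contributions) would warrant a more careful lineshape model.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, centre, sigma):
    return amp * np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

def fwhm_from_cut(energy, profile):
    # Fit a Gaussian to a 1D cut through a peak; for a diagonally elongated
    # peak the anti-diagonal cut estimates the homogeneous width and the
    # diagonal cut the inhomogeneous width (a simplified picture).
    p0 = [profile.max(), energy[np.argmax(profile)], np.ptp(energy) / 8.0]
    (amp, centre, sigma), _ = curve_fit(gaussian, energy, profile, p0=p0)
    return centre, 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)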
In Fig. 5(b) the (NWlh, NWhh) intra-well coherence peak is isolated and similarly broadened along the diagonal. Simulations presented previously for such coherence peaks have demonstrated that correlated broadening will result in peaks elongated along the diagonal, while uncorrelated broadening results in peaks with no diagonal elongation [15]. The major source of broadening in these wells is local fluctuations in the width of the wells, and since hh and lh excitons in the same well will experience the same fluctuations, the broadening is expected to be correlated [14], as indicated by the diagonal peak shape. Closer inspection of this peak reveals that the major axis of the ellipse is not perfectly along the diagonal direction, but slightly tilted towards the horizontal. This suggests that the inhomogeneous broadening is greater along ħω₁, corresponding to the NWlh exciton, than along ħω₃, corresponding to the NWhh exciton. This is consistent with the inhomogeneous linewidths measured for the two diagonal peaks, and consistent with the origin of inhomogeneous broadening in this MOCVD-grown sample being fluctuations in the well width, which will affect light-holes more than heavy-holes, as can be seen in Fig. 1(c). This differing dependence on well width is well-known [34,35], but the 3D spectrum and 2D projections provide a clearer separation of the different effects than other techniques. Specifically, the imperfect correlation of the inhomogeneous broadening due to the different dependence on well width for the heavy-hole and light-hole excitons is clear and immediately apparent.
In contrast, the inter-well cross-peaks from the PS-CMDS experiment represented in Fig. 5(c)-5(f) show no apparent elongation along the diagonal, indicating uncorrelated inhomogeneous broadening. The fluctuations in well width that are responsible for the majority of the inhomogeneous broadening are not expected to be correlated across the different wells, and so for excitons localised in different wells uncorrelated broadening is expected. This observation confirms that these excitons are indeed localised to different wells and that the coherent coupling between them is not due to wavefunction hybridisation and spatial overlap.
Further analysis of the 3D peak shape and of projections onto each 2D plane and 1D axis can reveal further details, some of which can be identified in Fig. 6. These show the complete 1D, 2D and 3D peak shapes for the (NWhh, WWhh) inter-well coherence peak and the (NWlh, NWhh) intra-well coherence peak, from the pathway-selective and broadband CMDS data, respectively. The 2D (ii)-(iv) and 1D (v)-(vii) peak shapes are obtained by integrating windowed 3D peaks (i) in one or two of the frequency dimensions. Each of the 1D peaks is fit well by a Gaussian function, plotted as the solid blue line in Fig. 6(v)-(vii). The details of these fits, including the centre, full width at half maximum (FWHM) and amplitude for each peak, are compiled in Table 2.
Table 2. Tabulated data taken from peak shape fits and peak heights. Uncertainties in the peak width and centre are estimated by fitting the data using a range of different reasonable selections of data. Amplitude uncertainties are estimated based on the strength of background signal near the peak. Corrected amplitude uncertainties also include a contribution from the uncertainty of the excitation spectra used for the spectral correction.
For population peaks on the diagonal, the ħω₁ and ħω₃ peak widths should be equal. For coherence cross-peaks, the ħω₃ linewidths should match the linewidths of the diagonal peaks at the corresponding emission energy. Similarly, the linewidth along ħω₁ for coherence peaks should match the linewidth of the diagonal peaks at that absorption energy.
The width of peaks in ħω₂ will depend greatly on the nature of the transitions and the broadening mechanisms involved. For example, population peaks would be expected to have widths inversely proportional to the excited state lifetimes. Coherence peaks where inhomogeneous broadening is correlated would be expected to have a width less than or equal to the larger homogeneous linewidth of the states involved, whereas coherence peaks where the inhomogeneous broadening is uncorrelated would be expected to have a width in ħω₂ that is determined by the convolution of the two inhomogeneous distributions of the states involved.
The peak widths in Table 2 match the expected relative values within the measurement error. The ħω₁ and ħω₃ peak widths are roughly equal for all the diagonal population peaks, and coherence peaks have ħω₁ and ħω₃ widths that match the widths of the diagonal peaks for the corresponding absorption or emission energy. The ħω₂ widths also behave roughly as expected. The diagonal peaks in the broadband CMDS experiment all have ħω₂ linewidths that are less than or equal to the respective ħω₁ and ħω₃ widths, as predicted for population peaks. For the intra-well (NWlh, NWhh) coupling peak, the ħω₁, ħω₂ and ħω₃ widths are all comparable, which is expected for correlated inhomogeneous broadening. The broadband CMDS intra-well ħω₂ widths are larger than the homogeneous linewidths of the individual transitions, but all are approximately at the ħω₂ resolution limit based on the scan parameters used. On the other hand, the inter-well coupling peaks in both the broadband CMDS and PS-CMDS spectra have ħω₂ linewidths that are approximately the sum of the ħω₁ and ħω₃ widths, which is consistent with uncorrelated inhomogeneous broadening.
Further analysis of the 2D projections which correlate ħω₂ with ħω₁ and ħω₃ can add further insight into these types of interactions, as detailed in [36]. Similarly, further analysis of the real part of the data and the corresponding 3D peak profiles contains additional information on many-body effects and the interactions between wells [10,36,37]. This detailed analysis is beyond the scope of this manuscript, and the tools for understanding these features in isolated 3D spectra will be the subject of future work.
Quantitative comparisons
In addition to analysis of peak shapes and quantitative analyses of peak widths and locations, comparisons of peak amplitudes can provide important details. One of the significant advantages of our approach is that there is no change to the experimental setup between CMDS and PS-CMDS experiments, and they can be conducted in immediate succession. This allows quantitative comparisons of signal strengths and hence of the relative contributions of the different signal pathways. With these details it should be possible to determine precisely all transition dipole moments and the coupling strengths between each of the spatially separated excitons.
One factor that needs to be taken into account is that each of the transitions is excited by a different spectral intensity, which will vary in the different configurations. The simplest approach to take this into account is to scale the measured signal by the spectral amplitude of each pulse at the energy of each interaction. For example, the centre of the (NWhh, WWhh) coherence peak will be scaled by the spectral amplitudes of the first pulse at the NWhh energy, the second pulse at the WWhh energy, the third pulse at the NWhh energy and the local oscillator at the WWhh energy. These corrections have been made for each of the diagonal peaks in the broadband CMDS experiments and the inter-well coherence peaks in both the broadband and PS-CMDS experiments, with the resultant values shown in the final column of Table 2. For the two inter-well coherence peaks that are present in both experiments the corrected amplitudes agree within the experimental uncertainties, supporting the validity of this approach.
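A sketch of this correction is given below; the interface is our own illustrative construction, with the pulse spectra represented as callables (e.g. interpolations of the measured spectra) evaluated at the energies of the four field interactions.

import numpy as np

def corrected_amplitude(raw_amp, pulse_spectra, energies):
    # pulse_spectra: spectral-amplitude functions for pulse 1, pulse 2,
    # pulse 3 and the local oscillator; energies: the energy at which each
    # of the four field interactions occurs.  For the (NWhh, WWhh) coherence
    # peak this would be (E_NWhh, E_WWhh, E_NWhh, E_WWhh).
    scale = np.prod([spec(e) for spec, e in zip(pulse_spectra, energies)])
    return raw_amp / scale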
Finally, we note that the absolute (uncorrected) strength of the weakest peak in the PS-CMDS spectrum is nearly four orders of magnitude below the strongest peak in the broadband CMDS experiment. This dynamic range and an ability to quantitatively compare different signal pathways over this range will greatly enhance the versatility and applicability of the technique and enable determination of the dipole moments and coupling strengths. This represents an important step towards quantum state and process tomography on these systems, extending recent demonstrations of quantum state and process tomography in simple systems [38,16,39].
Discussion
Recent work by Nardin et al. identified coupling between excitons predominantly localised to different InGaAs QWs and identified that the coupling was mediated by many-body effects [31]. In that case they were unable to resolve the coherent superpositions ('zero-quantum coherence') that are explored here, but rather used the two-quantum coherence signal to identify and analyse the coherent interactions. These complementary approaches, which both identify coherent coupling between excitons, provide access to different details that help to understand the coherent interactions between the spatially separated excitons. One particular advantage of the approach described here, however, is the potential to identify very weak coupling. Indeed, the system studied here consists of excitons that are very weakly coupled and spatially very well-separated.
In the present experiments co-linearly polarised pulses were used, meaning all resonant transitions were excited by each pulse. To gain an even deeper understanding of the mechanisms responsible for the coherent coupling, experiments with different combinations of circularly polarised pulses will be able to identify selection rules for the coupling and the role of angular momentum in determining the coupling strengths. In the ADQW system studied here it is possible to sufficiently narrow the pulses to selectively excite the different transitions while maintaining sufficiently short pulse durations to provide the temporal resolution required. This may not be the case in all systems. For example, where the spectral separation between states is small, it becomes difficult to completely isolate a given pathway. It does, however, remain possible to significantly enhance the pathway of interest relative to competing pathways. Hence, it becomes a balance between maintaining sufficiently short pulses to access the relevant dynamics and selectively enhancing the specific pathway of interest. However, even where transitions are separated by as little as a few meV some advantage can still be gained by utilising this PS-CMDS approach.
Conclusions
We have devised a pathway-specific CMDS experiment that combines many of the benefits of CMDS with an ability to selectively excite specific quantum pathways. We utilise these capabilities to unambiguously reveal coherent coupling between excitons localised to quantum wells separated by 6 nm. With our experimental approach we are able to achieve a dynamic range of 4 orders of magnitude in amplitude, which corresponds to 8 orders of magnitude in intensity. With this dynamic range we are able to identify coherent superpositions of spatially separated excitons, some of which have not previously been seen. Furthermore, because we are able to isolate these coherence peaks, we are able to perform peak shape analysis and quantitative comparisons that are not possible with the equivalent data from broadband CMDS. In analysing the peak shapes we identify several new tools to help understand the interactions between different electronic states.
This ability to isolate and analyse coherences, and indeed any specific signal pathway, can provide significant insight into the interactions and dynamics in a range of complex systems. In photosynthetic light harvesting complexes, for example, this type of approach has the potential to resolve important questions regarding the nature and role of quantum effects in efficient energy transfer.
"year": 2014,
"sha1": "a8c1632dbd5d67a1df99ab2c71adaa12961b9c24",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.22.006719",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "ba3e83b51da33e9712ce2a7aee87200f562ec1bc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
THE EFFECT OF ULIN (Eusideroxylon zwageri) STEM BARK EXTRACT ON THE GROWTH OF Candida albicans ON ACRYLIC RESIN DENTURE PLATES
Background: Candida albicans is the main microorganism that causes denture stomatitis; thus, soaking dentures in a cleansing solution is needed to protect them from Candida albicans contamination. Chlorhexidine gluconate 0.2% is one of the available denture cleansers, but it induces side effects with prolonged use. An alternative ingredient that can be used as a denture cleanser is ulin stem bark extract. Objective: To determine the effect of ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration on the growth of Candida albicans on acrylic resin denture plates. Method: A true experimental, post-test only with control group design was employed with 7 treatment groups, consisting of ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration, 0.2% Chlorhexidine gluconate, and aquadest, with a total of 28 samples. Acrylic resin samples that had been exposed to Candida albicans were soaked in the respective treatment for 15 minutes. Results: The average numbers of Candida albicans colonies on acrylic resin denture plates after soaking in ulin bark extract at 20%, 40%, 60%, 80% and 100% concentration, 0.2% Chlorhexidine gluconate, and aquadest were 29.5 CFU/ml, 13.0 CFU/ml, 0 CFU/ml, 0 CFU/ml, 0 CFU/ml, 0 CFU/ml, and 155 CFU/ml, respectively. Based on the Mann-Whitney test, there was no significant difference when ulin stem bark extract at 60% concentration was compared to 0.2% Chlorhexidine gluconate. Conclusion: Ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration has been proven to reduce Candida albicans colonies on acrylic resin denture plates, and the 60% concentration is equivalent to 0.2% Chlorhexidine gluconate.
INTRODUCTION
A common dental and oral health problem that often occurs in Indonesian society is tooth loss. Riskesdas data from 2018 showed that the prevalence of tooth loss was 8.4% at the age of 15-24 years and increased to 30.6% above the age of 65 years. Tooth loss may lead to several functional disorders, including over-eruption, decreased aesthetic and masticatory function, tooth rotation, and tooth migration. The use of a denture is a solution to avoid the impact of tooth loss. 1 The most common material used for denture plates is acrylic resin. Around 95% of denture plates are made of heat-cured acrylic resin. The advantages of this material include good aesthetics, light weight, easy repairability, easy manufacturing, and low cost. On the other hand, the disadvantages of this material include proneness to fracture, poor thermal conductivity, susceptibility to abrasion during cleaning or usage, and a capacity to absorb liquid. Therefore, when acrylic resin is placed in the oral environment, it absorbs saliva, which covers the denture surface with protein-rich saliva so that pellicles are formed. The pellicles promote the colonization of microorganisms, which gradually grow and cause an increase in the attachment of microorganisms to dentures; one of them is Candida albicans. 1,2,3 Candida albicans is a species of fungus found in the oral cavity. In fact, it is normal flora of the mouth. It can transform into an opportunistic pathogen when the environment is suitable, resulting in disturbances. Dirty dentures will increase the number of Candida albicans colonies, which cause inflammation of the oral mucosa, known as denture stomatitis. 4 Candida albicans is the main microorganism that causes denture stomatitis. 1 Soaking dentures in a denture cleanser at night is a means of preventing denture stomatitis; hence it has an important role in reducing the number of Candida albicans. 5,6 Denture cleansers usually have a chemical base, and one of them is 0.2% Chlorhexidine gluconate. It can be used by soaking the denture in the solution for 15 minutes. The impact of long-term use of 0.2% Chlorhexidine gluconate involves tooth discoloration, a relatively high price, and decolorization of the denture plate; therefore, alternative ingredients for denture cleansers are needed. There are many types of traditional plants in Indonesia that can be used as alternative ingredients for denture cleansers, and many researchers have begun to explore the use of these ingredients as disinfecting agents. 5,7,8 Based on the results of phytochemical tests, ulin (Eusideroxylon zwageri) stem bark extract contains large amounts of tannins, flavonoids, and phenols, and moderate amounts of alkaloids, saponins, and terpenoids. Several studies showed that phenols, flavonoids, saponins, terpenoids, alkaloids, and tannins had an antifungal effect and might inhibit the growth of Candida albicans. 10,11 To date, there has been no study on natural ingredients that uses ulin bark to inhibit the growth of Candida albicans; therefore, this study of the effect of ulin (Eusideroxylon zwageri) stem bark extract on the growth of Candida albicans on acrylic resin denture plates was conducted.
MATERIALS AND METHODS
This study used a true experimental, post-test only with control group design. The samples used in this study were heat-cured acrylic resin plates with a size of 10 mm x 10 mm x 2 mm. The samples were obtained by a simple random sampling technique and allocated to 7 treatment groups, namely ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration, 0.2% Chlorhexidine gluconate as positive control, and aquadest as negative control, with a total of 28 samples.
Ulin stem bark extract was made using the maceration method. Two kilograms of ulin stem bark were cleaned and dried. First, the ulin stem bark was cut into small pieces and then processed into powder. After that, the powder was filtered through a screen mesh. The ulin stem bark powder was then immersed in 96% ethanol solvent for 1 x 24 hours and stirred with the help of a shaker. Furthermore, the extract was filtered, and the filtrate was evaporated using a rotary evaporator at a temperature of 59-60℃ until a concentrated extract was obtained; it was then heated on a water bath so that the entire solvent evaporated, producing 14 grams of brownish liquid residue at 100% concentration. A few drops of potassium dichromate (K2Cr2O7) were added to a sample of the ulin stem bark ethanol extract for the free-ethanol test. If there was no color change, the ulin stem bark extract was declared free of ethanol.
Ulin stem bark extract with a concentration of 100% was diluted to various concentrations of 20%, 40%, 60% and 80% with the following formula (a worked numerical example is given at the end of this section):
V1 x M1 = V2 x M2
V1 = volume of extract to be diluted (ml)
M1 = concentration of ulin stem bark extract (%)
V2 = desired volume of solution (water and extract) (ml)
M2 = concentration of ulin stem bark extract to be made (%)
Acrylic resin plate samples were immersed in sterile distilled water for 48 hours to reduce the residual monomers, and sterilization was then performed. Furthermore, the acrylic resin plates were soaked in sterile saliva for 1 hour to facilitate the attachment of Candida albicans. Next, the acrylic resin plates were rinsed twice with PBS (Phosphate Buffered Saline) solution. Then, the acrylic resin plates were inserted into test tubes containing Candida albicans suspensions and incubated for 24 hours at 37℃. The acrylic resin plates that had been exposed to Candida albicans were inserted into test tubes containing a solution of ulin (Eusideroxylon zwageri) stem bark extract at 20%, 40%, 60%, 80% or 100% concentration, the negative control (aquadest), or the positive control (0.2% Chlorhexidine gluconate) for 15 minutes. Next, the acrylic resin plates were rinsed twice with PBS solution. Then the acrylic resin plates were inserted into BHIB (Brain Heart Infusion Broth) and vibrated using a vortex mixer for 30 seconds. A total of 0.1 ml of BHIB was taken and dropped onto SDA (Sabouraud Dextrose Agar). The media were subsequently incubated for 48 hours at 37℃. The next step was counting the number of Candida albicans colonies.
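As a worked example of the dilution formula above, the fragment below computes the volume of 100% stock extract needed for each test concentration; the 10 ml target volume is an illustrative assumption, not a value stated in the study.

stock_concentration = 100.0      # M1 (%)
target_volume_ml = 10.0          # V2 (ml), assumed for illustration

for target in (20.0, 40.0, 60.0, 80.0):                           # M2 (%)
    extract_ml = target_volume_ml * target / stock_concentration  # V1
    diluent_ml = target_volume_ml - extract_ml
    print(f"{target:5.1f}%: {extract_ml:.1f} ml extract + {diluent_ml:.1f} ml diluent")

For example, 10 ml of 20% solution requires 2 ml of extract made up with 8 ml of diluent.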
RESULTS
The results for the effect of ulin stem bark extract on Candida albicans on acrylic resin plates were obtained by counting the number of Candida albicans colonies after the soaking process. The average number of Candida albicans colonies found on acrylic resin denture plates after the soaking process in ulin stem bark extract, 0.2% Chlorhexidine gluconate, and aquadest can be seen in Table 1, which shows the average results for the various treatment groups. The numbers of Candida albicans colonies on acrylic resin denture plates after soaking in ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration, 0.2% Chlorhexidine gluconate, and aquadest had averages of 29.5 CFU/ml, 13.0 CFU/ml, 0 CFU/ml, 0 CFU/ml, 0 CFU/ml, 0 CFU/ml, and 155 CFU/ml, respectively. The data obtained from each treatment were tabulated and a normality test was performed using the Shapiro-Wilk test. The normality test gave p < 0.05, so it can be concluded that the data were not normally distributed; Levene's homogeneity test also gave a significance value of p < 0.05, revealing that the data were not homogeneous. The data were therefore analyzed using the non-parametric Kruskal-Wallis test.
Based on the non-parametric Kruskal-Wallis test, p = 0.000 (p < 0.05), which shows differences in the number of Candida albicans colonies across the treatments given; the analysis was therefore continued with the Mann-Whitney test to find out which groups differed. Table 2 presents the results of the Mann-Whitney test for each treatment group. Ulin stem bark extract at 20% concentration had p < 0.05 when compared to ulin stem bark extract at 40%, 60%, 80% and 100% concentration, the positive control of 0.2% Chlorhexidine gluconate, and the negative control of aquadest; thus, significant differences were found. Ulin stem bark extract at 40% concentration had p < 0.05 when compared to ulin stem bark extract at 60%, 80% and 100% concentration, the positive control of 0.2% Chlorhexidine gluconate, and the negative control of aquadest; thus, significant differences were found. Ulin stem bark extract at 60% concentration had p = 1.000 when compared to ulin stem bark extract at 80% and 100% concentration and the positive control of 0.2% Chlorhexidine gluconate; thus, no significant difference was found, whereas a significant difference was found when compared to the negative control of aquadest, with p < 0.05. Ulin stem bark extract at 80% concentration had p = 1.000 when compared to ulin stem bark extract at 100% concentration and the positive control of 0.2% Chlorhexidine gluconate; thus, no significant difference was found, whereas a significant difference was found when compared to the negative control of aquadest, with p < 0.05. Ulin stem bark extract at 100% concentration had p = 1.000 when compared to the positive control of 0.2% Chlorhexidine gluconate; thus, no significant difference was found, whereas a significant difference was found when compared to the negative control of aquadest, with p < 0.05.
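The statistical pipeline described above can be reproduced with standard tools; the fragment below uses scipy.stats with made-up per-sample colony counts (the study reports only group averages), restricted to three groups for brevity.

from scipy import stats

groups = {
    "extract 20%": [28, 31, 30, 29],      # illustrative CFU/ml values
    "extract 40%": [12, 14, 13, 13],
    "aquadest":    [150, 158, 153, 159],
}

# normality (Shapiro-Wilk) and homogeneity (Levene) checks
for name, values in groups.items():
    print(name, "Shapiro-Wilk p =", stats.shapiro(values).pvalue)
print("Levene p =", stats.levene(*groups.values()).pvalue)

# omnibus Kruskal-Wallis test, then a pairwise Mann-Whitney follow-up
print("Kruskal-Wallis p =", stats.kruskal(*groups.values()).pvalue)
print("20% vs 40% Mann-Whitney p =",
      stats.mannwhitneyu(groups["extract 20%"], groups["extract 40%"]).pvalue)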
Based on this study, and on the Mann-Whitney test showing no significant difference between ulin stem bark extract at 60% concentration and the positive control of 0.2% Chlorhexidine gluconate, ulin stem bark extract at 60% concentration can be considered equivalent to 0.2% Chlorhexidine gluconate.
DISCUSSION
Based on the results of the study, it can be seen that acrylic resin denture plates soaked in ulin stem bark extract at 20%, 40%, 60%, 80% and 100% concentration showed a reduced number of Candida albicans colonies. Based on the statistical test, there was no significant difference when ulin stem bark extract at 60% concentration was compared to 0.2% Chlorhexidine gluconate. Therefore, ulin stem bark extract at 60% concentration is equivalent to 0.2% Chlorhexidine gluconate.
The number of Candida albicans colonies found on acrylic resin denture plates after the soaking process in ulin stem bark extract at a concentration of 20% had an average value of 29.5 CFU/ml; at a concentration of 40%, an average value of 13.0 CFU/ml; and at concentrations of 60%, 80% and 100%, an average value of 0 CFU/ml. This shows that the higher the concentration of ulin stem bark extract, the lower the number of Candida albicans colonies on the acrylic resin denture plate. This is in accordance with the research of Hertanti et al (2015) and Ornay et al (2017), which found that inhibitory and killing power increase with concentration because the amount of bioactive components in the extract increases. 12,13 Ulin stem bark extract is proven to reduce the number of Candida albicans colonies on acrylic resin denture plates, and ulin stem bark extract at 60% concentration is equivalent to 0.2% Chlorhexidine gluconate. This is due to the antifungal content found in the ulin stem bark. Based on the research conducted by Wila et al (2018), the secondary metabolite compounds contained in ulin stem bark are tannins, phenols, flavonoids, saponins, alkaloids, and terpenoids. The secondary metabolite compounds present in ulin stem bark extract can act as antifungals. 9,10,11 Tannins and phenols work as antifungals by inhibiting the synthesis of chitin for the formation of cell walls and by damaging cell membranes in fungi. 14,15 Tannins damage cell membranes by inhibiting the biosynthesis of ergosterol, whereas phenols can cause fungal cells to lyse due to denaturation of the protein bonds present in cell membranes; phenols can also enter the cell nucleus so that the fungus cannot develop. 16,17 Flavonoids work as antifungals by interfering with the permeability of the cell membrane and wall. The hydroxyl groups present in flavonoids cause fungal cells to lyse due to changes in organic components and nutrient transport. 18 Saponins can reduce the surface tension of sterol membranes and cell walls because of their polar surfactant nature, and cause disruption of fungal membrane permeability, which results in swelling and rupture of the cell because the uptake of materials or substances needed by the fungus is disturbed. 19 Terpenoids have toxic characteristics and can inhibit fungal growth by damaging cell membranes. 15,20 Alkaloids can cause damage and death in fungi due to strong bonds with ergosterol, which cause leakage of cell membranes. 21 This study used 0.2% Chlorhexidine gluconate as the positive control. Chlorhexidine gluconate at 0.2% concentration is effective against gram-positive bacteria, gram-negative bacteria, viruses, and fungi. It takes 15 minutes to eliminate Candida albicans effectively. 22,23 Chlorhexidine gluconate at 0.2% concentration works as an antifungal by disrupting cell membranes and triggering cytoplasmic precipitation. 24 Chlorhexidine gluconate at 0.2% concentration has a high degree of antimicrobial activity, which causes changes in the integrity of the fungal cell wall when bound to a fungal cell membrane component, so that the function of the fungal cell membrane is lost.
The chlorophenol ring in the structure of 0.2% Chlorhexidine gluconate is lipophilic: it is absorbed into the cell wall, is readily taken up by the cell membrane, and causes leakage of intracellular components. 25 Ulin stem bark extract was studied as an alternative to chemical denture cleansers. The raw material, ulin, is a plant typical of South Kalimantan and has potential as an herbal denture cleanser because it contains antifungal compounds that reduce the number of Candida albicans colonies that cause denture stomatitis; it may thereby also reduce the side effects of long-term use of chemical denture cleansers such as 0.2% Chlorhexidine gluconate.
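As a rough illustration of how the group comparisons described above can be run, the hedged Python sketch below applies non-parametric tests to colony counts per soaking solution; the CFU values are invented placeholders chosen to match the reported means, not the study's raw data.

```python
# Illustrative analysis of denture-plate colony counts by soaking solution.
# The CFU values below are invented placeholders, NOT the study's raw data.
from scipy import stats

cfu = {
    "20% extract": [28, 31, 30, 29],
    "40% extract": [12, 14, 13, 13],
    "60% extract": [0, 0, 0, 0],
    "0.2% chlorhexidine": [0, 0, 0, 0],
}

for group, counts in cfu.items():
    print(f"{group}: mean = {sum(counts) / len(counts):.1f} CFU/ml")

# Kruskal-Wallis test across all groups (non-parametric one-way comparison)
h, p = stats.kruskal(*cfu.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
```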
Based on the results, it can be concluded that ulin stem bark extract at 20%, 40%, 60%, 80%, and 100% concentration reduces the number of Candida albicans colonies on acrylic resin denture plates, and that ulin stem bark extract at 60% concentration is equivalent to 0.2% Chlorhexidine gluconate. | 2021-08-02T00:05:52.871Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "890a8b73e209afe04ed50b3137c23cd6b3a24ac8",
"oa_license": "CCBY",
"oa_url": "https://ppjp.ulm.ac.id/journal/index.php/dentino/article/download/10637/7050",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b7fd30dc25a6a322d3db275b390e529b078dcdc6",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
216178716 | pes2o/s2orc | v3-fos-license | Bioinformatics analyses and biological function of lncRNA ZFPM2-AS1 and ZFPM2 gene in hepatocellular carcinoma
Hepatocellular carcinoma (HCC) remains one of the most lethal malignant tumors worldwide; however, the etiology of HCC still remains poorly understood. In the present study, cancer-omics databases, including The Cancer Genome Atlas, GTEx and Gene Expression Omnibus, were systematically analyzed in order to investigate the role of the long non-coding RNA (lncRNA) zinc finger protein, FOG family member 2-antisense 1 (ZFPM2-AS1) and the zinc finger protein, FOG family member 2 (ZFPM2) gene in the occurrence and progression of HCC. It was identified that the expression levels of lncRNA ZFPM2-AS1 were significantly increased in HCC tissues, whereas expression levels of the ZFPM2 gene were significantly decreased in HCC tissues compared with normal liver tissues. Higher expression levels of ZFPM2-AS1 were significantly associated with a less favorable prognosis of HCC, whereas higher expression levels of the ZFPM2 gene were associated with a more favorable prognosis of HCC. Genetic alterations in the ZFPM2 gene may contribute to a worse prognosis of HCC. Validation with the GSE14520 dataset also demonstrated that ZFPM2 gene expression levels were significantly decreased in HCC tissues (P<0.001). The receiver operating characteristic (ROC) analysis of the ZFPM2 gene indicated high accuracy of this gene in distinguishing between HCC tissues and non-tumor tissues. The areas under the ROC curves were >0.8. Using integrated strategies, the present study demonstrated that lncRNA ZFPM2-AS1 and the ZFPM2 gene may contribute to the occurrence and prognosis of HCC. These findings may provide a novel understanding of the molecular mechanisms underlying the occurrence and prognosis of HCC.
Introduction
Hepatocellular carcinoma (HCC) is one of the most lethal malignant tumors worldwide, with an incidence rate of 40.0 in men and 15.3 in women per 100,000 population in China (1,2). According to the Global Burden of Disease Study 2017, ~820,000 individuals succumbed to HCC worldwide (3). Among them, the number of HCC-associated mortalities in China (~422,000) accounted for 51.5% of global HCC-associated mortalities (4).
Considering the promising role of the lncRNA ZFPM2-AS1 and the ZFPM2 gene in the carcinogenesis and prognosis of several types of cancer, it was hypothesized that lncRNA ZFPM2-AS1 and the ZFPM2 gene also contribute to the development and prognosis of HCC. In the present study, a series of bioinformatic and clinical analyses were performed to investigate the potential functions of lncRNA ZFPM2-AS1 and the ZFPM2 gene in the process of carcinogenesis and progression of HCC.
Materials and methods
Expression of ZFPM2-AS1 and ZFPM2 gene in The Cancer Genome Atlas (TCGA) and GTEx tissues. The comparison of the expression levels of the ZFPM2-AS1 and ZFPM2 genes in HCC and non-tumor tissues was performed using GEPIA version 2.0 (37), during which TCGA (https://portal.gdc.cancer.gov) HCC samples were compared with GTEx (https://www.gtexportal.org/home) samples, which were used as controls. The associations of expression levels of ZFPM2-AS1 and the ZFPM2 gene with the prognosis of HCC and other digestive system tumors were evaluated using the Kaplan-Meier plotter (http://kmplot.com/analysis), which reports overall survival, disease-free survival, relapse-free survival and progression-free survival (38), and GEPIA.
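For readers wanting to reproduce this type of survival comparison outside the web tools, a minimal Python sketch using the lifelines package is given below; the expression values and survival times are simulated placeholders, and the median split mirrors a typical KM plotter grouping rather than the tool's exact internals.

```python
# Sketch of a Kaplan-Meier comparison: patients split by median
# ZFPM2-AS1 expression, overall survival compared via log-rank test.
# All data below are simulated, not the TCGA/KM plotter values.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
expr = rng.lognormal(size=100)         # hypothetical expression values
time = rng.exponential(50, size=100)   # hypothetical survival times (months)
event = rng.integers(0, 2, size=100)   # 1 = death observed, 0 = censored

high = expr > np.median(expr)
kmf = KaplanMeierFitter()
kmf.fit(time[high], event[high], label="high ZFPM2-AS1")
print(kmf.median_survival_time_)

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high],
                   event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.3f}")
```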
Validation of expression of ZFPM2-AS1 and ZFPM2 gene in clinical tissues. The present study was approved by the Ethics Committee of the Army Military Medical University (Chongqing, China) and written informed consent was provided by all participants prior to the study start. A total of 53 HCC and paired adjacent normal tissues (>2 cm from tumor tissues) were collected from 53 patients (45 men and 8 women; age range, 30-74 years; median age, 53 years) at the Department of Hepatobiliary Surgery (Chongqing, China) between November 2017 and May 2019, following surgical resection. All diagnoses were blindly confirmed by at least two pathologists at The First Affiliated Hospital of Army Military Medical University, and patients who received radiofrequency ablation, chemoradiotherapy or other treatments prior to surgery were excluded from the present study. Samples were subsequently stored at -80˚C, prior to subsequent experimentation.
Interaction network and functional enrichment analyses.
To investigate the biological functions and pathways of ZFPM2-AS1 and the ZFPM2 gene, gene-gene and protein-protein interaction (PPI) network analysis of the ZFPM2 gene was conducted using the GeneMANIA (http://genemania.org) and Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database version 11.0 (40). Genes associated with ZFPM2 and ZFPM2-AS1 were initially identified using the COXPRESdb database (version 7.3; https://coxpresdb.jp). Subsequently, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and Gene Ontology (GO) analyses of ZFPM2-AS1 and ZFPM2-associated genes were performed using Database for Annotation, Visualization and Integrated Discovery (DAVID) version 6.8 (david.ncifcrf.gov/home.jsp).
Determination of genetic mutation status of the ZFPM2-AS1 lncRNA and ZFPM2 gene. To investigate the underlying mechanisms relevant to the mutation status of ZFPM2-AS1 and the ZFPM2 gene, the cBioPortal database (cbioportal.org) was utilized. Kaplan-Meier survival estimates for the overall survival of patients with HCC, with or without mutations of the ZFPM2 gene, were also analyzed using the log-rank test.
Validation of the GEO dataset. The expression levels of the ZFPM2 gene in HCC tissues and adjacent normal tissues were further validated with the GEO dataset GSE14520 (41). A receiver operating characteristic (ROC) curve, with the area under the curve (AUC) value assessing predictive accuracy and discriminative ability, was drawn to identify the diagnostic significance of the expression level of the ZFPM2 gene.
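The ROC validation step translates directly into a few lines of Python; the sketch below uses simulated expression values (not the actual GSE14520 matrix) and negates the score because ZFPM2 is reported as down-regulated in tumors, so lower expression should predict the tumor class.

```python
# Sketch of ROC/AUC diagnostic validation: tumor (1) vs non-tumor (0)
# samples scored by ZFPM2 expression. Values are illustrative only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
labels = np.r_[np.ones(50), np.zeros(50)]
# ZFPM2 is lower in tumors, so negate expression to make higher
# scores indicate the positive (tumor) class.
expression = np.r_[rng.normal(4.0, 1.0, 50), rng.normal(6.0, 1.0, 50)]
score = -expression

fpr, tpr, thresholds = roc_curve(labels, score)
print(f"AUC = {roc_auc_score(labels, score):.3f}")  # paper reports AUCs > 0.8
```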
Statistical analysis. SPSS version 22.0 (IBM Corp.) and GraphPad Prism version 7.0 (GraphPad Software, Inc.) were used for statistical analyses. P<0.05 was considered to indicate a statistically significant difference. All results are presented as the mean ± standard deviation (unless otherwise shown). One-way ANOVA was used to evaluate differences in ZFPM2-AS1 and ZFPM2 expression across clinical stages of HCC, while Wilcoxon's signed-rank test was used for paired continuous variables. The χ² test was used to evaluate differences in categorical variables. All expression data were log-transformed for differential analysis.
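The paired tumor/adjacent comparison on log-transformed data can be sketched as follows; the 53 value pairs are simulated stand-ins, chosen only to mimic a gene that is lower in tumor tissue.

```python
# Sketch of the paired analysis: Wilcoxon signed-rank test on
# log-transformed expression in tumor vs matched adjacent tissue.
# The 53 value pairs here are simulated placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
adjacent = rng.lognormal(mean=2.0, sigma=0.5, size=53)
tumor = adjacent * rng.lognormal(mean=-0.8, sigma=0.3, size=53)  # lower ZFPM2

stat, p = wilcoxon(np.log2(tumor), np.log2(adjacent))
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.2e}")
```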
Results
Associations between expression levels of lncRNA ZFPM2-AS1 and the ZFPM2 gene and the clinical significance of HCC. First, the associations between the expression levels of ZFPM2-AS1 and the ZFPM2 gene and the clinical characteristics of HCC in TCGA and GTEx samples were analyzed. Table I presents the clinical characteristics of the patients from the TCGA and GTEx databases (only sex was available for GTEx), including sex, age at diagnosis, Child-Pugh score (42), creatinine value, HCC risk factor, family cancer history (data available for 326 samples), neoplasm histological grade (43) (data available for 372 samples) and metastasis status. The expression levels of lncRNA ZFPM2-AS1 were higher in HCC tissues compared with normal liver tissues (Fig. 1A), whereas the expression levels of the ZFPM2 gene were significantly lower in HCC tissues compared with normal liver tissues (Fig. 1C). No significant association between the clinical stage of HCC and either ZFPM2-AS1 (Fig. 1B; P=0.136) or ZFPM2 (Fig. 1D; P=0.935) expression levels was observed. Regarding the survival of patients with HCC, higher expression levels of ZFPM2-AS1 were significantly associated with a less favorable prognosis (Fig. 2A), whereas higher expression levels of the ZFPM2 gene were significantly associated with a better prognosis of HCC (Fig. 3). These bioinformatic results were also verified using clinical samples. The expression levels of lncRNA ZFPM2-AS1 were significantly higher in HCC tissues compared with adjacent normal tissues (Fig. 4B; P<0.001), whereas the expression levels of the ZFPM2 gene were significantly lower in HCC tissues compared with adjacent normal tissues (Fig. 4A; P<0.001).
Gene-gene and PPI networks of the lncRNA ZFPM2-AS1 and ZFPM2 gene. According to the results obtained from COXPRESdb, lncRNA ZFPM2-AS1 was associated with the ZFPM2 gene. Thus, gene-gene and PPI network analyses of the ZFPM2 gene were conducted using the GeneMANIA and STRING tools, which demonstrated that ZFPM2 primarily interacts with GATA factors, including GATA1, GATA3 and GATA4 (Figs. 5 and 6).
Clinical significance of genetic alterations of the lncRNA ZFPM2-AS1 and ZFPM2 gene. Using the cBioPortal database, 9% (93/1,052) of samples were identified as harboring a mutated ZFPM2 gene. Kaplan-Meier survival analysis demonstrated a statistically significant difference in overall survival: patients with HCC harboring ZFPM2 mutations had a less favorable prognosis compared with those without ZFPM2 mutations (P=0.0331; Fig. 7).
KEGG pathway and GO term analyses. The top 200 ZFPM2- and ZFPM2-AS1-associated genes, identified using the COXPRESdb database, are presented in Table SI. KEGG pathway and GO term analyses of the ZFPM2-associated genes were performed using DAVID. The GO term results demonstrated that these genes may be involved in the 'integral component of plasma membrane', 'protein binding' and 'plasma membrane' (Fig. 8).
Validation of ZFPM2 expression profiling in the GSE14520 dataset. As shown in Fig. 9, the expression levels of the ZFPM2 gene in HCC and non-tumor tissues in the GSE14520 dataset were consistent with TCGA data in stages I and II: ZFPM2 gene expression levels were significantly decreased in HCC tissues compared with non-tumor tissues in both stage I and stage II (P<0.001; Fig. 9A and C). The ROC analysis of the ZFPM2 gene demonstrated a high accuracy of ZFPM2 in distinguishing between HCC tissues and non-tumor tissues (AUCs >0.8; Fig. 9B and D).
Discussion
At present, the etiology of HCC remains poorly understood.
In the present study, datasets from the cancer-omics databases TCGA, GTEx and GEO were analyzed in order to confirm the role of lncRNA ZFPM2-AS1 and the ZFPM2 gene in HCC, which are located at the cancer susceptibility locus 8q23 implicated in the carcinogenesis and prognosis of HCC (44). It was observed that the expression levels of lncRNA ZFPM2-AS1 and the ZFPM2 gene were significantly different between HCC tissues and normal liver tissues and that these expression levels were also associated with the prognosis of HCC. Patients with HCC with ZFPM2 gene alterations had a less favorable prognosis compared with those without ZFPM2 gene alterations. Functional enrichment analysis demonstrated that the ZFPM2-associated genes were primarily involved in the 'integral component of plasma membrane', 'protein binding' and 'plasma membrane' terms. To the best of our knowledge, the present study is the first report that aimed to investigate the association between lncRNA ZFPM2-AS1, the ZFPM2 gene and the occurrence and progression of HCC. Both lncRNA ZFPM2-AS1 and the ZFPM2 gene are located in the 8q23 region, an aggregate of cancer susceptibility loci (44)(45)(46)(47)(48)(49). Tomlinson et al (48) first identified rs16892766 on chromosome 8q23.3 as a colorectal cancer susceptibility locus. A previous study identified 41 variants that are associated with venous thromboembolism and mapped to the ZFPM2-AS1 and ZFPM2 gene region using the GWAS catalog (50). In the present study, expression levels of lncRNA ZFPM2-AS1 and the ZFPM2 gene were associated with both the occurrence and prognosis of HCC, and mutations of the ZFPM2 gene were associated with a less favorable prognosis of HCC. These results further confirmed the role of lncRNA ZFPM2-AS1 and the ZFPM2 gene in HCC carcinogenesis. In the present study, gene-gene and PPI analyses revealed that ZFPM2-AS1 and ZFPM2 were primarily co-expressed and interacted with the GATA factors, including GATA1, GATA3 and GATA4. The GATA family, which controls the development of diverse tissues by activating or repressing transcription, widely participates in the carcinogenesis and differentiation of several types of cancer (51,52). Furthermore, studies have shown that aberrant GATA-3 expression contributes to the occurrence of breast, prostate and pancreatic cancer (53)(54)(55)(56)(57)(58). GATA1, GATA4 and GATA6 are also associated with different types of cancer, including colorectal and breast cancer (59,60). The results of the present study demonstrated that ZFPM2 was significantly associated with GATA factors, suggesting its potential role in the development of different types of cancer.
In conclusion, the present study demonstrated that lncRNA ZFPM2-AS1 and the ZFPM2 gene may contribute to the occurrence and progression of HCC. These findings may provide a novel perspective on the underlying molecular mechanisms of HCC and suggest valuable biomarkers and therapeutic targets for patients with HCC. However, further validation with experimental evidence and clinical research is needed to confirm the functions of lncRNA ZFPM2-AS1 and the ZFPM2 gene in HCC carcinogenesis.
Acknowledgements
Not applicable.
Funding
The present study was funded by the Science Foundation for Outstanding Young People of the Army Military Medical University (Chongqing, China; grant no. 20170113).
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Authors' contributions
XM and YL designed the study. YL, XW, LM, SL, ZM and XF performed the statistical analyses. YL, XW and LM drafted the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The present study was approved by the Ethics Committee of the Army Military Medical University (Chongqing, China) and written informed consent was provided by all participants prior to the study start (approval no. 20170307).
Patient consent for publication
Not applicable. | 2020-04-02T09:11:56.696Z | 2020-03-27T00:00:00.000 | {
"year": 2020,
"sha1": "d992b7bd90cdbb346fae653bfe338a6f0cc45e8a",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2020.11485/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59bb90e99eef34671a2e79bacdc4547e90c2a557",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
11282813 | pes2o/s2orc | v3-fos-license | The XRCC1 phosphate-binding pocket binds poly (ADP-ribose) and is required for XRCC1 function
Poly (ADP-ribose) is synthesized at DNA single-strand breaks and can promote the recruitment of the scaffold protein, XRCC1. However, the mechanism and importance of this process has been challenged. To address this issue, we have characterized the mechanism of poly (ADP-ribose) binding by XRCC1 and examined its importance for XRCC1 function. We show that the phosphate-binding pocket in the central BRCT1 domain of XRCC1 is required for selective binding to poly (ADP-ribose) at low levels of ADP-ribosylation, and promotes interaction with cellular PARP1. We also show that the phosphate-binding pocket is required for EGFP-XRCC1 accumulation at DNA damage induced by UVA laser, H2O2, and at sites of sub-nuclear PCNA foci, suggesting that poly (ADP-ribose) promotes XRCC1 recruitment both at single-strand breaks globally across the genome and at sites of DNA replication stress. Finally, we show that the phosphate-binding pocket is required following DNA damage for XRCC1-dependent acceleration of DNA single-strand break repair, DNA base excision repair, and cell survival. These data support the hypothesis that poly (ADP-ribose) synthesis promotes XRCC1 recruitment at DNA damage sites and is important for XRCC1 function.
Single-strand breaks (SSBs) are the commonest lesions arising in cells, resulting both directly from disintegration of deoxyribose and indirectly during the excision repair of DNA base damage [reviewed in (1)]. SSBs usually lack conventional 3′-hydroxyl and 5′-phosphate termini, often possessing modifications such as 3′-phosphate or 5′-hydroxyl termini, or fragments of deoxyribose or topoisomerase. If not repaired rapidly, such termini can block progression of RNA or DNA polymerases, disrupting transcription or replication, respectively. The threat posed by SSBs is indicated by the existence of human genetic diseases associated with neurological dysfunction in which single-strand break repair (SSBR) is attenuated (1).
To date, all known SSBR-defective diseases are associated with defects in end processing, the step of repair during which conventional 3 -hydroxyl and 5 -phosphate termini are restored. One critical component of end processing is XRCC1, a molecular scaffold protein that interacts with and recruits, stabilizes, and/or stimulates end processing enzymes and accelerates SSBR ∼5-fold (2,3). The importance of XRCC1 is illustrated by the hypersensitivity of XRCC1-mutant cells to a broad range of genotoxins and to their elevated frequency of chromosome aberrations, genetic deletions and sister chromatid exchanges (4)(5)(6). Moreover, mice with conditional deletion of Xrcc1 in brain recapitulate many of the pathologies associated with loss of SSBR in humans, including cerebellar defects, ataxia and seizures (7).
A number of observations suggest that XRCC1 recruitment at chromosomal SSBs is promoted by poly (ADP-ribose) (PAR) synthesis. First, XRCC1 interacts directly with both PAR and with the poly (ADP-ribose) polymerases PARP1 and PARP2 (8)(9)(10). Second, small molecule-mediated inhibition of PAR synthesis, or depletion/deletion of PARP1, greatly reduces the accumulation of XRCC1 at sites of H2O2- or UV laser-induced DNA damage (11)(12)(13)(14)(15). Third, mutations that disrupt folding of the PAR-binding BRCT1 domain in XRCC1 reduce or ablate XRCC1 accumulation at DNA damage (9,11,15,16). Finally, depletion of PARG, the enzyme responsible for PAR degradation following SSBR, increases both steady-state cellular levels of PAR and the accumulation and/or persistence of XRCC1 in sub-nuclear foci before and after DNA damage (17).
Despite these observations, however, several recent reports have challenged the importance of PAR binding for XRCC1 function, instead ascribing XRCC1 recruitment to DNA-binding protein partners such as DNA polymerase β (Polβ), polynucleotide kinase/phosphatase (PNKP), and DNA ligase IIIα (Lig3α) (18)(19)(20)(21)(22). One reason this uncertainty remains is a lack of clarity concerning the mechanism of PAR binding by XRCC1. PAR binding was first ascribed to a degenerate motif present at the C-terminus of the central BRCT1 domain in XRCC1, comprised of an alternating series of basic/hydrophobic residues and present in numerous other PAR-binding proteins (9). However, a recent study instead assigned PAR binding to the phosphate-binding pocket present in the BRCT1 domain (16). Not knowing the site of PAR interaction has prevented the generation of point mutations that specifically reduce or ablate PAR binding, and consequently an analysis of their impact on XRCC1 function. Here, we have confirmed the site of PAR binding in XRCC1, enabling us to mutate this site and address directly, for the first time, its importance for XRCC1 cellular function.
Cell lines
The osteosarcoma cell line U2OS (obtained from the Genome Damage and Stability Centre cell repository) and derivatives of the Chinese hamster ovary (CHO) cell line EM9 (4) were maintained as monolayers in modified Eagle's medium (MEM) or Dulbecco's modified Eagle's medium (DMEM), respectively, supplemented with 10% (vol/vol) foetal calf serum, 100 U/ml penicillin, 2 mM glutamine and 100 μg/ml streptomycin. Expression constructs were introduced into the XRCC1-mutant CHO cell line EM9 by GeneJuice transfection (Novagen) and stable cell lines prepared by selection in media containing 1.5 mg/ml G418 (Gibco-Invitrogen) for 10-14 days. The cell line U2OS GFP-XRCC1 was generated by transfection of 1 × 10⁶ U2OS cells with 0.5 μg pEGFP-XRCC1 by nucleofection (Lonza kit V) according to the manufacturer's instructions. Twenty-four hours after nucleofection, cells were selected in media containing 1 mg/ml G418 for 3 weeks and single clones selected based on their level of GFP expression. One clone, denoted U2OS GFP-XRCC1, was selected for further use.
Transfection and fluorescence imaging
U2OS cells or U2OS GFP-XRCC1 cells were seeded onto coverslips and transfected 1 day later with appropriate constructs using FuGENE 6 transfection reagent according to the manufacturer (Promega). Twenty-four hours later, the cells were mock-treated or treated with 10 mM H2O2 for 10 min, incubated at 37°C in drug-free media for 15 min, washed with phosphate-buffered saline (PBS) and then fixed for 10 min in 4% paraformaldehyde in PBS at room temperature. After fixation the cells were washed 2× with PBS, treated with ice-cold methanol/acetone solution for 10 min, washed 2× with PBS and mounted using VECTASHIELD Mounting Media. Images were captured on a Leica SP8 confocal microscope. For EdU labeling of sites of DNA replication, the Click-iT® EdU Alexa Fluor® 647 Imaging Kit from Molecular Probes was used according to the manufacturer's instructions. For laser micro-irradiation, 2 × 10⁵ EM9 cells were seeded onto glass-bottom dishes (MatTek) and transfected with 1 μg of the indicated pmRFP-XRCC1 construct using GeneJuice (Millipore). Twenty-four hours later, cells were pre-incubated with 10 μg/ml Hoechst 33258 (for micro-irradiation with a 351 nm laser) or Hoechst 34580 (for a 405 nm laser) for 30 min prior to localised micro-irradiation with a 351 nm or 405 nm UV laser at a dose of 0.22 J/m² as previously described (25). Time-lapse images were recorded at the intervals shown after micro-irradiation. For experiments with PARP inhibitor, cells were pre-incubated with either 100 nM Olaparib (Selleckchem) as indicated or with 500 nM Ku58948 (AstraZeneca) 30 min before micro-irradiation.
Clonogenic survival assays
The indicated cells (500) were plated in duplicate in 10 cm dishes and incubated for 4 h at 37°C. Cells were rinsed with PBS and either mock-treated or treated with H2O2 (diluted in PBS at the indicated concentration immediately prior to use) or methyl methanesulfonate (MMS) (diluted in complete medium at the indicated concentration immediately prior to use) for 15 min at room temperature (H2O2) or 37°C (MMS). After treatment, cells were washed twice with PBS and incubated for 10-14 days in drug-free medium at 37°C to allow formation of macroscopic colonies. Colonies were fixed in ethanol (95%), stained with 1% methylene blue in 70% ethanol and colonies of >50 cells counted. Percentage survival was calculated for each drug concentration using the equation 100 × [average mean colony number (treated plate)/average mean colony number (untreated plate)].
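The survival calculation itself is simple enough to express directly; the sketch below implements the equation quoted above, with made-up plate counts for illustration.

```python
# Percentage survival from the clonogenic assay, as stated in the text:
# 100 x (mean treated colony count / mean untreated colony count).
def percent_survival(treated_counts, untreated_counts):
    """Colony counts from duplicate plates at one drug concentration."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(treated_counts) / mean(untreated_counts)

# Example with hypothetical plate counts:
print(percent_survival(treated_counts=[62, 58], untreated_counts=[210, 198]))
```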
Alkaline single cell agarose gel electrophoresis (alkaline comet assay)
Sub-confluent cell monolayers were trypsinised, diluted to 2 × 10⁵ cells/ml in ice-cold PBS (for H2O2 treatment) or complete media (for MMS treatment) immediately prior to treatment, and either mock-treated or treated with 150 μM H2O2 (diluted in ice-cold PBS immediately prior to use) for 20 min on ice or with the indicated concentration of MMS (diluted in complete medium) for 15 min at 37°C. Cells were then rinsed in ice-cold PBS and incubated, where appropriate, in fresh drug-free media for the desired repair period at 37°C. Cells (100 per data point) were then analysed by alkaline comet assay as previously described (26) using Comet Assay IV software (Perceptive Instruments).
Expression and purification of His-XRCC1 161-406 and His-XRCC1 161-406 RK
For expression of recombinant XRCC1 proteins, we employed Rosetta™ 2 (DE3)pLysS (Merck Millipore) Escherichia coli harbouring the expression plasmids pTWO-E-His-XRCC1 161-406 or pTWO-E-His-XRCC1 161-406 RK. pTWO-E is modified from pET-17b, encoding an N-terminal Rhinovirus 3C-cleavable His₆ affinity tag. For XRCC1 expression, 100 ml LB ampicillin/chloramphenicol media was inoculated with a single bacterial colony and incubated with shaking (220 rpm) at 37°C for 8 h and then stored at 4°C overnight. The next day, 6 × 1 l of LB ampicillin/chloramphenicol media supplemented with antibiotics as above was inoculated with the starter culture (10 ml/l) and again incubated, with shaking, at 37°C until an OD600 of 0.8-1.0 was reached, after which protein expression was induced by the addition of 0.2 ml 1 M IPTG/litre for a period of 3 h at 30°C. Cells were harvested by centrifugation and the resulting pellet stored at −20°C. For purification, cell pellets were thawed on ice, resuspended in 50 mM HEPES pH 7.5, 250 mM NaCl, 10 mM imidazole and 1 mM Tris(2-carboxyethyl)phosphine (TCEP), and lysed by sonication on ice for 10 min (10 s on/10 s off) using a large parallel probe at 25% amplitude (Sonics Vibra-Cell, VWR). The lysate was clarified by centrifugation for 50 min at 40,000 × g at 4°C and the resulting supernatant added to a 5 ml bed volume of Talon resin (Clontech) in a gravity flow column. After 30 min incubation with the resin, with mixing at 4°C, unbound material was removed by sequential washes (3 × 10 ml) with resuspension buffer. Bound protein was eluted by addition of (2 × 5 ml) elution buffer (50 mM HEPES pH 7.5, 250 mM NaCl, 300 mM imidazole and 1 mM TCEP). The eluate was loaded onto a pre-equilibrated (20 mM HEPES pH 7.5, 150 mM NaCl and 1 mM TCEP) 5 ml FF Heparin column (GE Healthcare) and bound material eluted with a linear salt gradient (20 mM HEPES pH 7.5, 1 M NaCl and 1 mM TCEP). Fractions containing XRCC1 were identified by SDS-PAGE, then pooled and concentrated, using Vivaspin 20 (10,000 MWCO) centrifugal concentrators (Sartorius Stedim), to a final concentration of 0.3 mg/ml, and then stored at −80°C until required.
Thermal denaturation and circular dichroism
For thermal denaturation, samples containing 2.0 μM protein and 5× SYPRO Orange (diluted from a 5000× stock supplied in DMSO; catalogue number S5692, Sigma-Aldrich) were prepared in sample buffer (50 mM HEPES pH 7.5, 300 mM NaCl, 0.5 mM TCEP and 5× SYPRO Orange). Denaturation curves were monitored in 96-well PCR plates in a Roche LightCycler 480 II, using 465 and 580 nm filters for excitation and emission wavelengths, respectively. Temperature midpoints (T_m) for each folded-to-unfolded transition were determined by non-linear regression fitting of a modified Boltzmann model (27) to normalized data in Prism5 (GraphPad Software),
F(T) = (a_n T + b_n) + [(a_d T + b_d) − (a_n T + b_n)] / (1 + exp((T_m − T)/m)),
where a_n and a_d are the slopes, and b_n and b_d the y-intercepts, of the native and denatured baselines, respectively, T_m is the melting temperature and m is a slope factor. For circular dichroism, spectra were measured at 20°C between the wavelengths 198 and 280 nm in a JASCO J-715 spectropolarimeter attached to a JASCO PTC-384W temperature control system. CD spectra were measured using a 0.1 mm path length cell (Starna Scientific), with protein at a concentration of 54 μM that had been buffer-exchanged into 10 mM HEPES pH 7.5, 300 mM NaCl, 0.5 mM TCEP; the spectra represent the average of 10 consecutive scans, with the signal from buffer alone subtracted.
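A minimal Python sketch of this fit is shown below, assuming the sloping-baseline Boltzmann form given above; the melt curve is simulated rather than real instrument output, and the parameter values are arbitrary.

```python
# Fit of the (reconstructed) modified Boltzmann model to a normalized
# SYPRO Orange melt curve; mirrors the Prism fit described in the text.
# The data are simulated for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, an, bn, ad, bd, Tm, m):
    native = an * T + bn          # native (folded) baseline
    denatured = ad * T + bd       # denatured (unfolded) baseline
    return native + (denatured - native) / (1.0 + np.exp((Tm - T) / m))

T = np.linspace(25, 90, 200)
true = boltzmann(T, 0.001, 0.05, -0.002, 1.1, 62.0, 1.5)
y = true + np.random.default_rng(3).normal(0, 0.01, T.size)

popt, _ = curve_fit(boltzmann, T, y, p0=[0, 0, 0, 1, 60, 2])
print(f"fitted Tm = {popt[4]:.1f} degC")
```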
Poly (ADP-ribose) binding assays
The wells of flat-bottomed 96-well PS microplates (Greiner) were incubated with 50 μl recombinant histone H1, PARP1 or BSA at 0.1 mg/ml in PBS overnight at 4°C and the wells rinsed (4×) with 0.2 ml 0.1% Triton X-100 in PBS. The adsorbed proteins were mock-ribosylated in the absence of NAD⁺ or ribosylated in the presence of the indicated concentration of NAD⁺ (Sigma) in PARP1 reaction buffer (50 mM Tris-HCl pH 8, 0.8 mM MgCl₂, 1% glycerol and 1.5 mM DTT) containing 40 nM single-stranded oligodeoxyribonucleotide (Eurogentec: 5′-CATATGCCGGAGATCCGCCTCC-3′) and 5 nM PARP1 (recombinant, human, full length) in a final volume of 50 μl at room temperature for 30 min. After rinsing (4×) as above, His-XRCC1 161-406 or His-XRCC1 161-406 RK (diluted to 25 nM in 20 mM Tris pH 7.5, 130 mM NaCl) was added to the adsorbed proteins and incubated on ice for 30 min. Where indicated, His-XRCC1 proteins were pre-incubated with mono (ADP-ribose) or poly (ADP-ribose) (Trevigen) competitor at the concentrations indicated for 30 min at 4°C, before their addition to the adsorbed proteins. The wells were then rinsed (4×) as above and incubated with 50 μl mouse anti-polyhistidine (His-tag) Mab (Sigma, diluted 1:3000 in 20 mM Tris pH 7.5, 130 mM NaCl) followed by 50 μl HRP-conjugated rabbit anti-mouse IgG (Dako, 1:5000 in dilution buffer) for 30 min each on ice. After a final wash with 0.1% Tween 20 in PBS, 50 μl of TACS Sapphire (Trevigen) was added to the wells, incubated in the dark for 15 min, stopped by adding 0.2 M HCl, and the absorbance was read at 450 nm.
GFP pull down experiments
U2OS GFP-XRCC1-His cells (see above), or U2OS cells 48 h after nucleofection (Amaxa; Lonza, Slough, UK) with 4 μg each of pEGFP-XRCC1 161-406 or pEGFP-XRCC1 161-406 RK and either pmCherry-PARP1 or pmCherry-PARP1 E988K, were snap frozen until needed. Cells were then thawed on ice and lysed on ice for 20 min in 0.4 ml/5 × 10⁶ cells in 25 mM HEPES (pH 7.8), 150 mM NaCl, 10% glycerol, 0.5% Triton X-100, including Protease Inhibitor Cocktail and Phosphatase Inhibitor Cocktail 3 (Sigma-Aldrich®, Dorset, UK). Where indicated, the PARP1 inhibitor KU58948 (500 nM) was added to the cell culture medium 1 h prior to cell harvest and/or was included in the cell lysis buffer. Lysed cells were sonicated in a Bioruptor and clarified by centrifugation at 4°C. Unless stated otherwise, all subsequent steps were performed on ice. Forty microliters of the clarified extract was retained on ice as 'input' and 360 μl was mixed with 15 μl (bed volume) of GFP-Trap® A beads (ChromoTek GmbH, Germany) prewashed in 0.5 ml wash buffer (lysis buffer containing 1 mM DTT and 25 mM imidazole). After 1 h on a carousel at 4°C, the GFP-Trap® A beads were gently pelleted by centrifugation at 2,000 × g for 2 min. Sixty microliters of the supernatant was retained as 'unbound' material and the pellet was washed three times in wash buffer, with 50 μl of the final wash retained as 'final wash'. Proteins were eluted from the beads by re-suspension in 50 μl 2× Laemmli buffer (250 mM Tris (pH 8.0), 10% SDS, 500 mM DTT, 50% glycerol), heating for 5 min at 95°C, and centrifugation at 2,700 × g for 2 min to recover the supernatant.
RESULTS
To further examine the importance of PAR binding for XRCC1 function, we first addressed the location of the PAR-binding site. The most evolutionarily conserved and functionally important region of XRCC1 is the central BRCT1 domain that mediates binding to PAR (see Figure 1A) (28). PAR binding by the BRCT1 domain was initially ascribed to a degenerate motif of hydrophobic/basic amino acids that is present in many PAR-binding proteins (Figure 1B, dotted red box) (9). However, a different putative PAR-binding motif in BRCT1 was recently reported, comprised of the phosphate-binding pocket common to several other BRCT domains (Figure 1B, solid red boxes and Figure 1C) (16). Within this pocket, Ser328, Arg335 and Lys369 are all predicted to contribute to phosphate binding, based on the structure of other phosphate-binding BRCT domains of this type. Consequently, for subsequent analysis in vitro, we expressed and purified both a wild-type histidine-tagged fragment of human XRCC1 spanning the conserved BRCT1 domain (denoted His-XRCC1 161-406) and a mutant derivative in which both Arg335 and Lys369 were mutated to Ala (denoted His-XRCC1 161-406 RK) (Figure 1D, left). We employed both mutations because mutation of R335 alone failed to have any measurable impact on XRCC1 function (data not shown).
Next, to confirm PAR binding by the BRCT1 phosphate-binding pocket, we adsorbed PARP1, histone H1, or BSA to microwell plates, mock-ribosylated or ribosylated these proteins with PARP1 in the absence or presence of NAD⁺, respectively, and compared their binding to His-XRCC1 161-406 and His-XRCC1 161-406 RK, in vitro (Figure 1D, right). Wild-type His-XRCC1 161-406 bound both to adsorbed PARP1 and histone H1 if these proteins were first ribosylated in the presence of 1-50 μM NAD⁺, and was fully bound even at the lowest concentration of NAD⁺ employed (1 μM) (Figure 2A, blue bars). In contrast, relatively little binding was observed to BSA, irrespective of whether or not it was first incubated with PARP1 and NAD⁺, consistent with this protein being a poor substrate for PARP1. More importantly, His-XRCC1 161-406 RK bound ribosylated PARP1 and histone H1 to a much lesser extent, and not at all at the lowest concentration (1 μM) of NAD⁺ employed (Figure 2A, red bars). This did not reflect a non-specific impact of the mutations on folding of the BRCT1 domain, because His-XRCC1 161-406 and His-XRCC1 161-406 RK exhibited similar thermal stabilities and circular dichroism spectra (Figure 2B). Importantly, His-XRCC1 161-406 bound specifically to PAR in these experiments, because binding was suppressed by an 8-fold molar excess of ADP-ribose competitor if present as polymer (PAR), but was not suppressed even at 500-fold molar excess if present as ADP-ribose monomer (MAR) (Figure 2C). These data confirm that the phosphate-binding pocket of the XRCC1 BRCT1 domain promotes binding to PAR in vitro, particularly at low levels of poly (ADP-ribosylation).
Next, we examined whether PAR binding by the phosphate-binding pocket is physiologically relevant, by comparing wild type and mutant XRCC1 for interaction with cellular PARP1. As expected, full length EGFP-XRCC1 co-precipitated endogenous PARP1 from stably transfected U2OS cells (U2OS GFP-XRCC1 cells; see 'Materials and Methods' section) in a manner that was inhibited by PARP inhibitor (Figure 3A). Similarly, truncated EGFP-XRCC1 161-406 spanning the BRCT1 domain co-precipitated mCherry-PARP1 in transient co-transfection experiments, but co-precipitated mutant mCherry-PARP1 E988K lacking polymerase activity (29,30) to a much lesser extent (Figure 3B). More importantly, EGFP-XRCC1 161-406 RK was also less able to pull down wild type mCherry-PARP1, confirming that the phosphate-binding pocket promotes interaction with PARP1 (Figure 3B). Consistent with these data, mRFP-XRCC1 161-406 rapidly accumulated at sites of UVA laser damage at a rate similar to full-length mRFP-XRCC1 and in a manner that was greatly inhibited by PARP inhibitor (500 nM Ku58948), suggesting that the region spanning the BRCT1 domain is sufficient for XRCC1 accumulation at sites of cellular PAR synthesis (Figure 3C and D). Note that we confirmed previously that this concentration of Ku58948 greatly reduces or ablates PAR synthesis in UVA laser tracks (31). In contrast, neither full-length mRFP-XRCC1 RK nor mRFP-XRCC1 161-406 RK accumulated at sites of UVA laser damage (Figure 3C and D). Similarly, full-length EGFP-XRCC1 RK failed to accumulate in sub-nuclear foci at sites of H2O2-induced oxidative stress, confirming that the phosphate-binding pocket is also required for accumulation of EGFP-XRCC1 at this more physiologically relevant source of SSBs (Figure 4A). XRCC1 has also been reported to colocalise with PCNA in replication foci in human cells, consistent with its proposed role during SSBR at sites of stalled or collapsed replication forks (1,(32)(33)(34)(35). However, whether XRCC1 accumulation at such sites is also regulated by PAR synthesis is not known. Indeed, the accumulation of EGFP-XRCC1 in sub-nuclear foci with endogenous PCNA, detected by expression of anti-PCNA antibody, was greatly reduced by PARP inhibitor in both early and late S phase cells (Figure 4B, left panels). We confirmed in these experiments that the sites of PCNA and EGFP-XRCC1 colocalisation were sites of DNA replication, by pulse labeling with EdU (Figure 4B, right panels). Importantly, EGFP-XRCC1 accumulation at sites of PCNA accumulation was greatly reduced or ablated by mutation of the phosphate-binding pocket, suggesting that PAR binding is also critical for the recruitment/accumulation of EGFP-XRCC1 at sites of DNA replication (Figure 4C).
Finally, to address the importance of the phosphate-binding pocket for XRCC1 function, we employed derivatives of XRCC1-mutant EM9 cells stably transfected with either empty vector (EM9-V) or with expression vector encoding either full-length human XRCC1-His (EM9-XH) or XRCC1-His RK (EM9-XH RK) (Figure 5A). In contrast to XRCC1-His, XRCC1-His RK was unable to promote cell survival in XRCC1-mutant EM9 cells much more than empty vector, following H2O2 or MMS treatment (Figure 5B). This was also true in experiments in which we measured rates of chromosomal SSBR using alkaline comet assays, in which XRCC1-His RK again failed to correct the slow rate of DNA strand break repair observed in EM9 cells (Figure 5C).
Collectively, these data demonstrate that the XRCC1 phosphate-binding pocket binds PAR in vitro and in cells, promotes XRCC1 accumulation at sites of DNA damage, and is required for XRCC1 cellular function.
DISCUSSION
The synthesis of poly (ADP-ribose) (PAR) by PARP1 can accelerate SSBR, but the molecular mechanism by which PAR achieves this is unclear (17). One likely role is promoting recruitment of the SSBR scaffold protein, XRCC1 (11)(12)(13)(14)(15), although this idea has proved controversial (18)(19)(20)(21)(22). To further address this possibility we have clarified the mechanism of PAR binding by XRCC1 and addressed its importance for SSBR and cell survival. PAR binding was initially ascribed to a degenerate motif present at the C-terminus of the central BRCT1 domain in XRCC1, comprised of an alternating series of basic/hydrophobic residues and present in numerous other PAR binding proteins (9). Interestingly, this motif in XRCC1 harbours a common polymorphism at amino acid 399 (arginine/glutamine), which in some epidemiological studies has been implicated in altered predisposition to cancer. However, in cellular assays this polymorphism does not impact measurably on XRCC1 function, suggesting that it does not influence PAR binding (36). Moreover, replacement of five of the basic residues characteristic of this degenerate motif with alanine also fails to impact on XRCC1 function, suggesting that the degenerate motif is not, by itself at least, important for PAR binding (unpublished observations).
Recently, PAR binding by XRCC1 was assigned to a different region of the BRCT1 domain: the highly conserved phosphate-binding pocket (16). In agreement with Li et al., we found that the phosphate-binding pocket interacts directly with PAR. However, in contrast to Li et al., we did not detect binding to mono(ADP-ribose) (MAR) by this motif. Indeed, our competition assays indicate that binding by this motif is highly selective for PAR. We found that the phosphate-binding pocket confers on XRCC1 the ability to bind PAR at low concentrations of polymer, as indicated by its greater impact on PAR binding by XRCC1 at low concentrations of NAD⁺, in vitro. This might be an advantage at low levels of SSBs, such as those arising endogenously in cells, in which PAR polymer might be present at a low concentration and distributed at only a small number of sites across the genome. However, XRCC1 harbouring a mutated phosphate-binding pocket still bound PAR at high concentrations of polymer, albeit to a lesser extent than wild type XRCC1. This may reflect incomplete ablation of PAR binding by the R335A/K369A mutation or, alternatively, weak PAR binding conferred by the degenerate PAR-binding motif described above. Nevertheless, mutation of the phosphate-binding pocket greatly reduced mRFP-XRCC1 recruitment at sites of UVA laser-induced damage, and also EGFP-XRCC1 recruitment at sites of DNA damage induced by H2O2, suggesting that this pocket is critical for accumulation of EGFP-XRCC1 at cellular sites of DNA strand breakage. Interestingly, the impact of mutating the phosphate-binding pocket on XRCC1 accumulation at sites of UVA laser-induced damage was greater than that of incubation with PARP inhibitor. This might reflect incomplete inhibition of PAR synthesis by the inhibitor or, alternatively, a low level of protein ribosylation generated prior to incubation with PARP inhibitor.
Mutation of the phosphate-binding pocket also greatly reduced XRCC1 accumulation at sites of PCNA accumulation, suggesting that PAR synthesis also promotes XRCC1 accumulation at sites of damaged replication forks. The latter is consistent with our model for replication-coupled SSBR, in which XRCC1 promotes repair of SSBs either ahead of an approaching fork or after replication fork collapse (35,37). It is also consistent with a role for PARP1 in regulating fork progression in the presence of DNA strand breaks (38)(39)(40)(41). However, it is important to note that we have so far only observed XRCC1 accumulation at sites of ongoing DNA replication in cells co-expressing RFP-PCNA or anti-PCNA antibody (data not shown). Consequently, we suggest that both approaches perturb normal PCNA function to some extent, thereby generating SSBs and/or other sources of replication stress that trigger PARP1 activation.
Finally, XRCC1 harbouring a mutated phosphate-binding pocket was unable to restore rapid rates of chromosomal SSBR to XRCC1-mutant EM9 cells, following treatment with either H2O2 or MMS, and only slightly increased cellular resistance to these genotoxins. This work thus highlights the importance of the PAR-binding motif for XRCC1 functionality, both at oxidative breaks induced by H2O2 and following MMS-induced DNA alkylation. The latter is particularly intriguing, because MMS-induced SSBs arise as intermediates of DNA base excision repair (BER), suggesting that PAR is important for XRCC1 function during BER. Whereas several reports have suggested that PARP1 is required during BER following DNA alkylation (42,43), others have reported that it is dispensable (44- | 2016-10-26T03:31:20.546Z | 2015-06-29T00:00:00.000 | {
"year": 2015,
"sha1": "781c3e48ff725b835b542adf91204efbb2d48041",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/43/14/6934/9475847/gkv623.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "af76dd0851de1c0f7703789ead2a9f6226fd644b",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119245892 | pes2o/s2orc | v3-fos-license | Strong lensing constraints on bimetric massive gravity
We derive dynamical and gravitational lensing properties of local sources in the Hassan-Rosen bimetric gravity theory. Observations of elliptical galaxies rule out values of the effective length-scale of the theory, in units of the Hubble radius, in the interval 10^-6<lambda_g/r_H<10^-3, unless the proportionality constant between the metrics at the background level is far from unity, in which case general relativity is effectively restored for local sources. In order to have background solutions resembling the concordance cosmological model, without fine-tuning of the parameters of the model, we are restricted to the upper interval, or lambda_g/r_H ~ 1. Except for a limited range of parameter values, the Hassan-Rosen theory is thus consistent with the observed lensing and dynamical properties of elliptical galaxies.
Introduction
The recently formulated Hassan-Rosen (HR) theory [1,2], which is a ghost-free bimetric theory of gravity, has a rich phenomenology. The theory has been shown to yield solutions indistinguishable from a ΛCDM universe at the background level, even when no explicit cosmological constant or vacuum energy is included in the model [3][4][5][6][7][8][9][10][11]. It is also possible to generate accelerating models that deviate from that of a pure cosmological constant universe. These degeneracies at the background level are broken when studying structure formation in the linear regime, although explicit constraints on the parameters of the model arising from this fact are yet to be obtained [12][13][14][15][16].
The HR theory was developed as an extension of the de Rham-Gabadadze-Tolley (dRGT) theory [17][18][19], in conjunction with the proof of the ghost-free nature of the latter theory [20,21]. The original motivation for constructing the dRGT theory, in turn, goes all the way back to Fierz's and Pauli's original investigations into the formulation of a consistent theory of massive spin-2 fields [22,23]. Steps towards a fully non-linear, ghost-free theory had previously been taken in e.g. [24][25][26][27][28][29].
The idea of introducing a massive spin-2 field to general relativity is intriguing given the, by now, well-established late-time acceleration of the expansion rate of the universe. As stated above, for a small graviton mass, the HR theory can address the issue of the observed acceleration. In this paper, we investigate whether the theory is compatible with, and how it can be constrained by, observations on galactic scales and below. Using spherically symmetric solutions in the HR theory (previously studied in [4,[30][31][32]), we can use observations of galactic velocity dispersions and gravitational lensing angles to constrain the parameters of the theory.
In sec. 2, the basic aspects of the HR theory used in this paper are summarized. In sec. 3, spherically symmetric weak-field solutions with and without sources are presented, with the special case of point mass sources given in sec. 4. The effect of including higher order terms, i.e. the Vainshtein mechanism, is discussed in sec. 5. In secs. 6, 7 and 8, we present the method, observational data and results in terms of constraints on the model parameter values. We conclude in sec. 9.
Hassan-Rosen bimetric massive gravity
The Hassan-Rosen formulation of bimetric massive gravity is given by the following action:

S = -\frac{M_g^2}{2}\int d^4x \sqrt{-g}\,R(g) - \frac{M_f^2}{2}\int d^4x \sqrt{-f}\,R(f) + m^2 M_g^2 \int d^4x \sqrt{-g} \sum_{n=0}^{4} \beta_n e_n\left(\sqrt{g^{-1}f}\right) + \int d^4x \sqrt{-g}\,L_m(g, \Phi) .   (2.1)

Here L_m is the matter Lagrangian coupled to g_μν. In principle, it is possible to also add a different matter Lagrangian coupled to f_μν, but in this paper we choose not to do so. Explicit expressions for the functions e_n (with matrix arguments) are not needed in this paper, but can be found in e.g. [5]. Varying the action with respect to g_μν and f_μν gives the following equations of motion:

G_{\mu\nu}(g) + m^2 \sum_{n=0}^{3} (-1)^n \beta_n\, g_{\mu\lambda} Y^\lambda_{(n)\nu}\left(\sqrt{g^{-1}f}\right) = M_g^{-2}\, T_{\mu\nu} ,   (2.2)

G_{\mu\nu}(f) + \frac{m^2 M_g^2}{M_f^2} \sum_{n=0}^{3} (-1)^n \beta_{4-n}\, f_{\mu\lambda} Y^\lambda_{(n)\nu}\left(\sqrt{f^{-1}g}\right) = 0 .   (2.3)
(2.1) Here L m is the matter Lagrangian coupled to g µν . In principle, it is possible to also add a different matter Lagrangian coupled to f µν , but in this paper we choose not to do so. The functions e n (with matrix arguments) are not needed in this paper, but can be found in e.g. [5]. Varying the equations of motion with respect to g µν and f µν gives the following equations of motion: Here the matrices Y (n) are given by
4)
Y (2) = X 2 − X · e 1 (X) + 1 · e 2 (X) , (2.5) (2.6) Taking the divergence, with respect to the g-metric, of eq. (2.2), and assuming source conservation, gives the following constraint: It can be shown that this constraint is equivalent to the constraint given by taking the divergence with respect to the f -metric, of eq. (2.3). By doing the constant rescalings the equations of motion for f µν become (2.9) The ratio M g /M f therefore drops out of the equations of motion. This is a reflection of the fact that we have not coupled f µν to any gravitational sources. The HR theory thus has five free parameters β i , where i = [0, . . . , 4] (remembering that m just multiplies all the β i :s to get the correct dimensionality). Two of these, β 0 and β 4 correspond, on the level of the Lagrangian, to a cosmological constant for the g-and f -sector, respectively. On the level of the equations of motion, however, it will be certain combinations of all the β i :s that contribute to an effective cosmological constant.
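Should one want to check these matrix identities numerically, a small Python sketch is given below; it evaluates the elementary symmetric polynomials e_n on the eigenvalues of a toy matrix X and assembles the Y_(n) matrices as reconstructed in eqs. (2.4)-(2.6). The toy X and its eigenvalues are arbitrary choices for illustration, not values from the paper.

```python
# Numerical illustration of e_n(X) and the Y_(n) matrices of eqs. (2.4)-(2.6).
import numpy as np
from itertools import combinations

def e(n, X):
    """Elementary symmetric polynomial e_n of the eigenvalues of X."""
    if n == 0:
        return 1.0
    lam = np.linalg.eigvals(X)
    return float(sum(np.prod(c) for c in combinations(lam, n)).real)

def Y(n, X):
    """Y_(n)(X) = sum_{k=0..n} (-1)^k e_k(X) X^(n-k)."""
    I = np.eye(X.shape[0])
    powers = [I, X, X @ X, X @ X @ X]
    return sum((-1) ** k * e(k, X) * powers[n - k] for k in range(n + 1))

X = np.diag([1.1, 1.1, 1.1, 0.8])  # toy sqrt(g^-1 f), diagonal for simplicity
for n in range(4):
    print(f"Y_({n}) diagonal:", np.round(np.diag(Y(n, X)), 4))
```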
Spherically symmetric solutions
Spherically symmetric solutions in the HR theory have previously been studied in [4,[30][31][32]. Because of the absence of an equivalent of Birkhoff's theorem in the HR theory, there does not exist a unique solution for a spherically symmetric static spacetime. The studied solutions fall into two broad classes. With g_μν diagonal, the most general form for f_μν, after gauge fixing, contains an off-diagonal element f_rt. In the case of non-zero f_rt, [30] gave the complete analytical solutions. It turns out that g_μν in this case is completely degenerate with the standard Schwarzschild-de Sitter (or Kottler) metric. For the ansatz f_rt = 0, the equations of motion turn out to be highly involved. [30] wrote down the linear solution, whereas [31] did an exhaustive numerical study of the solution. The main result of [31] is that, for the diagonal ansatz, there is "a whole zoo of new black holes with massive degrees of freedom excited." In this paper, we rederive the linear solution provided by [30] but in isotropic form, making the solutions more accessible for a gravitational lensing analysis. We also include second order terms to compute the size of the Vainshtein radius. This is to make sure that a linear analysis is valid in the region accessible for phenomenological study. Furthermore, the inclusion of matter sources allows us to observationally constrain the parameters of the theory. As our ansätze, we use diagonal forms for g_μν and f_μν, and we perturb the metrics around flat space, i.e. around ḡ_μν = η_μν and f̄_μν = c²η_μν. Since g_μν is put in isotropic form, we can identify δV = Φ and δW = Ψ, where Φ is the gravitational potential and Ψ is the spatial curvature for scalar perturbations in the Newtonian gauge. For flat space to be a valid background solution, we must impose a condition on the parameters in order to remove cosmological constant contributions in the g- and f-sectors. We note that this corresponds to the case where the background expansion is pure GR (although it is still possible that perturbations around the background deviate from GR). Notice that the β_i:s are parameters in the Lagrangian, whereas c is a parameter of the background solution. To first order, the solutions to the equations of motion given in eqs. (2.2) and (2.9) in vacuum contain two arbitrary integration constants, M_1 and M_2. The second order solutions are given in appendix A. If we introduce a pressureless source, for which T⁰₀ = −ρ, one can define massless and massive combinations Θ_ml and Θ_m of the perturbations, where ml and m stand for massless and massive, respectively, which obey two source equations (for more details on the identification of the massless and massive modes in the HR theory see [33]). Inverting these equations and solving for δV, with the boundary terms (b.c.) put to zero, shows that after introducing a source we no longer have two independent integration constants; δV is completely determined by the source and the model parameters m_g and c.
Since the normalization of ρ (or equivalently M_1 and M_2 in terms of the vacuum solutions) is arbitrary, in the following we employ a constant rescaling of Newton's constant. Note that a large value of the proportionality constant c in this sense could perhaps be related to the small observed value of G.
Since the HR theory represents a generalization of GR, it is natural to ask whether the theory is capable of explaining the rotation curves of spiral galaxies without introducing a dark matter halo component. Including only first order perturbations in the spherically symmetric solutions, however, this is not possible, since the observed rotation curves are generally flatter at large radii than what can be obtained using the baryonic matter distribution only. The inclusion of an additional Yukawa term has the opposite effect of increasing the slope: as the Yukawa term decays, the asymptotic behaviour approaches the standard Newtonian form. The Yukawa term thus pushes the peak of the rotational velocity toward lower radii, as compared to the case of a purely Newtonian rotation curve. This conclusion may, however, be altered when including higher order terms, that is, the Vainshtein mechanism, in the solutions [34]. Also note that eqs. (3.14)-(3.18) were computed in the weak-field limit, whereas for the interesting case of λ_g ∼ r_H higher-order terms have to be included.
Point mass source solutions
Introducing a point mass source with mass M, i.e. putting ρ = M δ⁽³⁾(r) in eqs. (3.16)-(3.17), gives the first order solutions for the gravitational potential Φ, the spatial curvature Ψ and the effective gravitational potential ϕ = (Φ + Ψ)/2 felt by massless particles. The potentials can be decomposed into a general relativity part, with the subscript GR denoting the standard value of −M/r, plus Yukawa terms with subscript Y. We note that M is the mass we would measure for a point mass at infinite distance. As evident from eq. (3.12), m_g and c are not independent parameters. In fact, the Yukawa terms approach zero both as c → 0 and c → ∞. These asymptotes are independent of the values of the β's. To understand the behaviour in between, we may assume that β_1 ∼ β_2 ∼ β_3 = β, and define m_b² ≡ βm². That is, the only way to have sizable modifications of the potentials on galactic scales from the Yukawa terms when c is not too far from unity is to have √β·m of the inverse order of galactic scales or smaller. In the limit m_g·r → 0, the ratio of the gravitational potentials felt by massive and massless particles approaches a constant value; note the similarity to the vDVZ-discontinuity factor of 4/3 in linear massive gravity. However, it is expected that this discrepancy between massive and massless particles as m_g·r → 0 will be removed once higher order terms are included, see sec. 5.
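To make the decomposition concrete, the sketch below evaluates a point-mass potential with a Yukawa correction of the generic form described here (units G = 1); the amplitude alpha stands in for the c- and β-dependent coefficient, which is an assumption of the illustration rather than the paper's exact expression.

```python
# Illustrative Yukawa-modified point-mass potential:
# Phi(r) = Phi_GR(r) + Phi_Y(r), with Phi_GR = -M/r and a Yukawa term
# ~ exp(-m_g * r)/r. 'alpha' is a hypothetical amplitude, not the
# paper's beta- and c-dependent coefficient.
import numpy as np

def potential(r, M=1.0, m_g=1e-3, alpha=1.0 / 3.0):
    phi_gr = -M / r
    phi_yukawa = alpha * (-M / r) * np.exp(-m_g * r)
    return phi_gr + phi_yukawa

r = np.logspace(0, 4, 5)
print(potential(r))  # the Yukawa piece dies off for r >> 1/m_g
```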
The Vainshtein radius
In 1972, Vainshtein observed that the formulation of massive gravity given at the time exhibits a radius that signals the breakdown of the linear expansion around a source with mass M ([35], see also [36] for a recent review). This radius was later called the Vainshtein radius, and is given by

r_V = (r_S λ⁴)^(1/5) ,   (5.1)

where r_S is the Schwarzschild radius of the source and λ the wavelength associated with the massive graviton, i.e. λ = m⁻¹. Within this radius, higher-order corrections to the expansion of the metric in powers of GM have to be taken into account. Since r_V is an intermediate scale between the gravitational scale of the source and the Compton wavelength of the graviton, for any specific source and graviton mass one has to make sure to be well outside r_V for the linear expansion to be valid. The Vainshtein radius is derived for r ≪ λ, which will always hold for local sources when λ is of the order of the Hubble radius. For r ≳ λ, however, the Vainshtein radius is not applicable, and one has to check numerically that the second order solution does not dominate over the first order solution in the region of interest.
In order to identify the Vainshtein radius for spherically symmetric sources in the HR theory, we have solved the diagonal ansatz to second order. The solution is given in appendix A. At second order, two new effective parameters occur, namely m₁² and m₂², which are related to m_g². In the m_g r ≪ 1 limit, the full second order solution has a dominant term from which one can read off r_V as the radius where second order terms start dominating over first order terms, eq. (5.4).
This holds for all fields except F, for which we instead have a different expression. This means that it is not possible to decrease r_V to smaller values by letting m₂² → 0 (i.e. looking at the other terms in the second order solution). Putting m₁² = 0 gives the Vainshtein radius, common to all fields, eq. (5.4). Numerically (for c = 1), with a galactic mass scale of M ∼ 10¹¹ M_⊙ and λ_g ∼ r_H ∼ 5·10⁶ kpc, where r_H is the Hubble radius, we obtain r_V ∼ 800 kpc, i.e., more than a factor of 100 larger than the radius probed by the observations used in this paper. We also note that for the Sun as the gravitational source, the Vainshtein radius is larger than 1 AU as long as λ_g ≳ 5·10⁻¹² r_H. In the following, we have assumed that eq. (5.4) is a fair approximation for r_V even when r ≳ λ_g; that is, that we are well outside the Vainshtein radius when constraining the Yukawa decay of the potentials, making it possible to constrain the parameters m_g and c using the linear approximation.
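The quoted numbers can be checked at the order-of-magnitude level. The sketch below assumes the scaling r_V ≈ (r_S λ_g²)^(1/3), which reproduces the values read off above up to an O(1) coefficient; the exact coefficient from eq. (5.4) is not reproduced here.

```python
# Order-of-magnitude check of the Vainshtein radii quoted above, assuming the
# scaling r_V ~ (r_S * lambda_g^2)^(1/3), valid up to an O(1) coefficient.
KPC_PER_KM = 1.0 / 3.086e16   # kiloparsecs per kilometer
RS_SUN_KM = 2.95              # Schwarzschild radius of the Sun in km

def r_vainshtein_kpc(mass_msun, lambda_g_kpc):
    r_s_kpc = mass_msun * RS_SUN_KM * KPC_PER_KM
    return (r_s_kpc * lambda_g_kpc**2) ** (1.0 / 3.0)

r_H = 5e6  # Hubble radius in kpc, as quoted above

# Galaxy: M ~ 1e11 M_sun and lambda_g ~ r_H gives r_V of several hundred kpc,
# consistent with the ~800 kpc quoted above.
print(r_vainshtein_kpc(1e11, r_H))

# Sun: lambda_g ~ 5e-12 r_H gives r_V of order 1 AU (1 AU ~ 4.85e-9 kpc).
print(r_vainshtein_kpc(1.0, 5e-12 * r_H))
```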
Lensing analysis
Since massive and massless particles experience different forces in a gravitational field in bimetric theories, we can constrain such theories if we have access to systems where the gravitational field, or mass, is probed by both massive particles and photons. One such example is the Sun, to which we will return later. On larger scales, galaxies and galaxy clusters, where we have both dynamical and lensing data, are obvious candidates. In this paper, we will make use of elliptical galaxies for which we have measurements of both the velocity dispersion and the gravitational lensing deflection angle. In doing this, we will to a large extent apply the same methodology and data as in [38] and [39]. Basically, the method amounts to investigating for which parameter ranges of the theory the galaxy masses as inferred from massive particles (velocity dispersions) and massless particles (lensing angle) are consistent.
The velocity dispersion in elliptical galaxies can be derived from the equations of stellar hydrodynamics,

d(νσ_r²)/dr + (2ζ/r) νσ_r² = −ν Φ′,

where σ_t and σ_r are the velocity dispersions in the tangential and radial direction, respectively, ζ = 1 − (σ_t/σ_r)² is the velocity anisotropy, ν is the density of velocity dispersion tracers (in this case the luminous matter) and Φ is the total gravitational potential. The prime indicates differentiation with respect to r. Assuming that ζ is constant, we can write

σ_r²(r) = r^(−2ζ) ν(r)⁻¹ ∫_r^∞ s^(2ζ) ν(s) Φ′(s) ds. (6.2)

Note that the integral runs from r to ∞, the reason being that it is normalized such that the velocity dispersion approaches zero asymptotically. The actual observed velocity dispersion, given by the single number σ², is then given by a line-of-sight luminosity weighted average over the effective spectroscopic aperture of the observations. To compute the velocity dispersion, we need ν(r), ζ and the radial derivative of the gravitational potential, given by the total density distribution ρ(r). In the following, we assume that both the luminous and total matter distributions can be written as power laws, ν(r) ∝ r^(−δ) and ρ(r) = ρ₀ (r/r₀)^(−γ). In appendix B, general expressions to derive observed velocity dispersions in the HR theory are outlined. The deflection angle of photons passing through a gravitational field is given by

α̂ = 2 ∫ ∇_⊥ ϕ dl, (6.5)

where the integral to excellent approximation can be calculated over the undeflected path of the photon, and the derivative is taken with respect to the direction perpendicular to the direction of the photon. Now, what is actually observed is the angular image separation between multiple images of a background source. If the observer, lens and source are perfectly aligned, the appropriately scaled deflection angle is given by half the observed angular image separation, the so-called Einstein angle of the system. It can be shown that this is an excellent approximation even in cases when the source does not lie directly behind the lens, and we can thus use the observed image separation to estimate the deflection angle.
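As a consistency check of the expressions above, the following sketch integrates eq. (6.2) numerically for the SIS case (ζ = 0, γ = δ = 2, in units G = ρ₀ = r₀ = 1) and recovers the constant value σ²_GR,r = 2πGρ₀r₀² derived in appendix B:

```python
from math import pi, inf
from scipy.integrate import quad

# Numerical check of the constant-anisotropy Jeans solution, eq. (6.2):
#   sigma_r^2(r) = r^(-2 zeta) / nu(r) * Int_r^inf nu(s) Phi'(s) s^(2 zeta) ds,
# with power laws nu ~ s^(-delta) and, for gamma = 2 (SIS),
#   Phi'(s) = G M(s)/s^2 = 4 pi G rho0 r0^2 / s.  Units: G = rho0 = r0 = 1.
def sigma_r2(r, zeta=0.0, delta=2.0):
    nu = lambda s: s ** (-delta)
    phi_prime = lambda s: 4.0 * pi / s
    integrand = lambda s: nu(s) * phi_prime(s) * s ** (2.0 * zeta)
    val, _ = quad(integrand, r, inf)
    return r ** (-2.0 * zeta) / nu(r) * val

# For the SIS the result should be independent of r and equal 2*pi*G*rho0*r0^2.
print(sigma_r2(0.5), sigma_r2(2.0), 2.0 * pi)
```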
In appendix C, general expressions for the deflection angles in the HR theory are derived. In practice, given values for the model parameters m_g and c, we can now use the observed deflection angles to normalize the mass density profile, or ρ₀r₀^γ, of each galaxy, which is then used to predict a value of the velocity dispersion that can be compared to the observed value. The analysis is complicated by the fact that the force experienced by massive and massless particles is not fully determined by the mass inside the radius at hand. However, these complications can be overcome by using the approximations outlined in appendices B and C.
Note that the method of comparing gravitational deflection angles with the dynamics of massive particles makes us very insensitive to the assumed matter distribution of the galaxies, specifically since the deflected photons and the velocity dispersion tracers effectively probe similar galactic radii of r_g ∼ 10⁻⁶ r_H ∼ 5 kpc. We also note that, given prior knowledge on the normalization of the individual mass density profiles, we could in principle use the observed velocity dispersions and gravitational lensing angles individually to constrain the parameters of the model. For example, for a given mass distribution, we expect the observed velocity dispersion in the HR theory to be larger than in GR, and for a given observed velocity dispersion, the mass-to-light ratio required in the HR theory to be smaller than in GR. Such an analysis is left for future work.
Data
In this paper, we make use of the strong gravitational lens sample observed with the Hubble Space Telescope Advanced Camera for Surveys by the Sloan Lens ACS (SLACS) Survey [40]. The full sample consists of 131 strong lens candidates, out of which we use a sub-sample of 53 systems with elliptical lens galaxies, well fitted by singular isothermal ellipsoidal lens models and having reliable velocity dispersion measurements. We use the velocity dispersions as measured from Sloan Digital Sky Survey (SDSS) spectroscopy over an effective spectroscopic aperture of 1.4 arcsec and the Einstein angle as measured from ACS imaging data. From ACS images, we also use the effective radii of the lens galaxies to individually estimate the luminosity profile power-law index δ of the lensing galaxies by comparing the total luminosity to the luminosity within half the effective radius. To the approximately 7 % fractional velocity dispersion errors quoted in [40], we add an additional 5 % to take into account possible deviations from the singular isothermal mass profile [41]. We assume a 2 % error on the measured image separations. The slope and anisotropy of the lensing galaxies are individually marginalized over, using prior probabilities of γ = 2.00 ± 0.08 and ζ = 0.13 ± 0.13 (68 % confidence level) [38].
Results
Using the method and data described above, we are able to constrain the model parameters λ_g = m_g⁻¹ and c, as depicted in the left panel of fig. 1. As anticipated, as c → 0, λ_g becomes unconstrained, since GR is recovered in that limit. For c ∼ 1, the data constrain the effective length scale of the theory to be λ_g ≲ 10⁻⁶ r_H ∼ 5 kpc.
A few comments are in order here. We note that we obtain an upper limit on λ_g/r_H. The reason for this is that if λ_g/r_H becomes too large, we will have a constant vDVZ-like offset between the force experienced by massive and massless particles. If we include also non-linear effects in the analysis, we expect the difference between the force experienced by massive and massless particles to be zero at small r or large λ_g, to reach a maximum value around the Vainshtein radius, and then to approach zero again as r ≫ λ_g. This would mean that our data will allow for either large values of λ_g ≳ 10⁻³ r_H, in which case the galactic scale r_g ∼ 5 kpc would be within the Vainshtein radius where GR is restored, or very small values of λ_g ≲ 10⁻⁶ r_H, where the exponential decay of the Yukawa terms again restores GR. This can be compared to the results of [39], where a lower limit of λ_g/r_H ≳ 0.02 was obtained for the decoupling limit of the massive gravity model of [42].
As noted in sec. 4, m_g and c are not independent parameters in terms of the fundamental model parameters. Using the definition m_b² ≡ βm², where β = β₁ ∼ β₂ ∼ β₃, we can constrain the corresponding length scale λ_b = m_b⁻¹ = λ_g/√β together with c. Results are shown in the right panel of fig. 1. As expected, as c → 0 and c → ∞, the constraints on λ_b weaken, but if c is not too far from unity and λ_b is of the order of galactic scales or larger, we will have sizable contributions from the Yukawa parts of the potentials. For c ∼ 1, the length scale of the theory is constrained to λ_b ≲ 10⁻⁶·³ r_H ∼ 2.5 kpc in the linear approximation. Including higher order terms, the Vainshtein mechanism again opens up the possibility of large values of λ_b, putting galactic scales within their corresponding Vainshtein radii.
The magnitude of an additional Yukawa term in the GR gravitational potential has been constrained to be very small on scales from our Solar system down to millimeter distances [43,44]. Also, the deflection and time delay of light passing close to the limb of the Sun show that the gravitational potential Φ and the spatial curvature Ψ are equal up to a fractional difference of ∼ 10⁻⁵ [45]. Therefore, unless λ_g is in the sub-millimeter range, at Solar system scales (1 AU ∼ 5·10⁻⁹ kpc) we need to be well within the Vainshtein radius of the Sun for the theory to survive, limiting λ_g ≳ 5·10⁻¹² r_H ∼ 0.025 pc.
Although we have obtained the spherically symmetric solutions in a background equivalent to GR, we may assume that locally they are useful approximations also in a more general background. To have accelerating, cosmological concordance-like solutions, we need λ_b/r_H ∼ 1 [5,10]. For such values, the observational probes employed in this paper are well inside their Vainshtein radii, effectively restoring GR.
We can now combine the limits discussed above into fig. 2, where we show the galactic Vainshtein radius in units of r_H (neglecting possible modifications when r ≳ λ_g) as a function of the length scale of the Yukawa decay of spherically symmetric solutions of the bimetric theory. The typical length scale (r_g ∼ 5 kpc) probed by the velocity dispersion and gravitational lensing observations is indicated by the horizontal dotted line. Going from left to right on the x-axis, we make the following observations: • Values of λ_g/r_H ≲ 10⁻¹¹ are ruled out by gravity tests on Solar system scales and below.
• For 10⁻¹¹ ≲ λ_g/r_H ≲ 10⁻⁶, the scale of the galactic observations, r_g, is larger than λ_g, the Yukawa terms become negligible and GR is effectively restored.
• For 10⁻⁶ ≲ λ_g/r_H ≲ 10⁻³, r_g is smaller than λ_g, and the difference in proportionality between the Yukawa terms in the gravitational potential and the spatial curvature invalidates this parameter range when comparing velocity dispersions and lensing deflections.
• For λ_g/r_H ≳ 10⁻³, our observations fall inside the Vainshtein radii of the systems, and the parameter range is ruled in since GR is presumably restored through the Vainshtein mechanism.
• Apart from being compatible with observations on galactic scales, values of λ_g/r_H ∼ 1 also have the possibility of providing an explanation of the apparent accelerating expansion of space on cosmological scales.

Figure 2. Limits on λ_g, including the fact that GR is restored inside the Vainshtein radius and outside the Yukawa length scale λ_g. We have assumed m₁ = 0 and c = 1. Note that for c very different from unity, GR is practically restored at all scales.
Conclusions
In this paper we have studied perturbative solutions for a diagonal ansatz for spherically symmetric solutions in the Hassan-Rosen theory. We have compared these solutions with velocity dispersion and gravitational lensing deflection angle data for elliptical galaxies. Using these data, we have shown that, for the proportionality constant c not too far from unity, the effective length scale of the theory λ_g either has to be small enough for the Yukawa term to be negligible on galactic scales, λ_g ≲ 5 kpc, or large enough for the radii probed to be within the Vainshtein radii of the galaxies, λ_g ≳ 5 Mpc. Values of λ_g ≲ 0.025 pc are ruled out by observations on Solar system scales and below. We note that if λ_g ∼ r_H, i.e. if the length scale of the theory is close to the Hubble radius, then apart from being compatible with data on galactic scales and below due to a presumed Vainshtein radius [37], the HR theory may also provide a mechanism for the apparent accelerated expansion rate of the Universe.
A Second order solutions
The second order solutions involve terms of the form e^(m_g r) Ei(−2m_g r), where Ei is the exponential integral function.
B Velocity dispersions
Since we can decompose the gravitational potentials for massive and massless particles as Φ = Φ_GR + Φ_Y and ϕ = ϕ_GR + ϕ_Y, where the subscript GR denotes the general relativity terms and Y the Yukawa terms of the potentials, and both the velocity dispersion and the gravitational lensing angle depend linearly on these potentials, we can decompose these as well, σ_r² = σ²_GR,r + σ²_Y,r and α̂ = α̂_GR + α̂_Y. The radially dependent velocity dispersion is given by eq. (6.2). The observed velocity dispersion, σ², is then given by a line-of-sight luminosity weighted average of eq. (6.2) over the spectroscopic aperture of size R_max, using the aperture weighting function of the observations.
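A minimal sketch of the aperture averaging step is given below; the exact weighting function of the original analysis is not reproduced here, and a circular aperture with luminosity weighting w(R) = I(R) R is assumed purely for illustration.

```python
from scipy.integrate import quad

# Luminosity-weighted aperture average of the line-of-sight dispersion,
# assuming a circular aperture of radius R_max and weighting w(R) = I(R) * R.
def aperture_average(sigma2_los, surface_brightness, R_max):
    num, _ = quad(lambda R: surface_brightness(R) * R * sigma2_los(R), 0.0, R_max)
    den, _ = quad(lambda R: surface_brightness(R) * R, 0.0, R_max)
    return num / den

# SIS-like toy model: sigma_los^2 is constant and I(R) ~ 1/R (projection of
# nu ~ r^-2), so the average must return the same constant -- a sanity check.
print(aperture_average(lambda R: 6.283, lambda R: 1.0 / R, R_max=1.4))
```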
B.1 General relativity term
For the GR term in the velocity dispersion, we can substitute Φ′ with GM(r)/r² through use of Poisson's equation. The Singular Isothermal Sphere (SIS) model is given by ζ = 0 and γ = δ = 2, giving σ²_GR,r = 2πGρ₀r₀². The GR term in the observed velocity dispersion is then obtained by changing variables of the inner integrals of eq. (B.2) to x = R/r.
B.2 Yukawa term
Now, in principle we can derive the velocity dispersions σ²_Y,r(r) and σ²_Y corresponding to the Yukawa term in the potential. First, we need the Yukawa term in the gravitational potential for the case of a spherically symmetric mass distribution. For a Yukawa-type point source potential, if the mass dM = 4πR²ρ(R)dR is distributed in a thin shell of radius R, the corresponding shell potential follows; in order to get the total potential from a spherically symmetric matter distribution, we integrate over a series of such shells. Next, we differentiate with respect to r. In the simplest case of ζ = 0, γ = δ = 2 and k = 4c²/3, we obtain expressions in terms of x ≡ m_g r and Φ′_GR = 4πGρ₀r₀²/r, where Shi(x) is the hyperbolic sine integral function and Ei(x) is the exponential integral function.
Since the derived expressions do not render the observed Yukawa part of the velocity dispersion, σ²_Y, analytically solvable, we use the following approximation: since the observed velocity dispersion is a weighted average over a few spectroscopic apertures (the only scale in the problem, since the luminosity and matter profiles are given by pure power laws), we can employ a constant correction to σ_r², given by the correction to Φ at a distance equal to the aperture scale. It can be shown numerically that this approximation gives a maximum fractional error in the derived velocity dispersion of ∼ 12 % relative to the exact value, assuming c = 1.
C Gravitational lensing
The gravitational deflection angle is given by eq. (6.5). We make use of a scaled deflection angle α ≡ (D_ls/D_s) α̂, where D_ls and D_s are the angular diameter distances between the lens and the source and between the observer and the source, respectively. The scaled deflection angle fulfills the (spherically symmetric) lens equation β = θ − α(θ), where θ is the angular position of the image with respect to the center of the deflector and β is the angular position the source would have in the absence of the lens (not to be confused with the β_i's of the Lagrangian defining the HR theory). The scaled deflection angle can now be computed from κ(θ), the scaled surface mass density, i.e. the surface mass density in units of the critical surface mass density for lensing. Here, D_l is the angular diameter distance to the lens.
C.1 General relativity term
In GR, it can be shown that the deflection angle is given by α̂_GR(R) = 4Gm(R)/R, where m(R) is the projected mass enclosed within radius R. For the power law density profile, we begin by computing the surface mass density Σ(R) by integrating ρ along the line of sight. For γ = 2, we get

α̂ = 8π²Gρ₀r₀² = 4πσ²_GR,r. (C.11)
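Restoring the factors of the speed of light that are set to one above, the SIS relation (C.11) translates into the familiar Einstein angle θ_E = (D_ls/D_s) 4π(σ/c)². A quick numeric sketch with illustrative SLACS-like values:

```python
from math import pi

C_LIGHT_KMS = 2.998e5  # speed of light in km/s

# Einstein angle of an SIS lens, theta_E = (D_ls / D_s) * 4 pi (sigma / c)^2,
# i.e. eq. (C.11) with the factors of c restored (the text sets c_light = 1).
def theta_E_arcsec(sigma_kms, dls_over_ds):
    theta_rad = dls_over_ds * 4.0 * pi * (sigma_kms / C_LIGHT_KMS) ** 2
    return theta_rad * (180.0 / pi) * 3600.0

# Illustrative values: sigma = 250 km/s and D_ls/D_s = 0.5 give ~0.9 arcsec,
# the right ballpark for image separations of order an arcsecond.
print(theta_E_arcsec(250.0, 0.5))
```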
C.2 Yukawa term
Using eq. (6.5), we can show that the deflection from the Yukawa term in the potential, in units of the deflection angle from the GR term, is a function of B ≡ m_g b alone, the impact parameter in units of m_g⁻¹. This is not analytically solvable, but a fit to this function gives

α_Y / (c² α_GR) = q / (q + B²), (C.13)

where q ≈ 1.45. Writing the total scaled deflection angle as α = α_GR + α_Y, we can write α_Y(θ) as c²π ∫₀^∞ κ(x) x dx times an inner integral over the angle η (C.14). The inner integral can be shown to equal

(π/θ) [g(z, z′) + 1] for z ≥ z′ and (π/θ) [g(z, z′) − 1] for z ≤ z′, (C.16)

where

g(z, z′) = (1 − z² + z′²) / √(z⁴ − 2z²(z′² − 1) + (z′² + 1)²), (C.17)

and

z ≡ m_g D_l θ/√q, z′ ≡ m_g D_l x/√q. (C.18)

Given the power law density profile, the remaining radial integral can be expressed through

h(z, γ) = ∫₀^z (g + 1) z′^(2−γ) dz′ + ∫_z^∞ (g − 1) z′^(2−γ) dz′.

To simplify the analysis, we again assume that the correction can be approximated by a constant rescaling of the lensing potential, ϕ_Y = ϕ_GR e^(−z_E), where z_E = m_g D_l θ_E/√q and θ_E is the Einstein radius of the system. Numerical calculations show that this gives a fractional error in the deflection angle of at most 6 %, for c = 1.
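The fit (C.13) makes the Yukawa suppression easy to evaluate; a small sketch, using q = 1.45 as quoted above:

```python
Q = 1.45  # fit constant from eq. (C.13)

# Fractional Yukawa contribution to the deflection for a point lens,
# alpha_Y / alpha_GR = c^2 * q / (q + B^2), with B = m_g * b the impact
# parameter in units of the Yukawa length lambda_g = 1/m_g.
def yukawa_over_gr(B, c=1.0):
    return c**2 * Q / (Q + B**2)

# The Yukawa part is of order c^2 for b within a Yukawa length of the lens
# and falls off as (lambda_g / b)^2 beyond it.
for B in (0.1, 1.0, 10.0):
    print(B, yukawa_over_gr(B))
```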
Since θ ≈ θ_E, we can write the lens equation θ_E = α(ρ₀r₀^γ, γ, θ_E) in terms of θ_E.

D Fitting to the data

From eq. (C.23), given a measured θ_E and assuming values for γ, m_g and c, we can solve for ρ₀r₀^γ. This is then put into the expression for the observed velocity dispersion σ², and the computed value of σ can be compared to the observed value in order to constrain the parameters of the model. Now, since the approximations employed when calculating the velocity dispersion and the lensing deflection angle are correlated, it can be shown that, when combined, the maximal total fractional error in the derived velocity dispersion, when normalized using the lensing deflection angle, is always less than 10 %. (This error is largest when λ_g ∼ r_g and goes to zero as λ_g → 0 or λ_g → ∞.) Although this error is comparable to the observational errors, it has a negligible effect on the derived constraints on λ_g and c. | 2014-02-13T08:32:07.000Z | 2013-06-05T00:00:00.000 | {
"year": 2013,
"sha1": "c0f3298e079086dbcb5ea90587a3267a2f55fc31",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.1086",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c0f3298e079086dbcb5ea90587a3267a2f55fc31",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
11288630 | pes2o/s2orc | v3-fos-license | Characteristics of primary care office visits to nurse practitioners, physician assistants and physicians in United States Veterans Health Administration facilities, 2005 to 2010: a retrospective cross-sectional analysis
Background Primary care, an essential determinant of health system equity, efficiency, and effectiveness, is threatened by inadequate supply and distribution of the provider workforce. The Veterans Health Administration (VHA) has been a frontrunner in the use of nurse practitioners (NPs) and physician assistants (PAs). Evaluation of the roles and impact of NPs and PAs in the VHA is critical to ensuring optimal care for veterans and may inform best practices for use of PAs and NPs in other settings around the world. The purpose of this study was to characterize the use of NPs and PAs in VHA primary care and to examine whether their patients and patient care activities were, on average, less medically complex than those of physicians. Methods This is a retrospective cross-sectional analysis of administrative data from VHA primary care encounters between 2005 and 2010. Patient and patient encounter characteristics were compared across provider types (PA, NP, and physician). Results NPs and PAs attend about 30% of all VHA primary care encounters. NPs, PAs, and physicians fill similar roles in VHA primary care, but patients of PAs and NPs are slightly less complex than those of physicians, and PAs attend a higher proportion of visits for the purpose of determining eligibility for benefits. Conclusions This study demonstrates that a highly successful nationwide primary care system relies on NPs and PAs to provide over one quarter of primary care visits, and that these visits are similar to those of physicians with regard to patient and encounter characteristics. These findings can inform health workforce solutions to physician shortages in the USA and around the world. Future research should compare the quality and costs associated with various combinations of providers and allocations of patient care work, and should elucidate the approaches that maximize quality and efficiency.
Background
Primary care, an essential determinant of health system equity, efficiency, and effectiveness [1], is threatened by inadequate supply and distribution of the provider workforce [2,3]. As the US primary care system confronts provider shortfalls due to demographic trends, the growing prevalence of chronic disease [4], and low proportions of physicians choosing primary care practice [5], a possible solution is expanded use of physician assistants (PAs) and nurse practitioners (NPs) [6]. This solution is supported by a large body of research demonstrating high quality of NP and PA care [7,8] and by recent research suggesting that higher proportions of NPs in primary care clinics are associated with improved outcomes among patients with diabetes [9,10].
The Veterans Health Administration (VHA), the United States' largest integrated health system, is a leader in primary care innovation. Since the mid-1990s, the VHA has created a model primary care system by implementing strategies to coordinate and integrate care, maintain high standards of preventive and chronic disease care, make primary care accessible to veterans across the country, and provide high quality care while controlling costs [11][12][13][14].
Throughout this transformation, the VHA has explicitly promoted the use of NPs and PAs in primary care. The VHA is the largest employer of both PAs and NPs nationally [15,16]. Although deployment of NPs and PAs varies across regional VHA networks (Veterans Integrated Service Networks, or VISNs), many of these networks have been frontrunners in the utilization of nonphysician providers with respect to both numbers of PAs and NPs and to relative autonomy and responsibility in clinical care [15][16][17]. For example, VHA primary care NPs and PAs are typically responsible for management of their own panels of patients and are generally not required to obtain physician co-signatures for prescriptions, orders, or documentation [18,19]. Evaluation of the roles and impact of NPs and PAs in the VHA is critical in ensuring optimal care for veterans and may inform best practices for use of PAs and NPs in other settings. The VHA is a promising and pertinent system to study because of its unparalleled national system of coded data, the high burden of chronic disease in its patient population, and its relatively expansive use of PAs and NPs. The purpose of this study was to characterize the use of NPs and PAs in VHA primary care and to examine whether their patients and patient care activities were, on average, less medically complex than those of physicians.
Methods
This is a retrospective cross-sectional analysis of national administrative data from VHA primary care encounters (2005 to 2010) listing a physician, NP, or PA as the first provider for the encounter. Other provider types (such as registered nurses, licensed practical nurses, pharmacists, and social workers) were the first provider listed for about 28% of all encounters and were omitted from the analysis. Encounters with physician residents were also excluded, but the number of visits for which a physician resident was listed as the first provider was small (less than 3% of total for all types). After we eliminated all provider types other than physicians, PAs, and NPs, the vast majority (>98%) of encounters in the dataset listed only one provider as involved in the encounter. Therefore, we analysed data for only the first provider listed. Our analysis of trends in the proportion of primary care visits attended by each provider type from 2005 to 2010 is based on 9.6 million to 10.6 million encounters from each year. For all of the other analyses, we used only 2010 data, comprising 10.6 million encounters.
Variables analysed by provider type included patient age, sex, race, VISN, visit primary diagnosis by International Classification of Diseases (ICD-9) code, procedures by Current Procedural Terminology (CPT®) code, and comorbidity score. Encounter primary diagnoses (ICD-9 codes) were aggregated into 288 categories using the Healthcare Cost and Utilization Project Clinical Classification Software [20], and then further categorized into 30 clinical categories by our team. The comorbidity score system used was that of the diagnostic cost groups (DCG), which standardizes risk compared with the average Medicare patient (DCG score = 1), where a score >1 indicates that the patient studied has a higher health risk than the average Medicare patient. This score was pre-calculated for each patient by VHA health services researchers and was obtained through the VHA Information Resource Center (VIReC).
Statistical analysis was descriptive and accomplished using SAS Version 9.2 (SAS Institute, Cary, NC). The extremely large size of our dataset produced highly precise estimates, even for differences of trivial magnitude and no clinical consequence. For this reason, and because our approach to the analysis was descriptive (rather than modelling), we chose to present summary statistics without confidence intervals or estimates of statistical significance.
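The descriptive summaries were produced in SAS; the fragment below sketches the same kind of tabulation in Python, with hypothetical column names standing in for the VHA encounter extract.

```python
import pandas as pd

# Hypothetical stand-in for the VHA encounter extract; the real analysis was
# run in SAS 9.2 on millions of rows, and these column names are illustrative.
encounters = pd.DataFrame({
    "provider_type": ["MD", "NP", "PA", "MD", "NP", "MD"],
    "patient_age":   [64,   61,   60,   67,   59,   63],
    "dcg_score":     [0.95, 0.80, 0.78, 1.10, 0.85, 0.90],
})

# Share of encounters attended by each provider type
print(encounters["provider_type"].value_counts(normalize=True))

# Mean patient age and DCG comorbidity score by provider type
print(encounters.groupby("provider_type")[["patient_age", "dcg_score"]].mean())
```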
This study was approved by the Durham Veterans Affairs Medical Center Institutional Review Board, which found that it complied with ethical and regulatory standards.
Results and discussion
Trends and numbers of patient encounters by provider type

A substantial portion (29%) of VHA primary care encounters are with PAs and NPs. Nurse practitioners are more prominent than PAs in VHA primary care, attending approximately twice as many visits as PAs (19.2% versus 8.4% in 2010). This mirrors the non-VHA distribution, since larger numbers of NPs than PAs practice in primary care [21]. Our study cannot determine whether the predominance of NPs over PAs is due to supply factors, such as possible PA preference for subspecialty practice, or to demand factors, such as preferential recruitment of NPs for primary care positions.
The annual number of VHA primary care encounters involving the three provider types increased from 9.6 million to 10.6 million between 2005 and 2010. Almost all of this increased workload was absorbed by physicians, whose annual primary care encounters increased from 6.7 to 7.7 million. The percentage of total encounters attended by physicians increased from 69.8% to 72.5% over the six years studied, with corresponding minor decreases in the percentages seen by NPs and PAs (Figure 1).
Regional variation in use of NPs and PAs
The use of NPs and PAs varies widely by regional network (VISN), with the two provider types together attending as few as 13% (VISN 21) and as many as 41% (VISN 2) of primary care encounters in 2010 (Figure 2). In some regional networks, such as VISN 2, both NPs and PAs see relatively large numbers of patients (27% of encounters for NPs and 14% for PAs). In most VISNs, NPs attend substantially more encounters than do PAs, up to about four times as many in VISN 20 (31% for NPs and 8% for PAs). However, in two VISNs (VISN 6 and 17), PAs attend slightly more visits than NPs (12% and 11% for PAs versus 10% and 8% for NPs, respectively). We did not examine variability at the facility level, which may also be extensive. This variability may provide an opportunity for comparative research across a spectrum of PA and NP use.
Patient and encounter characteristics
In 2010, the distribution of patient age, sex, and race was fairly constant across the provider groups (Table 1). The mean age of patients whose visits were attended by physicians (62.8 years) was minimally higher than that of patients seen by NPs (61.7 years) or PAs (61.1 years). Nurse practitioners saw slightly more women (10% of patient encounters) than did PAs (6.7%) and physicians (6.6%). Slightly more visits to physicians and NPs were by patients from minority groups (21% and 20%, respectively) compared with visits to PAs (18%). Differences in proportions of encounters with patients of racial and ethnic minorities may be due to geographical differences in PA and NP use. The purpose of the visit varies by provider type, with PAs seeing more patients for physical examinations to determine eligibility for benefits (9%) than physicians (3.4%) or NPs (5.2%). Physician assistants also saw more unscheduled patients (5.3%) than did physicians (4.2%) or NPs (4.5%).
Nurse practitioner and PA patients had slightly lower DCG complexity scores than physician patients (physicians, 0.89; NPs, 0.84; PAs, 0.82). The differences in the DCG scores are quite small compared with the standard deviation of these measures, suggesting that the scores can be considered similar across the three provider groups. All three groups saw patients with lower DCG scores than the average Medicare patient, probably because the VHA population includes many people in the under-65 age group. The finding of only small differences in this measure of patient complexity challenges the prevailing notion that NPs and PAs see patients who are less medically complex than those cared for by physicians. Since our study did not address referral rates by provider type, we cannot assess whether PAs or NPs were more likely to refer complex patients to specialists. Analysis of referral rates and appropriateness of referrals will be important in future evaluations addressing both quality of care and cost efficiency by provider type.

The most commonly seen 2010 primary visit diagnoses were similar across provider groups (Figure 3). The two leading diagnoses for all provider types were hypertension and musculoskeletal conditions. For physicians and NPs, the third most common diagnosis was diabetes mellitus, but for PAs the third most common diagnosis was "general medical examination", followed by diabetes mellitus. Physician assistants had notably more visits in the category of "medical examination" (12% of all visits to PAs) than NPs (8.5%) and physicians (5.2%). For all other diagnoses, the proportion of each provider type's visits agreed within 2% (absolute).
Procedure codes for patient visits were heavily concentrated in the evaluation and management (E/M) categories, particularly for established patients. Physician assistants performed more disability evaluations and saw more new, as opposed to established, patients for E/M encounters than did physicians. In addition, PAs had correspondingly fewer encounters with established patients than did physicians or NPs. Nurse practitioners fell between physicians and PAs on numbers of encounters in these three categories (established patients, disability evaluations, and new patients). Within encounters for established patients, physicians staffed slightly more visits towards the more complex end of the spectrum than did NPs or PAs (Figure 4). For new patients, PAs attended higher proportions of the most complex encounters (Figure 5).
Overall, NPs, PAs, and physicians filled similar roles in VHA primary care clinics, although there were some differences in patient complexity and purpose of visits. The similarities in the patterns of patient encounter characteristics across provider types suggests that NPs and PAs function more as physician substitutes than as physician complements [8] in VHA primary care. Both provider types, however, have found specific patient care niches. Although the proportion of women patients in the VHA remains small, NPs attended more visits with these female patients. The finding that PAs attended more unscheduled visits suggests that PAs may often be used to staff walk-in or same-day appointment sections of primary care clinics. This deployment of PAs could also explain why they saw proportionately more new patients with higher complexity, since ill veterans who present to obtain care for the first time may be routed through sectors of the practice set aside for unscheduled appointments. Physician assistants and, to a lesser extent, NPs also saw more visits for the purpose of determining benefit eligibility than did physicians. While these eligibility visits are detailed and are important to veterans' financial futures, they are routine in nature and generally do not address emergent conditions. Therefore, assigning these visits to less expensive and less highly trained providers may be an efficient use of human resources.
Study strengths and limitations
Our results are strengthened by the high quality of the medical record data we used. The data are national in scope, reflecting the experience of veterans across the country. Data were recorded as part of routine administrative processes at or near the time of patient encounters, removing recall as a source of bias. Perhaps most importantly, PA and NP providers within VHA directly document their own patient encounters, so our analysis did not suffer from the common practices, such as billing "incident to" the physician, which can obscure PA and NP patient care activities in administrative datasets.
It is possible that PAs and NPs saw patients jointly with physicians more than the data reflect. The scarcity (<2%) of encounters that coded multiple providers of interest (physician, PA, NP) may be an artefact of routine practices in which the documenting provider does not code other providers who may have seen the patient. This practice may also explain why care by medical residents is not well-represented in the data. Given the substantial teaching mission of the VHA, physician resident participation may have been much larger than the 3% of visits for which a resident was listed as the primary provider. The large regional differences that we found in the use of NPs and PAs in VHA primary care could influence our results. As we discussed, this regional variation in NP and PA use probably affects the race and ethnicity differences that we found in the proportions of patients seen by each provider type. These regional variations could mask differences that are not apparent in our analysis.
The generalizability of our results is influenced by a number of factors. Most VHA providers are salaried, and may therefore behave differently than providers in the private sector, whose income may depend on patient and procedure volume. Moreover, VHA patients have a higher burden of chronic disease than the general US population. However, information about the use of NPs and PAs in caring for a population with a high prevalence of chronic disease can inform workforce planning for other similar settings. This is important for health workforce policy, since chronic disease accounts for over half of US healthcare expenditure [22].
Future research
While our study elucidates patient care activities of NPs and PAs in primary care in the VHA, future research should establish which allocations of labour maximize quality and efficiency. The large variation that we found in the magnitude of PA and NP use across regional VISN networks suggests that there may also be variation in the pattern of NP and PA use. This variation, while hidden within our nationally aggregated results, could present opportunities for research on the best use of PAs and NPs through comparison of use and outcomes across facilities or VISNs that use PAs and NPs differently. The VHA primary care data also support analysis at the team level, which was beyond the scope of this project but which could support important analyses of the effects of team structure and composition on outcomes.
Conclusions
Primary care physician shortages currently exist or are expected around the world. In response, many nations are exploring or developing roles for nonphysician providers, and information about current primary care use of NPs and PAs is highly relevant to those endeavours.

*Only visits to physicians, nurse practitioners, and physician assistants were included. Cardiovascular disorders exclude hypertension since hypertension is a separate category. COPD = chronic obstructive lung disease.
Our study describes a large integrated health system that uses NPs and PAs to fill patient care roles similar to those of physicians. These results demonstrate that a highly successful nationwide primary care system relies on NPs and PAs to provide over one quarter of primary care visits to a patient population with a high prevalence of chronic disease. Future research should compare the quality and costs associated with various combinations of providers and allocations of patient care work, and should elucidate the approaches that maximize quality and efficiency.

*Only visits to physicians, nurse practitioners, and physician assistants were included. Bars represent the percentage of visits to each provider type that are within the indicated CPT® code. | 2014-10-01T00:00:00.000Z | 2012-11-13T00:00:00.000 | {
"year": 2012,
"sha1": "7ad774d220c6e56c6f59c393de47bbb4e6c4a9ed",
"oa_license": "CCBY",
"oa_url": "https://human-resources-health.biomedcentral.com/track/pdf/10.1186/1478-4491-10-42",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b551dc745aed7c421847a0e913fa6122ea4687a8",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268696519 | pes2o/s2orc | v3-fos-license | The Influence of Chitosan on the Chemical Composition of Wines Fermented with Lachancea thermotolerans
Chitosan exerts a significant influence on various chemical parameters affecting the quality of wine produced using multiple strains of Lachancea thermotolerans. The impact of chitosan on these parameters varies depending on the specific strain studied. We observed that, under the influence of chitosan, the fermentation kinetics accelerated for all examined strains. The formation of lactic acid increased by 41% to 97% across the studied L. thermotolerans strains, depending on the specific strain. This effect also influenced acidity-related parameters such as total acidity, which increased by 28% to 60%, and pH, which experienced a decrease of over 0.5 units. The consumption of malic acid increased by 9% to 20% depending on the specific strain of L. thermotolerans. Nitrogen consumption also rose, as evidenced by all L. thermotolerans strains exhibiting residual values of Primary Amino Nitrogen (PAN) below the detection limit, and ammonia consumption increased by 90% to 100%, depending on the strain studied. However, certain parameters such as acetic acid, succinic acid, and glycerol showed contradictory results depending on the strain under investigation. In terms of volatile composition, chitosan supplementation led to increased production of i-butanol by 32% to 65%, 3-methylbutanol by 33% to 63%, and lactic acid ethyl ester by 58% to 91% across all studied strains of L. thermotolerans. Other analyzed aroma compounds exhibited varying changes depending on the specific strain of L. thermotolerans.
Introduction
The use of non-Saccharomyces yeast species in the field of enology has witnessed a significant surge in recent decades [1,2]. These yeast species, and in some cases specific strains within them, exhibit distinct abilities that differ from the widely used Saccharomyces cerevisiae in winemaking. These abilities have been demonstrated to have a positive impact on various wine quality parameters, including aroma compounds, acidity, polysaccharide concentration, glycerol content, final ethanol levels, and food safety. Several non-Saccharomyces species have been extensively studied from a scientific perspective, including Torulaspora delbrueckii, Metschnikowia pulcherrima, Hanseniaspora uvarum [3], Hanseniaspora vineae, Schizosaccharomyces pombe, Pichia kluyveri, and Lachancea thermotolerans [4][5][6][7][8]. The growing interest in these yeast species has prompted yeast manufacturers to include them in their commercial product offerings [9]. Consequently, winemakers worldwide can now capitalize on the advantages presented by non-Saccharomyces yeasts in their winemaking practices.
Vinification
Fermentations were conducted in triplicate using 100 mL borosilicate bottles, each containing 90 mL of Synthetic Grape Must (SGM), and maintained at a temperature of 25 °C. The SGM was prepared based on its original formulation, with slight modifications [35]. In brief, equimolar concentrations of glucose and fructose at 200 g/L, 3 g/L of malic acid, and 2.5 g/L of potassium tartrate were added to the SGM. The pH was adjusted to 3.5, and the nitrogen content was adjusted to 140 mg/L from amino acids and 60 mg/L from di-ammonium phosphate, according to the original formulation. Prior to fermentation, yeast precultures were incubated in SGM for 24 h at 25 °C under orbital shaking at 150 rpm. For inoculation, the final concentration of the yeast cells was adjusted to 2 × 10⁵ cells/mL (≈OD at λ = 600 nm of 0.02). Fermentation progress was monitored by measuring weight loss every 24 h. After the fermentations slowed down to a weight loss of less than 0.01% per day, the cultures were centrifuged at 7000 rpm for 5 min and then preserved at 4 °C for further analysis. The experimental design essentially consisted of a regular control fermentation using Synthetic Grape Must (SGM) for the studied yeast strains, and an additional trial enriched with 0.5 g/L of the commercial product Bactiless™ (Lallemand, Canada). Bactiless™ contains chitosan of fungal origin, which allows us to study the influence of this compound during the fermentations. Bactiless™ is a 100% natural, non-GMO, and non-allergenic biopolymer derived from the fungus Aspergillus niger. Originally utilized to control bacterial populations in wines, Bactiless™ is reported by the manufacturer to be effective against a wide spectrum of bacteria while not affecting yeast populations.
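A small sketch of the stopping rule described above (harvest once the 24 h weight loss falls below 0.01% of the culture weight); the daily readings are made-up values for illustration only.

```python
# Flag the first 24 h interval in which the fractional weight loss drops
# below 0.01% (1e-4), the criterion used to end the fermentations above.
def first_stationary_interval(daily_weights_g, threshold=1e-4):
    for i in range(1, len(daily_weights_g)):
        loss = (daily_weights_g[i - 1] - daily_weights_g[i]) / daily_weights_g[i - 1]
        if loss < threshold:
            return i
    return None  # still fermenting

weights = [120.00, 118.90, 117.95, 117.30, 117.00, 116.999, 116.999]
print(first_stationary_interval(weights))  # -> 5: centrifuge and store at 4 C
```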
Chemical Parameter Measurements
The quantification of L-malic acid, L-lactic acid, ammonia, and Primary Amino Nitrogen (PAN) was performed using a Y15 Autoanalyzer along with commercially available kits from Biosystems (Barcelona, Spain) [36]. The determination of acetic acid, ethanol, glucose + fructose, succinic acid, total acidity, pH, and glycerol concentrations was conducted using the FTIR autoanalyzer Bacchus 3 (TDI, Barcelona, Spain) [37].
Volatile Compounds
The analysis of esters, higher alcohols, and fatty acids was conducted using the method developed by the Department of Microbiology and Biochemistry at Hochschule Geisenheim University, as previously reported in relevant studies [38].
Statistical Analyses
All statistical analyses were performed using R software version 4.1.2 (R Development Core Team, Vienna, Austria, 2013). The significance level was set at p < 0.05. Analysis of variance (ANOVA) and Tukey post-hoc tests were applied to compare the different groups and values.
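The tests were run in R; an equivalent minimal workflow in Python (with illustrative triplicate values, not the measured data) would be:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA followed by Tukey's HSD at alpha = 0.05, mirroring the R
# workflow described above; the triplicate values below are illustrative only.
lactic_g_l = {
    "control":  [0.95, 1.02, 0.98],
    "chitosan": [2.80, 2.74, 2.91],
}
print(f_oneway(lactic_g_l["control"], lactic_g_l["chitosan"]))

values = np.concatenate([lactic_g_l["control"], lactic_g_l["chitosan"]])
groups = ["control"] * 3 + ["chitosan"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```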
Fermentation Kinetics
The control groups without chitosan required approximately 500 to 600 h for all studied strains of L. thermotolerans to reach a stationary phase, in which weight loss every 24 h was lower than 0.01%. However, the chitosan-treated trials, for the most part, reached this stage between 300 and 350 h, except for the L. thermotolerans strains Concerto and L3, which required around 500 h (Figure 1). In contrast, the S. cerevisiae control group took 1000 h to complete fermentation, whereas the chitosan-treated S. cerevisiae group reached the stationary stage in 350 h. These findings suggest that chitosan consistently accelerated the fermentation kinetics in all cases. This effect could be beneficial in mitigating the risk of sluggish alcoholic fermentation, although it should be noted that faster kinetics may require additional cooling measures at an industrial scale.
Glucose and Fructose
All trials involving L. thermotolerans strains exhibited high final sugar concentrations exceeding 40 g/L (Figure 2). These findings align with previous studies that recommend utilizing L. thermotolerans in conjunction with more fermentative yeast genera, such as Saccharomyces or Schizosaccharomyces, in mixed or sequential fermentations to ensure the complete consumption of residual sugars when the primary objective is to produce dry wine [8]. Although most L. thermotolerans strains displayed increased consumption of glucose and fructose under the influence of chitosan, only three strains exhibited statistically significant differences. Specifically, in the chitosan-enriched trials, the L. thermotolerans strains BD-612, L3, and Octave demonstrated enhanced glucose and fructose consumption by 37%, 30%, and 27%, respectively. In comparison, the S. cerevisiae strain consumed all sugars in the chitosan-treated trial, while the regular control, without chitosan, attained a final glucose and fructose concentration of 17 g/L.
Figure 1. Fermentation kinetics of the variants, gravimetrically measured by total weight loss during the pure fermentation of SGM, for all the studied strains. Solid lines depict the pure fermentation of regular SGM, while dashed lines stand for the pure fermentation of SGM enriched with chitosan.
Ethanol
The final ethanol concentrations varied from 6.45% to 8.65% (v/v) for the investigated L. thermotolerans strains (Figure 3). These results are consistent with prior studies, suggesting that L. thermotolerans species have the ability to ferment between 5% and 10% (v/v) in pure alcoholic fermentations [8]. While some trials exhibited slight increases in ethanol content for specific L. thermotolerans strains, only one strain out of the thirteen we examined demonstrated statistically significant differences. The commercial strain Octave displayed an ethanol production rate that was 0.8% (v/v) higher under chitosan conditions compared to the regular control without chitosan. Chitosan did not have an influence on the ethanol content of the S. cerevisiae control. This observation aligns with a previous study that reported no impact on the ethanol production rates of S. cerevisiae or S. pombe [34].
Figure 2. The final glucose + fructose concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
L-Lactic Acid
The addition of chitosan significantly enhanced lactic acid production in all examined strains of L. thermotolerans (Figure 4). The extent of the increase varied significantly, ranging from 41% to 97%, indicating the occurrence of different reactions depending on the specific L. thermotolerans strain. Among the strains, the commercial strain Excellence exhibited the most pronounced response. The final lactic acid concentrations ranged from 0.19 g/L to 5.19 g/L for the control groups without chitosan, whereas for the fermentations enriched with chitosan, the final values ranged from 2.74 g/L to 14.32 g/L. These findings suggest that, while the initial intention of employing chitosan in the management of L. thermotolerans was to address its reported sensitivity to sulfur dioxide [8], its utilization represents an intriguing option for enhancing the distinct capability of L. thermotolerans to produce lactic acid.
The potential impact of chitosan on the enhancement of lactic acid production can be multifaceted. Firstly, this polymer, characterized by its positively charged groups, possesses the capability to engage in electrostatic interactions with anions derived from dissociated organic acids [39]. Furthermore, the polymer may exert an influence on the permeability of the cellular membrane. Acting as a membrane-binding molecule, its interaction with the cellular membrane has the potential to impede the diffusion rates of weak organic acids towards the intracellular medium [40]. Consequently, this hindrance in diffusion may attenuate the consequential effects of these acids on cellular homeostasis, thereby facilitating heightened production.
Titratable Acidity
The production of lactic acid caused a significant increase in total acidity across all studied strains of L. thermotolerans. The magnitude of this increase varied from 28% (strain A11-612) to 60% (strains BD-612 and Excellence), depending on the specific strain (Figure 5). In trials involving chitosan, the final concentrations of total acidity ranged from 6.97 g/L to 17.34 g/L, while in the regular control groups, they varied from 4.56 g/L to 8.74 g/L. Although the acidification effect was significant, certain values could be excessive and may potentially impede the performance of other yeast partners, such as S. cerevisiae. These results indicate that the influence of chitosan on total acidity must be assessed during selection processes to prevent excessive acidification or compatibility issues with S. cerevisiae. Previous studies have reported increases in total acidity of up to 10.4 g/L in L. thermotolerans strains without the influence of chitosan [8].
pH Values
In the fermentations enriched with chitosan, eleven out of thirteen strains of L. thermotolerans exhibited significant decreases in pH (Figure 6). Only two strains (NG-108 and MJ-311) did not demonstrate statistically significant differences. The regular control groups without chitosan displayed pH values ranging from 3.09 to 3.3, obtained from an SGM with an initial pH of 3.5. This reduction in pH can be attributed to lactic acid formation. Prior studies have reported pH decreases of up to 0.5 units; this closely aligns with the 0.41 unit decrease observed in strain A11-612 [8]. Conversely, the fermentations enriched with chitosan exhibited final pH values ranging from 2.66 to 3.15, representing decreases in pH ranging from 0.35 to 0.84 units. Nine strains displayed pH decreases exceeding 0.5 units (EnartirFermQK, Levulia, Laktia, Excellence, L3, L1, EM-119, BD-612, and A11-612). While pH reduction can be advantageous in certain scenarios, it is important to consider that excessive pH reductions could potentially compromise the alcoholic fermentation process or the performance of associated strains from other species, such as S. cerevisiae, which are essential for completing the alcoholic fermentation process under industrial conditions.
Malic Acid
All the studied L. thermotolerans strains demonstrated reductions in malic acid ranging from 20% to 30% (Figure 7). In all cases, the addition of chitosan intensified the effect of malic acid reduction, resulting in final values ranging from 25% to 45%. The increase in malic acid consumption varied from 9% for the A11-612 strain to 20% for the BD-612 strain. Recent studies highlight the consumption of malic acid as an important secondary selective parameter for L. thermotolerans, following the production of lactic acid [19,20]. This property is particularly desirable when producing red wines, as it is important to minimize the presence of malic acid prior to bottling to avoid unwanted refermentation. While no study has reported any L. thermotolerans strain capable of completely consuming all malic acid in red wine, specific L. thermotolerans strains have been shown to synergize with other oenological microorganisms, such as Oenococcus oeni, Lactiplantibacillus plantarum, or S. pombe, which are capable of consuming malic acid [10]. These results demonstrate that incorporating chitosan can be a compelling approach towards enhancing the desired reduction of malic acid during the production of red wine.
Acetic Acid
Chitosan supplementation led to increased production of acetic acid in 9 out of the 13 strains studied (Figure 8). However, all final concentrations remained well below the detection threshold of 0.6-0.9 g/L [41] that can generally be associated with faulty vinegar characteristics, although this depends on the wine style. The strains NG-108, BD-612, EM-119, and L1 did not exhibit significant differences between the controls without chitosan and their chitosan-treated counterparts. Among the L. thermotolerans strains, the increases in acetic acid varied from 0.1 g/L (strain L3) to 0.4 g/L (strain Excellence). The final concentrations of acetic acid in the fermentations enriched with chitosan ranged from 0.11 g/L to 0.34 g/L, except for strain Excellence, which reached a significantly higher value of 0.46 g/L. Notably, strain Excellence had one of the lowest concentrations in the control group without chitosan. These results highlight the strain-dependent influence of chitosan on acetic acid production, emphasizing the need to consider this factor during the selection process of L. thermotolerans strains. The final concentrations of acetic acid in the control groups without chitosan were very low, below 0.1 g/L, which aligns with previous studies describing L. thermotolerans strains as lower producers of volatile acidity compared to S. cerevisiae [8]. A prior investigation, centered on the impact of chitosan on the non-Saccharomyces S. pombe, revealed a noteworthy 0.1 g/L increase in acetic acid under the influence of chitosan [34].
Succinic Acid
The importance of yeast strain selection has gained prominence in recent years due to the association of succinic acid with the sensory descriptor of minerality, a distinctive parameter in certain wines [10]. S. cerevisiae exhibits strain variability in succinic acid production, ranging from 0.5 g/L to 1.8 g/L (Figure 9). The average succinic acid concentrations in the regular control groups without chitosan varied from 0.34 g/L to 0.72 g/L, while in the fermentations enriched with chitosan, the values ranged from 0.22 g/L to 0.94 g/L for the studied L. thermotolerans strains. However, only three out of the thirteen L. thermotolerans strains displayed statistically significant differences. One strain demonstrated a higher final concentration of succinic acid (strain NG-108), with an increase of 0.32 g/L compared to the regular control without chitosan, while two strains exhibited reduced final concentrations (strains Excellence and Laktia), with decreases of 0.25 g/L and 0.08 g/L, respectively. Under the influence of chitosan, the S. cerevisiae control group exhibited an increase of 0.24 g/L in the final concentration of succinic acid. These findings suggest that only a limited number of strains demonstrate a notable impact on succinic acid production under the influence of chitosan.
Glycerol
Chitosan influenced the glycerol production of six out of the thirteen studied strains of L. thermotolerans (Figure 10). Among these, four strains (NG-108, Concerto, Laktia, and Octave) exhibited moderate but statistically significant increases in their final glycerol concentrations, of 20%, 13%, 11%, and 20%, respectively. On the other hand, two strains (Excellence and Levulia) displayed the opposite effect, significantly reducing their final glycerol concentrations by 47% and 37%, respectively. Previous studies have highlighted substantial variability in glycerol production attributed to the strain of L. thermotolerans [8]. Similar effects have been observed in other yeast genera, such as Saccharomyces and Schizosaccharomyces [34].
Ammonia
Chitosan exerted a significant influence on ammonia consumption for twelve out of the thirteen studied strains of L. thermotolerans (Figure 11). The control groups without chitosan displayed final ammonia concentrations ranging from 22 mg/L to 41 mg/L, while fermentations enriched with chitosan predominantly resulted in final concentrations of 0 mg/L (strains NG-108, A11-612, MJ-311, BD-612, EM-119, L1, L3, Concerto, and Laktia). Strain Excellence exhibited a final ammonia concentration 64% lower than the control without chitosan, while Levulia and Concerto showed reductions of 95% and 79%, respectively. The only L. thermotolerans strain that did not display significant differences between the control and the chitosan-treated version was strain EnartisFermQK. This finding highlights the importance of considering nutrient deficiencies when utilizing chitosan, as it may lead to undesired nutrient shortages, potentially affecting the performance of more fermentative yeasts, like S. cerevisiae, which typically concludes the alcoholic fermentation process in sequential combinations with L. thermotolerans at an industrial scale. To mitigate these potential issues, a second addition of nutrients during the alcoholic fermentation process could be implemented [8].
Primary Amino Nitrogen
All fermentations enriched with chitosan exhibited final concentrations of PAN of 0 mg/L (Figure 12), while the regular control groups without chitosan displayed values ranging from 25 mg/L to 42 mg/L. Despite the significant advantages observed, such as increased acidity and lactic acid production, the higher demand for nutrients such as ammonia and PAN indicates the need for optimized nutrient management to avoid potential technical issues at an industrial scale. This effect may also be of interest during the production of wines with low levels of biogenic amines, as the absence of amino acids would reduce the precursors of such undesirable hazardous compounds.
Volatile Compounds
So far, there have not been any studies exploring how chitosan impacts the aroma of wine, particularly when fermented by L. thermotolerans. This lack of research makes direct comparisons impossible. However, a previous study examined the effects of chitosan on another non-Saccharomyces yeast species, S. pombe [34]. That study reported that a specific strain of S. pombe produced lower levels of 3-methylbutanol, 2-phenylethanol, acetic acid ethyl ester, acetic acid 3-methylbutyl ester, acetic acid 2-methylbutyl ester, acetic acid hexyl ester, acetic acid 2-phenylethyl ester, butyric acid ethyl ester, hexanoic acid ethyl ester, decanoic acid ethyl ester, isovaleric acid, hexanoic acid, and decanoic acid. Conversely, increases in i-butyric acid ethyl ester and propionic acid ethyl ester were observed. While this study provides insights into the potential impact of chitosan on aroma composition in non-Saccharomyces yeast fermentations, further research is necessary to understand its specific effects on L. thermotolerans strains.
i-Butanol
Under the influence of chitosan, twelve out of the thirteen strains of L. thermotolerans exhibited a significant increase in i-butanol production. Strains NG-108, A11-612, MJ-311, BD-612, EM-119, L1, L3, Excellence, Laktia, Levulia, EnartisFermQK, and Octave demonstrated increases of 57%, 59%, 56%, 47%, 39%, 43%, 65%, 50%, 36%, 40%, 34%, and 32%, respectively (Figure 13). In comparison, the S. cerevisiae control group displayed a 57% higher i-butanol production rate in the presence of chitosan compared to the regular control without chitosan. These findings emphasize the potential of chitosan in enhancing i-butanol production in L. thermotolerans strains, as well as its impact on S. cerevisiae. The single prior study detailing the impact of chitosan on a non-Saccharomyces yeast during fermentation [34] notes that a specific strain of Schizosaccharomyces pombe exhibited a 15% higher concentration of i-butanol compared to the S. pombe control undergoing alcoholic fermentation without chitosan.
2-Methylbutanol
Out of the studied strains of L. thermotolerans, only four exhibited statistical differences in 2-methylbutanol production. Specifically, strains Concerto, Laktia, Levulia, and Octave demonstrated significantly lower levels of 2-methylbutanol, with reductions of 39%, 40%, 39%, and 48%, respectively, in their chitosan-enriched fermentations compared to the regular controls (Figure 15). In contrast, the S. cerevisiae control group showed a 63% increase in 2-methylbutanol production in their chitosan-enriched fermentations. These findings emphasize the strain-specific effects of chitosan on 2-methylbutanol production in L. thermotolerans and its contrasting impact on S. cerevisiae. The sole prior study examining the influence of chitosan on a non-Saccharomyces yeast during fermentation [34] reports that a selected strain of Schizosaccharomyces pombe produced an 11% higher concentration of 2-methylbutanol than the S. pombe control undergoing alcoholic fermentation without chitosan.
2-Phenylethanol
Among the L. thermotolerans strains investigated, five strains (A11-612, EM-119, L1, Laktia, and Octave) exhibited significantly higher final concentrations of 2-phenylethanol (Figure 16). Under chitosan conditions, these strains displayed increases of 24%, 32%, 48%, 29%, and 32%, respectively. These findings highlight the ability of chitosan to enhance the production of 2-phenylethanol in select L. thermotolerans strains. The only previous study on the influence of chitosan on a non-Saccharomyces yeast during fermentation [34] highlighted that a specific strain of Schizosaccharomyces pombe presented a 16% lower concentration of 3-methylbutanol compared to the S. pombe control undergoing alcoholic fermentation without chitosan. Notably, unlike L. thermotolerans, no prior study characterizes S. pombe as a significant producer of 2-phenylethanol.
Acetic Acid Ethyl Ester
The addition of chitosan to fermentations resulted in higher final concentrations of acetic acid ethyl ester in all cases. This effect can be attributed to the increased production of acetic acid observed under chitosan conditions, as mentioned previously. Among the thirteen studied strains of L. thermotolerans, nine exhibited statistically significant differences between the regular control and the chitosan-enriched fermentation (Figure 18). Specifically, strains NG-108, MJ-311, BD-612, EM-119, Concerto, Excellence, Levulia, EnartisFermQK, and Octave showed significantly higher levels of acetic acid ethyl ester, with increases of 58%, 40%, 40%, 36%, 39%, 66%, 40%, 33%, and 61%, respectively. In the S. cerevisiae control group with chitosan, there was a significant increase of 41% in acetic acid ethyl ester production. These findings demonstrate the pronounced impact of chitosan on enhancing acetic acid ethyl ester production in L. thermotolerans fermentations, as well as its effect on S. cerevisiae. In an earlier study, it was observed that chitosan led to a 28% decrease in ethyl acetate for S. pombe, while the control group with S. cerevisiae showed a 9% increase [34].
Propionic Acid Ethyl Ester
Among the 13 studied strains of L. thermotolerans, only 5 demonstrated significant statistical differences between their regular controls and chitosan-enriched trials (Figure 19). These strains include NG-108, MJ-311, L1, Excellence, and Octave, all of which exhibited higher concentrations in their chitosan-enriched fermentations compared to the regular controls, with increases of 58%, 43%, 48%, 44%, and 30%, respectively. These findings highlight the strain-specific responses to chitosan and underscore the importance of considering individual strain characteristics when evaluating the effects of chitosan on L. thermotolerans fermentations. Examining the influence of chitosan on the non-Saccharomyces S. pombe, a previous study reported a 28% rise in propionic acid ethyl ester [34].
i-Butyric Acid Ethyl Ester
When enriched with chitosan, nine strains of L. thermotolerans exhibited higher final concentrations of i-butyric acid ethyl ester compared to the regular controls, while one strain showed a decrease (Figure 20). Specifically, strains NG-108, A11-612, MJ-311, BD-612, EM-119, L1, L3, Excellence, and Levulia demonstrated increases of 51%, 33%, 57%, 37%, 42%, 40%, 72%, 64%, and 48%, respectively, in i-butyric acid ethyl ester production. In contrast, strain Octave showed a decrease of 31% in i-butyric acid ethyl ester production when enriched with chitosan. These findings highlight the strain-dependent effects of chitosan on i-butyric acid ethyl ester production in L. thermotolerans fermentations. The sole preceding study investigating the impact of chitosan on a specific non-Saccharomyces strain, namely S. pombe, noted a 33% increase in i-butyric acid ethyl ester [34].
Butyric Acid Ethyl Ester
Among the studied strains of L. thermotolerans, six strains exhibited slightly lower final concentrations of butyric acid ethyl ester when fermented with chitosan (Figure 21). Strains A11-612, BD-612, EM-119, Concerto, Levulia, and EnartisFermQK displayed decreases of 12%, 6%, 5%, 8%, 7%, and 10%, respectively, in butyric acid ethyl ester production during their chitosan-enriched fermentations compared to the regular controls. In contrast, the S. cerevisiae strain showed an increase of 28% in butyric acid ethyl ester production for the trial involving chitosan compared to the regular control. These findings underscore the strain-specific effects of chitosan on butyric acid ethyl ester production in L. thermotolerans fermentations and highlight the contrasting impact on S. cerevisiae. An earlier investigation into the influence of chitosan on the non-Saccharomyces S. pombe documented a 42% decrease in butyric acid ethyl ester [34].
Acetic Acid 3-Methylbutyl Ester
Among the studied strains of L. thermotolerans, only three strains demonstrated significant statistical differences in concentrations of acetic acid 3-methylbutyl ester (Figure 22). These strains, namely NG-108, Excellence, and Octave, exhibited increases of 26%, 22%, and 17%, respectively, when fermented with chitosan. These findings highlight the strain-specific responses to chitosan and underscore the importance of considering individual strain characteristics when evaluating its effects on L. thermotolerans fermentations. In previous research focusing on chitosan's effects on non-Saccharomyces strains, specifically S. pombe, a 49% decrease in acetic acid 3-methylbutyl ester was reported under chitosan influence, while the S. cerevisiae control group exhibited a 29% increase [34].
Acetic Acid 2-Methylbutyl Ester
Seven strains of L. thermotolerans (BD-612, EM-119, L1, Concerto, Laktia, Levulia, and EnartisFermQK) exhibited no detectable production of acetic acid 2-methylbutyl ester when fermented with chitosan, while the controls yielded small amounts ranging from 2 to 30 µg/L (Figure 23). In contrast, the NG-108 strain displayed an opposing effect, producing 5.06 µg/L under chitosan influence compared to 0.89 µg/L for the regular controls. These findings highlight the strain-dependent variations in acetic acid 2-methylbutyl ester production during L. thermotolerans fermentation, emphasizing the impact of chitosan on this particular compound. The preceding research addressing non-Saccharomyces yeasts and chitosan reported a 28% reduction in acetic acid 2-methylbutyl ester for S. pombe during fermentation with chitosan [34].
Conclusions
Chitosan did not exhibit a significant impact on the fermentative power of L. thermotolerans, but it did significantly affect several other kinetic and chemical parameters of oenological relevance. Notably, chitosan demonstrated a significant influence in increasing various acidification-related parameters, including lactic acid production, total acidity, and pH reduction, for all the studied strains of L. thermotolerans. Therefore, chitosan represents an intriguing tool for enhancing the acidification potential of L. thermotolerans. Additionally, chitosan significantly influenced other oenological parameters such as malic acid consumption, PAN, i-butanol, 3-methylbutanol, and lactic acid ethyl ester. The impact of chitosan exhibited considerable strain variability, dependent on the specific L. thermotolerans strain under investigation. These findings highlight the multifaceted influence of chitosan on various oenological parameters, emphasizing the importance of strain selection when employing chitosan in L. thermotolerans fermentations.
Figure 1. Fermentation kinetics of the variants, gravimetrically measured by total weight loss during the pure fermentation of the examined SGM, for all the studied strains. Solid lines depict the pure fermentation of regular SGM, while dashed lines stand for the pure fermentation of SGM enriched with chitosan.
Figure 2. The final glucose + fructose concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 3. The final ethanol concentration percentages (v/v) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 4. The final L-lactic acid concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 5. The final total acidity concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 6. The final pH values of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 7. The final L-malic acid concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 8. The final acetic acid concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 9. The final succinic acid concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 10. The final glycerol concentrations (g/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 11. The final ammonia concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 12. The final PAN concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 13. The final i-butanol concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 14. The final 3-methylbutanol concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 15. The final 2-methylbutanol concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 16. The final 2-phenylethanol concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 17. The final lactic acid ethyl ester concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 18. The final acetic acid ethyl ester concentrations (mg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 19. The final propionic acid ethyl ester concentrations (µg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 20. The final i-butyric acid ethyl ester concentrations (µg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 21. The final butyric acid ethyl ester concentrations (µg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 22. The final acetic acid 3-methylbutyl ester concentrations (µg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
Figure 23. The final acetic acid 2-methylbutyl ester concentrations (µg/L) of the final wines fermented by the examined yeast strains are illustrated, indicating their fermentation in chitosan-free SGM (blue) and chitosan-enriched SGM (orange). Distinct letters are used to indicate statistically significant differences at a significance level of p = 0.05.
| 2024-03-27T15:46:30.849Z | 2024-03-23T00:00:00.000 | {
"year": 2024,
"sha1": "19c502e26c1f413d03c8eff13220623a25c63d1f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/13/7/987/pdf?version=1711202963",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ecb1882ce179f6fcd1b50886e44dccd6ad0b944d",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260415414 | pes2o/s2orc | v3-fos-license | Joint Encryption Model Based on a Randomized Autoencoder Neural Network and Coupled Chaos Mapping
Following an in-depth analysis of one-dimensional chaos, a randomized selective autoencoder neural network (AENN) and coupled chaotic mapping are proposed to address the short period and low complexity of one-dimensional chaos. An improved method is proposed for synchronizing keys during the transmission of one-time pad encryption, which can greatly reduce the usage of channel resources. Then, a joint encryption model based on randomized AENN and a new chaotic coupling mapping is proposed. The performance analysis concludes that the encryption model possesses a huge key space and high sensitivity, and achieves the effect of one-time pad encryption. Experimental results show that this model is a high-security joint encryption model that saves secure channel resources and has the ability to resist common attacks, such as exhaustive attacks, selective plaintext attacks, and statistical attacks.
Introduction
With the gradual development of cryptanalysis and the continuous improvement of computer processing speeds, certain weaknesses of traditional cryptography are gradually being exposed [1]. In order to ensure the secure storage and transmission of information in cloud computing, big data, and other new fields, there is an urgent need to research and design new cryptographic technology and theory. In recent years, chaotic cryptography, as a new encryption technology, has attracted the attention of researchers in various fields at home and abroad [2][3][4]. The two major research directions of chaotic cryptography are stream ciphers and block cipher systems based on chaos theory, and chaos synchronization-centered secure communication systems. Many encryption algorithms based on chaos have been proposed and applied [5][6][7][8][9]. The certainty and unpredictability of chaotic systems meet the basic requirement of information encryption, which is information hiding and recovery [10]. In addition, the sensitivity of chaos to initial conditions conforms to the diffusion characteristics of cryptography [11], and the pseudo-randomness conforms to the confusion characteristics of cryptography [12]. These two characteristics give chaotic stream encryption a naturally superior information-hiding function.
However, early studies also proved that chaotic stream encryption has obvious drawbacks, such as the poor resistance of low-dimensional chaotic encryption systems to exhaustive attacks [13][14][15] and the presence of equivalent keys in the design of the encryption system, which cannot effectively resist known plaintext attacks [16,17]. Therefore, traditional chaotic stream cryptography needs to be improved to enhance security. Continuous-time high-dimensional hyperchaotic systems have been proposed [18,19], whose encryption security is greatly improved because they have higher dimensionality and more complex dynamics, and their iterative equations have more initial values and control parameters. To continuously improve encryption security, researchers are constantly striving to find new high-security methods to incorporate into the encryption step, such as encryption methods based on neural networks. A typical example is the neural network [20,21] constructed from memristors, which not only behaves similarly to a chaotic system but can also form a complex neural network model, produce rich discharge patterns, and offer high security. Ref. [22] designs a flexible image encryption algorithm by using BP neural network compression technology and chaotic mapping and proves its security. From many practical and forward-looking encryption algorithms, it can be seen that an encryption algorithm combining multiple steps and algorithms offers higher security than a single algorithm.
With further demands for security performance, combinations of the one-time pad and chaotic encryption are also being proposed. It has been proven that the one-time pad technique has strong security [23][24][25][26]. The one-time pad chaotic stream encryption proposed by [23,27] is similar in nature to the one-time pad proposed by Shannon, which requires real-time sharing of keys through secure channels or management of complex key systems. However, this way of sharing keys in real time adds an additional communication burden, especially for short messages.
In view of the disadvantages of the chaotic encryption schemes mentioned above, such as insufficient security or low encryption efficiency, two low-dimensional chaotic coupling schemes are proposed to balance security and encryption efficiency. Since the autoencoder neural network (AENN) has enough nonlinear complexity to make up for the lack of nonlinear complexity in chaotic stream encryption, a joint encryption model based on a randomized AENN and an improved chaotic coupling map is proposed.
The contributions of this paper are summarized as follows:
• Based on the auto-learning of AENN, an AENN encryption model with randomized selection is constructed, which enlarges the non-linear complexity and the key space of the encryption algorithm.
• Aimed at the small key space of one-dimensional chaos and the computational complexity of high-dimensional chaos, a new chaotic map coupling an improved Chebyshev map (ICM) and an improved logistic map (ILM) is designed.
• In view of the large amount of channel resources consumed by the traditional one-time pad encryption method, the iterated data of the chaotic map undergo one further iteration and are combined with the number of communications to compute a new initial condition for the map, so that the two ends need to share a key only once while the key is continuously updated without further transmission (one plausible reading of this re-keying rule is sketched below).
After verification through encryption experiments, we conclude that the proposed joint encryption model is a high-security model that saves one-time pad channel resources and can resist common attacks.
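The re-keying step in the last contribution is described only verbally; the sketch below is one plausible reading, assuming the new initial condition is obtained by iterating the final chaotic state once more and folding the communication counter into the fractional part. The function names and the π-based mixing rule are hypothetical, not the authors' specification:

```python
import math

def logistic(x, a=3.99):
    """Stand-in chaotic map; the paper actually couples an improved
    Chebyshev map (ICM) with an improved logistic map (ILM)."""
    return a * x * (1.0 - x)

def next_initial_condition(last_state, session_count):
    """Hypothetical re-keying rule (assumed, not the authors' formula):
    iterate the final chaotic state once more, mix in the number of
    communications, and keep the fractional part so the new initial
    condition stays inside the map's domain (0, 1)."""
    extra = logistic(last_state)
    mixed = extra + session_count * math.pi   # counter-dependent offset (assumed)
    x0 = mixed - math.floor(mixed)            # fractional part -> [0, 1)
    return x0 if 0.0 < x0 < 1.0 else 0.5      # guard against degenerate endpoints

# Both ends apply the same deterministic rule, so after the first shared
# secret no new key material has to cross the secure channel.
x0 = 0.376  # initially shared secret
for session in range(1, 4):
    x0 = next_initial_condition(x0, session)
    print(session, round(x0, 6))
```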
The principles of AENN, ICM, and ILM are introduced in Section 2; in Section 3, the implementation of AENN randomization and the new coupled chaos mapping is described; in Section 4, the encryption and decryption algorithm of the joint encryption model is described; in Section 5, the security performance of the joint encryption model is analyzed from the perspectives of key space, key sensitivity, and resistance to selective plaintext and statistical attacks; in Section 6, the joint encryption model is used to encrypt text and images, respectively, and the experimental results are analyzed; Section 7 summarizes the innovations and their effects and proposes further improvements.
Improved Chaos Map
A nonlinear system that is highly sensitive to small changes in initial conditions is called a chaotic system. A chaotic system has a high sensitivity to control parameters, good pseudo-randomness, ergodicity, and long-term unpredictability, qualities that are similar to the confusion and diffusion characteristics found in cryptography [28]. Using chaotic dynamic behavior to diffuse and confuse plaintext has an avalanche effect, so chaotic stream encryption has more advantages than traditional encryption methods. The security of chaos is related to the complexity of chaotic systems. Common chaotic maps are the logistic, sine, and tent maps, among others [29,30]. These one-dimensional chaotic maps offer small key spaces and limited resistance to brute-force exhaustive attacks, and some chaotic maps have obvious short-period phenomena, making them vulnerable to statistical attacks [31]. To address the above drawbacks of chaotic encryption, an ICM and an ILM are adopted for cross-mapping to improve the key space and space-time complexity of the chaotic system, greatly improving the security of chaotic encryption.
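The sensitivity and unpredictability referred to above are commonly quantified by the largest Lyapunov exponent (LE), which for a one-dimensional map can be estimated as the orbit average of log|f'(x)|. A minimal sketch for the classical logistic map follows, used purely as a stand-in, since the ICM and ILM recurrences themselves are defined in the cited references:

```python
import math

def lyapunov_logistic(a, x0=0.4, n_transient=500, n_iter=5000):
    """Estimate the Lyapunov exponent of x_{k+1} = a*x_k*(1 - x_k)
    as the orbit average of log|f'(x_k)|, where f'(x) = a*(1 - 2x)."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(a * (1.0 - 2.0 * x)) + 1e-300)  # epsilon avoids log(0)
        x = a * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))   # ~0.693 (= ln 2): positive, chaotic
print(lyapunov_logistic(3.2))   # negative: periodic window, not chaotic
```

A positive exponent means nearby orbits diverge exponentially, which is exactly the key-sensitivity property that chaotic stream encryption relies on.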
In order to expand the range of the control parameter µ and increase the mapping complexity, the µ value is modified in [32] and an ICM is defined (Equation (1)), where the parameter µ ∈ (0, 10]. The chaotic bifurcation diagram and Lyapunov exponent (LE) diagram of the Chebyshev map and the ICM [32] are shown in Figure 1. The bifurcation diagram shows the abnormal behavior of the Chebyshev map for µ between 1 and 2, while the ICM exhibits chaotic behavior throughout the entire parameter range. It can also be seen from the LE diagram that the LE of the original Chebyshev map is less than 1 when µ < 1, indicating that the original Chebyshev map is not in a chaotic state when µ < 1. The LE of the ICM is greater than 1 for µ ∈ (0, 10), which indicates that the ICM is in a chaotic state over this range. The maximum LE of the ICM for µ ∈ (0, 10) is 12.63, which is larger than the maximum LE of the original Chebyshev map (2.30), making the ICM more secure than the Chebyshev map.
One-dimensional logistic maps can quickly generate a large number of sequences, but their low complexity results in weak security performance. An ILM is therefore designed in [33] (Equation (2)), where the initial value X_0 ∈ (−1, 1). The premise of cross-coupling different chaotic maps is that they share the same range of values. The range of the ICM is U_k ∈ [0, 1] (with special treatment for U_k = 0 or U_k = 1), so we take the absolute value of the ILM so that X_0 ∈ (0, 1); Equation (2) can then be expressed as Equation (3). Figure 2 shows the bifurcation diagram and LE diagram of the logistic map and the ILM. According to a comprehensive analysis of the bifurcation and LE diagrams, the logistic map is in a chaotic state when a ∈ (3.6, 4), while the ILM is in a chaotic state when a ∈ (0, 1). For a ∈ (0, 1), the maximum LE of the ILM is 0.6812, which is greater than the maximum LE of the logistic map (0.6586). Moreover, the LE of the ILM over the whole value range of a is greater than 0, it has a wider range of chaotic behavior, and its overall performance is better than that of the logistic map.
The sequence generated by chaotic-map iteration is a real number sequence and cannot be used directly for encryption; the real number sequence must be quantized to obtain a bit sequence. Commonly used quantization methods include binary quantization, multiple coarse-graining, integer-remainder quantization, multi-level uniform quantization, and so on. A chaotic sequence quantized by multi-level uniform quantization has good uniformity and correlation characteristics, and it is difficult to reconstruct by inverse iteration. Moreover, each iteration of the chaotic function yields multiple bits, which speeds up the generation of the sequence. To obtain a uniform bit sequence, the Lth power of two is usually used as the quantization coefficient, so the transformation of the chaotic real number sequence {x_k} into the integer sequence {y_k} is

y_k = ⌊2^L x_k⌋,    (4)

where ⌊·⌋ denotes rounding down. The obtained integer sequence is converted into equally long binary strings, which are then concatenated into a binary bit sequence that can be used for encryption.
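As a minimal illustration of this quantization step, the sketch below iterates a one-dimensional map and converts each real output into L keystream bits via Equation (4). Since the defining formulas of the ICM and ILM (Equations (1)-(3)) are not reproduced in this text, the standard logistic map is used purely as a stand-in generator; all function names are illustrative.

from math import floor

def logistic(x, a=4.0):            # stand-in 1-D chaotic map, not the ILM
    return a * x * (1.0 - x)

def chaotic_bits(x0, n_reals, L=8):
    """Iterate the map and quantize each real x_k into L bits via floor(2^L * x_k)."""
    bits, x = [], x0
    for _ in range(n_reals):
        x = logistic(x)
        y = floor((2 ** L) * x) % (2 ** L)   # y_k = floor(2^L * x_k)
        bits.extend(int(b) for b in format(y, f"0{L}b"))
    return bits

print(chaotic_bits(0.1553718438844953, 3))  # 24 keystream bits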
Autoencoder Neural Network (AENN)
Neural networks are used to identify and process latent representations in digital signals by simulating the operation of the human brain [34,35]. The AENN is an unsupervised neural network model, which usually consists of an encoder and a decoder. Unlike other encoder networks, the task of the autoencoder is to map its input back to itself. In theory, the AENN encoder attempts to map the original data to a lower-dimensional representation without losing data information.
The AENN decoder reconstructs the encoded low-dimensional data to restore the original data. In this paper, the AENN encoder is used to encode the data and the decoder to reconstruct it, thus realizing the encoding and decoding of the data. In order to improve the encryption efficiency under the condition of complete decoding, this paper follows the principle of simplicity and effectiveness and chooses an AENN model with the fewest parameters and the simplest structure for encryption. The AENN architecture is shown in Figure 3. As shown in Figure 3, the network structure of the AENN includes an input layer, a hidden layer, and an output layer, wherein the input layer and the middle hidden layer constitute an encoder (Figure 3a), and the middle hidden layer and the output layer constitute a decoder (Figure 3b). It should be emphasized that the middle hidden layer serves as both the output of the encoder and the input of the decoder, and the coding function is embodied through this hidden layer. Suppose there are N input nodes, the ith input node is represented by x_i, and the jth hidden node is represented by z_j. The weight parameter between the ith input node and the jth hidden node is marked as w_ij, and b_h(j) represents the bias of the jth hidden node. Then z_j can be expressed as the weighted sum

z_j = Σ_{i=1}^{N} w_ij x_i + b_h(j).

In the neural network, the value of a hidden node calculated from the upper layer is generally not used directly as the input of the next layer but is first transformed by an activation function. Commonly used activation functions are the sigmoid function, the ReLU function, and so on. Using the sigmoid function, the hidden node can be represented as

h_j = 1 / (1 + e^(−z_j)).

The ReLU function is a piecewise linear function defined as the positive part of its argument. Using the ReLU function, the hidden node can be represented as

h_j = max(0, z_j).

In order to obtain reasonable weight values that fit the relationship between the input layer and the output layer, and to obtain output results with a small loss value, it is necessary to establish a reasonable loss function [36] to evaluate the output results. The mean square error (MSE) [37] is a commonly used evaluation function; with x̂_i denoting the ith reconstructed output, it can be expressed as

E = (1/N) Σ_{i=1}^{N} (x_i − x̂_i)².

The smaller the MSE, the closer the output of the autoencoder is to the input. The training process involves making E → 0.
Gradient descent (GD) is a commonly used optimization algorithm in deep learning. When the neural network is initialized, the objective function J(w, b) is not optimal, and the weights w and biases b are very small random values close to zero. The gradient of the objective function J(w, b) with respect to the parameters w and b points in the direction of fastest increase of the objective function; as long as one steps in the opposite direction of the gradient, the objective function can be optimized. By constantly updating the weights and biases, we can finally find the point with the least error. If, during each update, all gradients were calculated over the entire dataset, this would lead to redundant calculation and slow down training. Thus, to improve the training speed, the gradient can be updated with each training sample, which is the stochastic gradient descent (SGD) method [38]. Using the SGD method, the parameter updates with respect to w and b can be expressed as

w_ij ← w_ij − α ∇_{w_ij} J(w, b),    b_ij ← b_ij − α ∇_{b_ij} J(w, b),

where α is the learning rate, ∇_{w_ij} is the partial derivative of J(w, b) with respect to w_ij, and ∇_{b_ij} is the partial derivative of J(w, b) with respect to b_ij. By constantly updating w_ij and b_ij, the loss function E → 0 and the optimized objective function J(w, b) are obtained.
Joint Encryption Model
Due to the openness of the internet and the convenience of data connections, information protection has become an inevitable demand of contemporary social development. Since the start of the information age, people have been exploring information encryption algorithms to protect the storage and secure transmission of data. Common typical encryption algorithms include asymmetric encryption, represented by RSA (Rivest-Shamir-Adleman); standard encryption, represented by AES (advanced encryption standard); and S-box, chaotic, compression, encoding, and neural network encryption, among others. These algorithms play an important role in the security and encryption efficiency of different encryption objects and application scenarios to protect data security. The advantages and disadvantages of the above typical encryption algorithms are summarized in Table 1. From Table 1, it is clear that different single encryption algorithms have different advantages and disadvantages, so strengths should be used to make up for weaknesses by combining different encryption algorithms to increase security. For example, a joint encryption scheme is proposed in [39], which encrypts the original image through an improved AES algorithm with round keys generated by a chaotic system. This joint encryption scheme not only reduces the time complexity of the algorithm but also increases its diffusion ability, and it has both plaintext sensitivity and key sensitivity. A combined encryption algorithm not only enlarges the key space but also inherits the advantages of the two constituent algorithms, which greatly improves encryption security. In recent years, chaos combined with other encryption algorithms, such as RSA encryption [40,41], neural network encryption [22,42,43], compression encryption [44,45], and encoding encryption [46,47], has further improved security performance.
Generally speaking, a larger key space means more permutations and combinations of keys and hence greater difficulty for exhaustive cracking. The combined key space of two or more encryption algorithms is larger than any single key space. A special case is neural network encryption. What sets neural network encryption apart from traditional encryption algorithms is that, instead of performing mathematical or algebraic operations on the key and data, it uses a pre-set neural network model that is continuously trained on the encryption and decryption of the data, yielding an encryption and decryption model with many neuron parameters. Because the neuron parameters of each trained model are unique, they can be used as part of the key space, giving neural network encryption algorithms a massive key space.
Among them, the most typical is the adversarial neural network encryption model proposed in [48]. Through adversarial training, this model eliminates the possibility that eavesdroppers can use a large number of ciphertexts to train a decryption network. However, the biggest disadvantage of this model is that the decryption model has a certain decryption failure rate. In [49], an improved scheme is proposed: the training model uses two optional keys to encrypt and decrypt data, forcing eavesdroppers to discriminate between keys. The results of adversarial training give the model resistance to chosen ciphertexts. However, this model still does not solve the decryption failure rate problem. Therefore, using neural network encryption requires solving the decryption failure problem, and using bytes as the unit of AENN training can achieve 100% decryption.
Realization of AENN Random Selection and New Coupled Chaos Mapping
The AENN randomization and the improved chaotic coupling mapping used in the joint encryption model are described in detail in Sections 3.1 and 3.2 below.
Realization of AENN Randomization
There are two stages in the implementation of AENN random selection. Stage 1: construct the AENNs. The designed AENN encoding network is shown in Figure 4. Two hidden layers are used to encode the 0-1 bit data into three floating-point numbers (float type) as the output result. Encoding is carried out in bytes, meaning every eight bits of binary code are encoded into three floating-point numbers. Because each output floating-point number is a 32-bit binary value with strong expressive power, while an input byte has only 2^8 possible values, representing the 2^8 input space within the 2^32 output space is lossless. According to the description in Section 2.2, the eight bits of a byte belong to the input layer, and the three floating-point numbers belong to the middle layer, also considered the output eigenvalues of the encoder. The three floating-point numbers output by the encoder cannot be transmitted directly as codes and need to be quantized. The floating-point quantization process is shown in Algorithm 1. Its essence is to convert the normalized floating-point number into a binary representation by repeatedly multiplying the decimal part by 2 to obtain integer bits. The result of the operation retains a finite precision of 8 bits, and finally the 8 bits are combined into one byte as the quantized output.
Algorithm 1 Decimal to byte
Input: A normalized floating-point number X and the max bit length L = 8 (a byte) for conversion. Output: Byte R.
1: i ← 0, S ← ""
2: while i < L do
3: X = X × 2
4: b = int(X) // the integer part is the next binary fraction bit
5: S = S + b
6: X = X − b
7: i = i + 1
8: end while
9: R = byte(S) // binary decimal parts converted into bytes
10: return R
To correctly learn the input information, the hidden layers need enough neurons, and the number of neurons is gradually reduced in the encoding process to extract data features. In this paper, hidden layer 1 adopts about 64 nodes, which is more reasonable than the 8 input neurons, and hidden layer 2 adopts about 32 nodes. Such a network structure can both correctly express the input information and quickly reduce the number of neurons.
Another important component of the AENN coding network is the activation function; choosing an appropriate one is very important for neural network training. We used the ReLU function as the activation function in hidden layers 1 and 2. This is because the derivative of the ReLU function is a piecewise linear function and, having no saturation problem (a function is said to be saturated where its slope is 0), it does not easily cause the gradient to vanish. Hidden layers 1 and 2 have more neurons, so using the ReLU function avoids a lot of exponential operations and improves computational efficiency. The middle layer, as the encoder output, must have strong expressive power to better express the characteristics of the input data. Therefore, we do not use the ReLU function in the middle layer but the sigmoid function, which is fully differentiable, as its activation function. This combined activation scheme both increases the training speed and prevents the majority of neurons from dying (outputting 0). The AENN reconstruction network is shown in Figure 5, which adopts a network structure symmetrical to the encoding network. The output eigenvalues of the AENN encoder are used as the input of the decoder. In Algorithm 1, the eigenvalues (three floating-point numbers) of the encoder output have been quantized into three bytes, so the bytes need to be dequantized before being input into the decoder. The inverse quantization process is shown in Algorithm 2. First, the byte is converted into a bit sequence, which is used as the binary weights of the decimal part, and then it is converted into a floating-point number by the method of "weighted addition". Then, taking the three floating-point numbers as input, eight floating-point numbers are obtained as outputs, which are rounded and forcibly converted to bit data, i.e., eight bits. Hidden layer 3 has the same number of nodes as hidden layer 2, and hidden layer 4 the same as hidden layer 1. Similarly, in order to speed up training and prevent overfitting, this paper does not use an activation function in hidden layer 3, and the ReLU function is used as the activation function in hidden layer 4 and the output layer.
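A minimal PyTorch sketch of the encoder-decoder just described follows. The layer widths (8-64-32-3 and back), activations, loss, and optimizer follow the text, while the training-loop details (iteration count, learning rate, and training on all 256 byte patterns) are assumptions rather than the authors' settings.

import torch
import torch.nn as nn

class AENN(nn.Module):
    def __init__(self, h1=64, h2=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(8, h1), nn.ReLU(),      # hidden layer 1
            nn.Linear(h1, h2), nn.ReLU(),     # hidden layer 2
            nn.Linear(h2, 3), nn.Sigmoid(),   # 3-float middle layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, h2),                 # hidden layer 3 (no activation)
            nn.Linear(h2, h1), nn.ReLU(),     # hidden layer 4
            nn.Linear(h1, 8), nn.ReLU(),      # output: 8 values, rounded to bits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train the network to reproduce all 256 possible bytes (MSE loss, SGD updates).
data = torch.tensor([[int(b) for b in format(n, "08b")] for n in range(256)],
                    dtype=torch.float32)
model = AENN()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
for _ in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(data), data)
    loss.backward()
    opt.step()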
The AENN has an auto-learning characteristic, i.e., the training process is the process of learning automatic encoding and decoding, and the weight parameters obtained can be regarded as the key of the encryption algorithm. For a single AENN coding network there are equivalent keys, because each training of the neural network almost always obtains different weight parameters; i.e., there are many equivalent keys. Therefore, the security performance of a single AENN encoding network is low. The premise of obtaining an equivalent key is to obtain enough strongly correlated input-output coding pairs and to train a similar network on them. The stronger the randomness of the autoencoder, the higher the security. To enhance the randomness of the encoder, multiple AENN networks can be used for selective coding. Following the simple and effective principle, this paper takes four AENN networks as an example for selective coding. The input-output coding pairs obtained by the diversified AENN structure have weak correlation, so the equivalent key cannot be obtained, and such pairs may not even be trained effectively. A variety of AENN structures greatly increases the key space and nonlinear complexity. To give the AENNs greater differentiation and reduce their correlation, the numbers of nodes in the two hidden layers are fine-tuned, yielding four differentiated AENNs, as shown in Table 2.
Algorithm 2 Byte to decimal
Input: Byte R. Output: A floating-point number X.
1: i ← 0, X ← 0
2: S = bin(R) // byte is converted to binary
3: for i = 0 until len(S) do
4: v = int(S[i]) × 2^(−(i+1)) // 'int' means the function of integer conversion
5: X = X + v
6: end for
7: return X
The random selection of the AENN structure depends on chaotic sequences, because a chaotic sequence has high randomness and unpredictability and it is easy to realize synchronization between the two sides of the communication. The AENN selection sequence is generated using the ILM, and the initial value entered is marked as X_0.
The chaotic sequence obtained by the ILM is a series of random numbers {x_k} in the interval (0, 1), of the form {0.15537184388449532, 0.21636839614257042, 0.8841541779975159, 0.15110890584401204, 0.5299402990385715, ...}. To be used in encryption, the chaotic sequence must be quantized. The quantization method adopts the multi-level uniform quantization introduced in Section 2.1. If the L value of Equation (4) is taken as 2, then the real number sequence {x_k} is converted to the integer sequence {⌊2² x_k⌋}. Because x_k ∈ (0, 1), k = 0, 1, 2, ..., the range of values of the integer sequence is {0, 1, 2, 3}. The discrete integer value determines the selection of the AENN; a sketch of the two conversion algorithms and of this selection step is given below.
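The following sketch, assuming the reconstruction of Algorithms 1 and 2 given above is faithful, renders the two byte/decimal conversions and the L = 2 selection step in Python; the helper names are illustrative.

def float_to_byte(x, L=8):          # Algorithm 1: decimal to byte
    bits = 0
    for _ in range(L):
        x *= 2.0
        bit = int(x)                # integer part is the next fraction bit
        bits = (bits << 1) | bit
        x -= bit
    return bits                     # one byte of quantized precision

def byte_to_float(r, L=8):          # Algorithm 2: byte to decimal
    x = 0.0
    for i in range(L):              # weighted addition of the fraction bits
        x += ((r >> (L - 1 - i)) & 1) * 2.0 ** (-(i + 1))
    return x

def aenn_index(x_k):                # selection via Equation (4) with L = 2
    return int(4 * x_k) & 3         # floor(2^2 * x_k) in {0, 1, 2, 3}

b = float_to_byte(0.6180339887)
print(b, byte_to_float(b), aenn_index(0.884154))   # prints: 158 0.6171875 3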
The randomly selected AENN encoding and decoding processes are shown in Figure 6. As the number of encoders increases, the hybrid coding structure becomes more complex. Therefore, choosing an appropriate number of encoders not only prevents the encoding results from being easily learned but also keeps the operation speed reasonable. Here, four encoders are chosen to mix and encode in parallel, because security is enhanced without the scheme becoming too complicated. One byte is encoded and quantized into three bytes, so the encoding space of a single byte is 2^24. Here, the encoding process is the encryption process; consequently, the neuron parameters in the encoder are all keys. Still, since an attacker can attempt brute-force cracking by enumerating the encoding table, the key size of the encoder can be accurately described only by the encoding space. Moreover, the generation of the randomized selection sequence depends on the ILM. Therefore, X_0 is one of the encryption keys, which needs to be shared by both parties before using AENN coding and decoding.
A New Type of Coupled Chaos Mapping
In order to increase the complexity of chaotic maps, different low-dimensional chaotic maps can be coupled. The coupling of the ICM and ILM introduced in Section 2.1 is used here to improve the complexity.
Figure 7 shows the cross-mapping process of the two types of chaos, comprising two branches. The cross-mapping process is as follows (a structural sketch in code is given after this list):
1. In the left branch, set the control parameter µ1 and initial value U1 of the ICM, and obtain a real number y1 after the ICM operation;
2. In the left branch, set the control parameter a = 1 of the ILM, input the real number y1 into the ILM, and calculate a real number y2 to complete the first cross-mapping of the left branch;
3. If the number of cross-mappings n > 1, the real number y2 is input into the ICM of the left branch, and steps 1 and 2 are repeated until a real number y2n is output after n mappings and added to the real number sequence f0;
4. In the right branch, set the control parameter a = 1 and the initial value X1 of the ILM, and obtain a real number z1 after the ILM operation;
5. In the right branch, set the control parameter µ2 of the ICM, input the real number z1 into the ICM, and calculate a real number z2 to complete the first cross-mapping of the right branch;
6. If the number of cross-mappings m > 1, the real number z2 is input into the ILM of the right branch, and steps 4 and 5 are repeated until a real number z2m is output after m mappings and added to the real number sequence f1;
7. Every time the Y map and Z map produce an output, the real numbers obtained are not only added to the sequences f0 and f1 but are also used as the initial values of the next iteration of their respective branches. The two branches iterate separately until real number sequences f0 and f1 of sufficient length are obtained;
8. Set the L value in the quantization Equation (4), quantize f0 and f1 into two bit sequences, respectively, and perform the XOR operation to obtain the bit sequence f2.
It should be noted that the ILM is a full map at a = 1, while the ICM is a full map over the whole range of µ, so the control parameter a of the ILM is set to 1, and both control parameters µ1 and µ2 of the ICM can be used as keys. In Figure 7, the left-branch ICM and ILM form a new map, the Y map, and the right-branch ILM and ICM form a new map, the Z map. By setting the cross-mapping counts n and m of the Y map and Z map to different values and calculating the LE, as shown in Figure 8, it can be seen that the LE of the new map is greater than that of the single ILM and ICM at equal µ (Figures 1d and 2d), showing that cross-mapping improves chaos performance. When n and m are greater than or equal to 2, the LE of the new map tends to be stable and is greater than the LE when n and m are equal to 1. Therefore, the numbers of cycles n and m of the new chaos are set to 2, which not only improves the chaos performance but also reduces the amount of calculation.
The cross-mapping method effectively improves the chaos performance. We list the maximum LE, initial conditions, and parameter ranges of new chaotic maps proposed in recent years in Table 3. It can be seen that the cross-mapping proposed in this paper has a very high LE and, through cross-mapping, it not only has multiple chaotic initial values but also an increased number of chaotic control parameters.
Since the correlation analysis of sequences can clearly describe the linear relationship between two sequences, we use the Pearson correlation coefficient to calculate the similarity of sequences generated by chaotic cross-mapping at different times. The Pearson correlation coefficient is calculated as [55]

ρ_{X,Y} = cov(X, Y) / (σ_X σ_Y) = E[(X − µ_X)(Y − µ_Y)] / (σ_X σ_Y).

The value range of ρ_{X,Y} is [−1, 1]. When the value is 1, there is a complete positive correlation between the two series; when the value is −1, there is a complete negative correlation; when the value is 0, there is no linear relation between the two series. To test the correlation of the sequences generated by chaotic cross-mapping, the Pearson correlation coefficients of sequences generated at different times are calculated and illustrated in Figures 9 and 10. It can be seen from Figures 9 and 10 that the autocorrelation and cross-correlation coefficients of the sequences are both close to 0 and evenly distributed, indicating that the sequences have no correlation.
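A quick way to check such correlation properties, assuming NumPy, is shown below; uniform pseudo-random sequences stand in for the cross-mapped sequences.

import numpy as np

s1 = np.random.default_rng(1).random(10_000)   # stand-ins for two chaotic
s2 = np.random.default_rng(2).random(10_000)   # cross-mapped sequences
print(np.corrcoef(s1, s2)[0, 1])               # cross-correlation ~ 0
print(np.corrcoef(s1[:-1], s1[1:])[0, 1])      # lag-1 autocorrelation ~ 0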
Encryption and Decryption Algorithm of the Joint Encryption Model
Encryption Algorithm
The purpose of this system is to achieve efficient encrypted communication, and the plaintext messages we transmit are processed and analyzed in binary format. For each frame of data, we first encode it byte by byte to obtain the corresponding floating-point data, and then quantize the floating-point data to obtain binary data. Finally, the chaotic maps introduced in Section 2.1 are coupled, as per the method described in Section 3.2, to generate a chaotic sequence, which is combined with the encoded binary data using bitwise XOR. The result of this diffusion is transmitted as the ciphertext.
The encryption process is described as follows:
1. Generate the AENN selection sequence X by iterating the ILM from the shared initial value X_0 and quantizing the result with L = 2, as described in Section 3.1;
2. According to the chaotic sequence X of step 1, the corresponding AENN (from 0 to 3) is selected to encode the plaintext bytes to obtain the floating-point sequence F;
3. Quantize the floating-point sequence F according to Algorithm 1, and convert the quantization result into a bit sequence B;
4. Generate a real number sequence by the cross-mapping of the ICM and ILM, and after quantization intercept a binary chaotic sequence H with the same length as the bit sequence B;
5. Conduct the bitwise XOR operation between sequences H and B to obtain the ciphertext C.
The entire encryption process is shown in Figure 11.
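As a minimal sketch of the diffusion in steps 4 and 5, the XOR combination of the encoded byte sequence with the chaotic keystream, and the self-inverse property used for decryption, can be written as follows (the byte values are arbitrary examples):

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    # bitwise XOR of two equally long byte strings
    return bytes(d ^ k for d, k in zip(data, keystream))

B = b"\x9e\x4f\x21"                 # quantized encoder output (example)
H = b"\x5a\xc3\x77"                 # chaotic keystream of the same length
C = xor_bytes(B, H)                 # ciphertext
assert xor_bytes(C, H) == B         # applying the keystream again recovers B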
Decryption Algorithm
The decryption process is the inverse of the encryption process, as far as symmetric encryption is concerned. The control parameters and initial conditions of the chaotic mapping on the decryption side should be consistent with those on the encryption side, and the ciphertext is then processed according to the inverse of the encryption algorithm. The steps of the decryption process are described below:
1. Set the same parameters and initial values as for encryption, generate a real number sequence by the cross-mapping of the ICM and ILM, and after quantization intercept a binary chaotic sequence H with the same length as the ciphertext C;
2. Operate the ciphertext C and the binary chaotic sequence H by bitwise XOR to obtain the binary sequence B;
3. Convert the binary sequence B into a byte sequence and carry out the inverse quantization operation to obtain the floating-point sequence F′ (there is a slight difference from F in Section 4.1, which is caused by the quantization precision and does not affect the decoding result);
4. Generate the sequence X by the ILM, which is used to select the AENN (from 0 to 3); the length of X is 1/3 that of the floating-point sequence F′;
5. Use the selected AENN to reconstruct the corresponding floating-point numbers in F′, so that the original bytes are decrypted; thereby, the entire plaintext P is obtained.
The entire decryption process is shown in Figure 12.
It should be noted that the above encryption and decryption processes are described for only one frame of data. In secret communication where the plaintext is divided into different frames, the initial conditions X_0, X_1, and U_1 need to be changed, respectively, to resist common password attacks, as will be described in Section 5.
Security Performance Analysis
A good encryption algorithm must have a large enough key space, high key sensitivity, and strong pseudo-randomness, and must be able to resist common attacks. In order to evaluate the security of the proposed joint encryption model, we analyze it in terms of key space, key sensitivity, resistance to chosen-plaintext attacks, and resistance to statistical attacks.
Key Space Analysis
Although the weight parameters of the AENN are equivalent to the key of the encryption algorithm, an attacker does not need to simulate the neural network; it is the coding space that is enumerated in a brute-force attack. In this paper, when coding, an input of 8 bits is converted into three floating-point numbers, which are converted into three bytes for transmission, so the coding space of a single byte is 2^24. Because there are 256 different bytes, the total coding space is (2^24)!. Therefore, the AENN proposed in this paper has a huge key space.
For a chaotic sequence, both the initial value of the chaotic system and its control parameters can be used as keys. The ICM and the ILM are introduced in Section 2.1, and the improved maps are then coupled to obtain a new chaotic map with lower computational complexity and a sufficiently large key space.
Among them, the neural network selection uses the sequence generated by the ILM, whose initial value X_0 ∈ (0, 1) has a variable step size of 10^−16; thus, S_{X_0} = 1 × 10^16. The ICM and ILM are used as sequence generators for the coupled chaos mapping. For the ICM, the parameters µ_1, µ_2 ∈ (0, 10] have variable step sizes of 10^−16, and the initial value U_1 ∈ [0, 1] has a variable step size of 10^−16; thus, S_{µ_1 µ_2} = 1 × 10^34 and S_{U_1} = 1 × 10^16. For the ILM, the initial value X_1 ∈ (0, 1) with step size 10^−16 gives S_{X_1} = 1 × 10^16. As a generating factor of the chaotic initial conditions, the number of communications N is included in the enumerated range of initial values above. Thus, the key space generated by all these chaotic sequences is

S_1 = S_{X_0} × S_{µ_1 µ_2} × S_{U_1} × S_{X_1} = 10^16 × 10^34 × 10^16 × 10^16 = 1 × 10^82.

The advantage of joint encryption is not only to increase the complexity of the algorithm but also to expand the key space. With S_0 = (2^24)! denoting the coding space above, the total key space of the joint encryption model proposed in this paper is

S = S_0 × S_1 > (2^24)! × 10^82 ≫ 2^100,

which is enough to resist exhaustive attacks.
Key Sensitivity Analysis
Key sensitivity is an important factor in the evaluation of encryption algorithms, i.e., a small change in the key should cause sufficient changes in the ciphertext. From a statistical point of view, the change rate must be close to 50%. A chaotic map is highly sensitive to its control parameters and initial conditions. The initial conditions of the chaotic maps used in this paper are all double-precision floating-point numbers; for changes in the initial conditions smaller than 10^−16, a chaotic map will produce the same result. Therefore, keeping the other conditions and parameters unchanged, the initial condition of the ILM is modified 39 times, each time by a step of 10^−16, and the bit-change rate of the last nine encryptions is calculated, as shown in Table 4. As can be seen from Table 4, the bit-change rate of the last nine encryptions is close to 50%, showing that the ILM satisfies key sensitivity when the change in the initial condition is greater than or equal to 10^−16. The same tests are conducted on the initial conditions of the other chaotic maps, and the results are consistent, which indicates that the proposed algorithm has key sensitivity.
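The bit-change-rate computation behind Table 4 can be sketched as below; encrypt stands for the full pipeline of Section 4 and is not implemented here.

def bit_change_rate(c1: bytes, c2: bytes) -> float:
    # fraction of differing bits between two equally long ciphertexts
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return diff / (8 * len(c1))

# rate = bit_change_rate(encrypt(msg, x0), encrypt(msg, x0 + 1e-16))  # ~ 0.5 expected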
Analysis of Resistance to Chosen-Plaintext Attacks
In a chosen-plaintext attack, the eavesdropper obtains access to part of the encryption machine, which heavily threatens the security of the ciphertext. Different from known-plaintext attacks, chosen-plaintext attacks let the attacker encrypt plaintexts of their own choosing, which aids the cryptanalysis of the key. Among known cracking methods, it is difficult to crack the initial conditions of chaos directly, and the failure rate is high. However, in the case of a chosen-plaintext attack, as long as the equivalent key can be obtained, other encrypted data can be cracked.
The steps of a general chosen-plaintext attack with equivalent keys are:
1. Choose a regular text or image and encrypt it with the encryption machine to obtain the ciphertext C;
2. Since the plaintext M is diffused by the XOR operation with the chaotic sequence K, K is the equivalent key of the algorithm and can be obtained by XOR between the ciphertext C and the plaintext M:

K = C ⊕ M.

According to the obtained equivalent key K, an equivalent key of a different length can be intercepted according to the length of another ciphertext C′, and the other plaintext information M′ can be obtained:

M′ = C′ ⊕ K.

Aimed at the above chosen-plaintext attack, the traditional countermeasures are:
• calculating the initial conditions of the chaotic mapping from hash algorithms and plaintexts, which can resist differential attacks [23,27];
• generating the initial conditions of the chaotic mapping with true random number generators [25].
The above two methods can achieve approximately one-time pad encryption, but they are also very costly. Both communication parties need to use a secure channel to share keys in real time or to perform very complex key management. The resource consumption of the secure channel and the complicated key management reduce the efficiency of encrypted communication. To improve the efficiency of encryption, two methods are designed in this paper to eliminate the equivalent key K:
• Simultaneous replacement of the initial conditions of the chaotic mapping. Chaotic mapping combines unpredictability with determinism, i.e., the chaotic iteration result cannot be predicted if the chaotic initial conditions are not known, while chaotic synchronous iteration produces the same result if the initial conditions are known. Therefore, after the two communication parties share the initial conditions once, chaotic synchronous iteration produces the same iterative result on both sides, and this result is used as the initial condition for the next communication, so that different communication frames have different initial conditions;
• Using other synchronizable factors, such as generating factors for the initial conditions. In continuous communication, using chaotic synchronous iterations alone to change the initial conditions is equivalent to using the same initial conditions. Therefore, adding other easily synchronizable factors, such as the number of communications or the real-time clock, can further increase encryption security. In this paper, the reciprocal of the number of communications is used to make subtle changes to the initial conditions, thus eliminating the equivalent key.
The proposed method only requires the communication parties to share the initial conditions once at the beginning of the communication. It can continuously encrypt different data frames without sharing the key in real time, as long as the communication parties perform error-free encrypted transmission. Combined with the sensitivity of chaotic mapping to initial conditions, this method approximately achieves the effect of one-time pad encryption and is able to resist chosen-plaintext attacks.
Analysis of Resistance to Statistical Attacks
Using statistical analysis tools to analyze the ciphertext is one of the most commonly used methods in ciphertext-only attacks. Therefore, a randomness test of the ciphertext can verify the ability of the encryption algorithm to resist statistical analysis. NIST SP800-22 [56] is the statistical test suite provided by the National Institute of Standards and Technology of the United States, which contains 15 of the most commonly used statistical tests. Each statistical method yields a different test-result credibility according to the p-value. The default p-value threshold of the test suite is 0.01, which means that a passing result has a 99% confidence level.
We use the cross-mapping of Section 3.2 to generate real number sequences, set the parameter L = 8 in Equation (4), quantize each real number into eight bits, splice them, intercept a bit sequence of size 1 Gb, and apply the NIST test to it. The p-values for the 15 tests are shown in Table 5. As can be seen from Table 5, all p-values of the ciphertext are greater than 0.01, so it passes all the NIST test items successfully.
Experimental Results and Analysis
In this section, the joint encryption model is applied to passive optical network communication and the encrypted communication experiment is simulated. The system diagram of the encrypted communication experiment can be seen in Figure 13.
Text Encryption Analysis
The proposed algorithm was implemented in Python; the software platform was PyCharm 2021 (Community Edition), the operating environment was Windows 10 Home Edition, the CPU was an Intel(R) Core(TM) i5-10300H @ 2.50 GHz, and the RAM size was 16 GB.
Encrypting the same plaintext can verify the sensitivity of the algorithm. Here, we encrypt the same string "KKKKKKKK" as an example, and the result is shown in Table 6 (one of the resulting ciphertexts is \x02\x00\xae\x02ITW\x9a\xfd\xb5p\xa6u\x9b\xab\xd5\xad#\xe2w(\xdd\x93). As shown by the experimental results in Table 6, the high sensitivity of the algorithm is demonstrated by the following two points:
• Encrypting the same character yields a completely different ciphertext sequence;
• The same plaintext information yields completely different results in each encryption.
Image Encryption Analysis
To further illustrate the security of the encryption algorithm for other types of data, an image is selected as the object of encryption, and we compare the evaluation data of the image encryption with those of other chaotic image encryption algorithms. For the purpose of a like-for-like comparison, this paper uses only the coupled chaos mapping to encrypt the image, in order to demonstrate the security performance of the proposed coupled chaos mapping in image encryption.
"Lena" is often used as the object of image encryption.In this paper, the difference between the two initial values of the ILM in the coupled chaos mapping is set to 10 −16 , and other chaotic parameters and initial values remain unchanged.We use these two sets of parameters to encrypt Lena to obtain two encrypted images, as shown in Figure 14b,c The histogram reflects the statistical properties of the image pixels.Figure 15a-c represent the histograms of the original image and the two encrypted images, respectively.
It can be seen from the histograms that the original image has distinct pixel distribution characteristics, while the encrypted images present a uniform distribution, and there are no similar characteristics between the histograms of the two encrypted images. This shows that an attacker cannot use the statistical characteristics of the image to analyze the encrypted image. Therefore, the algorithm proposed in this paper can resist statistical analysis in image encryption. In addition to the histogram, many indicators can evaluate image encryption security. In this paper, several commonly used evaluation criteria, such as the number of pixels change rate (NPCR), unified average changing intensity (UACI), information entropy, and encrypted-image pixel correlation, are calculated, and different images are encrypted; the results are shown in Table 7. NPCR and UACI are metrics used to evaluate the sensitivity of image encryption, and their ideal values are 99.6094% and 33.4635%, respectively [57]. Table 7 shows that the NPCR and UACI of the coupled chaos mapping proposed in this paper are very close to the ideal values, i.e., the sensitivity of the security key is strong enough. Information entropy is an index used to evaluate the randomness of an image; the larger the entropy value, the better the randomness, and the ideal entropy value is eight. Table 7 shows that the entropy value of the image encrypted by the proposed coupled chaos mapping is 7.9993, which is very close to the ideal value, meaning that the encrypted-image pixel values are evenly distributed. Adjacent-pixel correlation analysis is an index used to evaluate the similarity between adjacent pixels in an image; the smaller the correlation parameter, the weaker the correlation of the pixels [58], and a highly secure encryption algorithm has to make the correlation between pixels close to 0. Table 7 shows that the correlation parameter of the proposed algorithm is close to 0, i.e., the encryption algorithm effectively eliminates the correlation between pixels. Therefore, the encryption algorithm in this paper has high security.
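For reference, the two sensitivity metrics can be computed from two cipher images as in the NumPy sketch below, which follows their standard definitions for 8-bit images.

import numpy as np

def npcr(c1, c2):
    # percentage of pixel positions that differ between two cipher images
    return 100.0 * np.mean(c1 != c2)          # ideal ~ 99.6094%

def uaci(c1, c2):
    # mean absolute pixel difference normalized by 255
    d = np.abs(c1.astype(float) - c2.astype(float))
    return 100.0 * np.mean(d / 255.0)         # ideal ~ 33.4635%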
In addition, we compare the encryption results for the "Lena" image with those of other image encryption algorithms published in recent years; see Table 8. The results in Table 8 show that our proposed encryption algorithm, using only its chaotic part, obtains correlation results similar to or lower than those of other encryption algorithms, which meets the requirements of image encryption for pixel correlation.
Conclusions
This paper addresses the short-period phenomenon and low complexity of one-dimensional chaos, as well as the large channel-resource consumption of traditional one-time pad encryption, which requires real-time key sharing. A randomly selected AENN is proposed to improve the nonlinear complexity and key space of the encryption algorithm; its byte encryption is controlled by a chaotic sequence. Secondly, two low-dimensional maps, the ICM and ILM, are coupled into a new chaotic map, which yields excellent chaotic dynamics, eliminates the short-period phenomenon, and increases the key space. The iterative data of the chaotic map are used for another iteration, combined with the communication number, to calculate a new initial condition of the chaotic map, so that the system can continuously change the key after sharing it once. Finally, we propose a joint encryption model based on the randomized AENN and the new coupled chaos mapping. Our encryption experiments verify that this joint encryption model saves secure channel resources, resists common attacks, and thus offers high security.
In future work, we will combine the advantages of different encryption algorithms to design a more reliable encryption scheme that improves data transmission security and realizes the efficient and secure transmission of information. For example, chaotic encryption can be combined with a dynamic S-box, which offers good nonlinearity and differential uniformity, or with a neural network with a multi-layer neuron structure and complex nonlinear relationships. This would address the issue that a single encryption method cannot provide comprehensive security assurance due to its own shortcomings, making data transmission more secure and efficient.
Figure 1. Bifurcation for (a) the Chebyshev map and (b) ICM, and LE for (c) the Chebyshev map and (d) ICM.
Figure 2. Bifurcation for (a) the logistic map and (b) ILM, and LE for (c) the logistic map and (d) ILM.
Figure 7. The schematic diagram of coupled chaos mapping.
Figure 11. Schematic diagram of the encryption process.
Figure 12. Schematic diagram of the decryption process.
Figure 13. Experiment of the encrypted communication in passive optical networks.
Table 1. Advantages and disadvantages of different encryption algorithms.
Table 2. The number of nodes in the hidden layers of the four AENN networks.
Table 3. Comparison with recent chaotic maps in encryption and cryptographic technology.
Table 4. The bit-changing rate of the encrypted ciphertext for the last 9 times.
Table 6. Encryption results of the same plaintext at different times.
Table 7. Entropy, correlation, NPCR, and UACI results of colored cipher images in the USC-SIPI database.
Table 8. "Lena" graph encryption analysis of different chaotic image encryption algorithms.
"year": 2023,
"sha1": "1ebb9b2e5d0d6a1310e279da78e05864ff555f3a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/25/8/1153/pdf?version=1690881055",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "587ae03f36c414b399960eb352e17258dc730905",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
38084260 | pes2o/s2orc | v3-fos-license | Couple Unmet Need in Kenya Factors Influencing Couples ’ Unmet Need for Contraception in Kenya
Many studies on unmet need have been women-based, with some passing inferences made for men and couples, yet reproductive decisions are not made by women alone but are dyadic in nature. This paper examines couples' unmet need for contraception in Kenya by using the married couple as the unit of analysis, rather than the individual man or woman. The paper specifically estimates couples' unmet need and identifies the factors that have influenced it. The data used are the matched couple data derived from the Kenya Demographic and Health Survey, 1998 (KDHS). Only fecund couples in monogamous unions are included in the analysis. The results give a total couples' unmet need of 16.5 percent (which is 7.5 percent lower than the level of unmet need for currently married women and 3.7 percent higher than the Bankole-Ezeh estimate of couples' unmet need using the 1993 KDHS). About 7 percent of this was accounted for by unmet need for limiting, while 9.8 percent was accounted for by unmet need for spacing. In terms of factors influencing couples' unmet need, region of residence, ethnicity, number of living children, and couples' discussion of family planning and other reproductive health issues were the most significant predictors. In order to reduce unmet need, region-specific programs should be emphasized and couples should be encouraged to make joint decisions on reproductive health issues.
Introduction
The concept of unmet need evolved in the 1960s, when data from surveys of contraceptive knowledge, attitude and practice (KAP) showed that a considerable number of women were not using contraceptives despite their desire to space or stop childbearing. This discrepancy, the 'KAP-gap', was later referred to as the unmet need for contraception (Westoff, 1978).
These early studies of unmet need, however, focused mainly on married women due to a number of factors: women had been the central focus of research since they were more directly involved in reproduction; contraceptive methods for women were more developed; and, as opposed to men, women were more motivated to adopt contraception. Recently, some researchers have embarked on extensive study of men's role in family planning (Ngom, 1997; Nzioka, 1998; Mbizvo and Adamchak, 1991; Posner and Mbodji, 1989; Omondi-Odhiambo, 1997; Onyango, 2001; Otieno, 2000). These studies have not, however, improved our knowledge and understanding of family planning practice as a dyadic process involving the support, cooperation and agreement of both partners. Just as the woman has been central in family planning, so should the man be, since he may fail to give consent for such practice or fail to cooperate in using methods like the natural method.
Therefore, using women's data to draw conclusions on the unmet need status of couples may be grossly misleading.
Elsewhere, studies have shown that programs aimed at couples do better than those targeting individual men or women (Terefe and Larson, 1993; Fisek and Sombuloglu, 1978; Becker, 1996). This study argues for a reorientation of unmet need to a couple-based approach. The reorientation is expected to reduce the level of unmet need considerably (Bankole and Ezeh, 1999).
Context
Every day, more than 400,000 conceptions take place around the world, of which about half are deliberate while the other half are unintentional (Potts, 2000). In Kenya, over half of the population growth is accounted for by unwanted fertility (Kekovole, 1996). In the past two decades, Kenya has achieved a considerable increase in contraceptive use, resulting in an appreciable decline in fertility. Contraceptive prevalence rates increased from a mere 7 percent in 1977 to 27 percent in 1989, 33 percent in 1993, and 39 percent in both 1998 and 2003. Despite these achievements, the level of unmet need remains high. This implies that couples' control over reproduction is far from perfect and, as a result, the number of undesired births is substantial. About 24 percent of the women interviewed in the 1998 KDHS said that they wanted either to postpone or to avoid childbearing but were not using contraceptives.
Since most research on unmet need has been based on women, there is a need to explore a couple-based approach to the phenomenon. This approach has become relevant due to the evidence that men and women have different fertility preferences (Bankole, 1995; Ezeh, 1993). Furthermore, the paradigm shift from family planning to a broader concept of reproductive health, including sexually transmitted infections (STIs) and HIV/AIDS, has made it necessary to use data from both men and women (Becker, 1996).
Consequently, the sexually active couple is found to be the most appropriate unit for the study of unmet need.
Unmet need for family planning is multifaceted and cannot be deciphered clearly if individuals are treated in isolation. This is because the desire for and timing of additional children and contraceptive practice are influenced by extra-individual factors, such as the ability to communicate, lack of knowledge, societal disapproval and the husband's approval (Ngom, 1997). The ICPD Program of Action encouraged reproductive health care programs to move away from considering men and women separately and to adopt a more holistic approach that includes men and focuses on couples (UN, 1995).
Very few studies on couple unmet need have been done (Bankole and Ezeh, 1999; Becker, 1999; Dodoo, 1993). In Kenya, Bankole and Ezeh used data from the 1993 KDHS to estimate the level of couple unmet need, which according to them was 12.8 percent after reclassifying the pregnant and amenorrheic couples. They included only a limited number of background factors (education, age and type of residence); the last factor they included was type of marriage. More studies are needed in this field.
Data and Methods
The study utilized data from the Kenya Demographic and Health Survey (KDHS, 1998). Currently pregnant and amenorrheic couples were excluded from the analysis. The rationale for excluding couples in which the wife is either pregnant or amenorrheic was to revert to the original formulation of unmet need: such couples are not currently exposed to the risk of becoming pregnant, regardless of the planning status of their pregnancies (Westoff and Pebley, 1981). Bongaarts (1991) argued that including this group would overestimate the level of unmet need. Furthermore, it negates the point-in-time principle of the measurement of unmet need, which ought to be taken into consideration (Bankole and Ezeh, 1999).
The idea of including pregnant women in the unmet need category is problematic, since women who are pregnant are less likely to state that their pregnancies are unwanted or mistimed (Nortman, 1982). Postpartum amenorrhea has also been noted not to guarantee perfect protection from getting pregnant (Nortman, 1982), because ovulation can occur prior to the resumption of menses. Several studies have found that women conceived during postpartum amenorrhea (WHO, 1981; Billewicz, 1979).
According to Nortman (1982), women should not rely on postpartum amenorrhea beyond the very early months after birth. Westoff (2000), in one of his recent publications, has suggested the exclusion of pregnant and amenorrheic women from the measurement of unmet need (Westoff, 2000). It is with these considerations in mind that these two groups of couples were excluded from this study.
The study only considered monogamous couples, since polygamous husbands were not asked in the KDHS whether they wanted to space or limit births or use contraceptives with each of their wives. Moreover, the majority of births occurred in monogamous unions (Becker, 1999). Out of the 1362 matched couples, 90 percent were monogamous.
Descriptive statistics were used to describe the basic features of the data, bivariate analysis to show the association between unmet need for limiting and spacing and socio-demographic and intermediate variables, while logistic regression was used to identify predictors of couples' total unmet need. The conceptual model used in the study was adopted from Casterline et al. (1997); the modification made to the Casterline framework was the inclusion of socio-demographic factors.
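A sketch of how such a logistic regression might be specified, assuming a statsmodels workflow and hypothetical variable names for the matched-couple data (the paper does not report its software or variable coding), is:

import statsmodels.formula.api as smf

model = smf.logit(
    "unmet_need ~ C(region) + C(ethnicity) + C(living_children) "
    "+ C(wife_education) + C(fp_discussion)",
    data=couples,          # matched monogamous-fecund couples (hypothetical frame)
).fit()
print(model.summary())     # exponentiated coefficients give odds ratios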
Couple Unmet Need in Kenya
This paper has attempted to describe the levels of unmet need for spacing, limiting and total unmet need among couples by socio-demographic characteristics such as age, education, region of residence, and type of place of residence; and to identify the factors that may predict total unmet need among couples in Kenya.
Results
The Magnitude of Couples' Unmet Need in Kenya
This section presents the results on the magnitude of unmet need. Figure 1 below shows the algorithm used in defining unmet need among monogamous fecund couples. Out of the 1362 matched couples in the 1998 KDHS, 1170 were monogamous. The monogamous couples were grouped into four categories: pregnant, fecund, infecund and amenorrheic. Among all the couples, 12.2 percent of the women were pregnant; 6.1 percent of the pregnancies were intended, 4.7 percent mistimed, and 1.3 percent unwanted. For the amenorrheic, 9.5 percent were cases where the last child was intended, 8.1 percent where it was intended but later, and 2.7 percent where it was not intended at all. These two categories were not included in the estimation of the level of couples' unmet need. The infecund formed 12.6 percent of all the monogamous couples. Among the fecund couples, 20 percent were cases where both partners wanted more children, 3.5 percent where only the wife wanted more, and 7.7 percent where only the husband wanted more. The remaining 7.3 percent were unexposed, that is, either sterilized, declared infecund, or had never had sex. Among those where both partners wanted more children, 10.3 percent wanted a child soon while 9.7 percent wanted one later. The former had no need for contraception, since they wanted a child immediately. About 7 percent had unmet need for limiting, since they wanted no more children yet were not using contraceptives; those who were using contraceptives were doing so for limiting purposes.
Where only the husband wanted more children and the couple was not using contraceptives, the couple had unmet need for spacing (3.9 percent), and those who were using (3.8 percent) were using for spacing purposes. Among couples where only the wife wanted more children, 1.8 percent were using contraceptives (for spacing), and 1.7 percent were not using contraceptives and therefore had unmet need for spacing. Among couples where both partners wanted more children later, 5.5 percent were using contraceptives (for spacing) and 4.2 percent were not using, thus having unmet need for spacing. The total couples' unmet need was therefore found to be 16.5 percent. Bankole and Ezeh (1999) had estimated couples' unmet need using the 1993 KDHS at 12.8 percent after reclassifying pregnant and amenorrheic women. This estimate is 3.7 percentage points higher than the Bankole-Ezeh estimate. Out of the 16.5 percent, about 7 percent was accounted for by unmet need for limiting, while the rest (9.8 percent) was for spacing.
Unmet Need for Contraception
The table below presents the bivariate results for couples who had unmet need for contraception by selected socio-demographic and proximate variables.
Unmet need for Spacing
Couples who had the greatest spacing need were from Western Province, rural areas, had no education, were Mijikenda/Swahili, aged below 25, of other religions, and had 0-2 children. Couples who knew fewer methods (<3 methods) had the highest unmet need for spacing, while those who knew more methods had the lowest levels. Husbands who discussed family planning with their wives once or twice had greater unmet need for spacing (24.9 percent), while those who discussed it often had the lowest unmet need (13.3 percent). For the wives, however, those who never discussed it with their husbands had the highest unmet need for spacing. Couples who disapproved of family planning had higher unmet need for spacing than those who approved, that is, 24.5 and 17.0 percent respectively.
Unmet need for limiting
Unmet need for limiting was highest among couples from Rift Valley, rural areas, husbands with primary education, wives without any education, Kalenjin, aged 35 and above, Protestant, and with six or more living children.
Indeed, older couples and those who already had many children were likely to want to stop childbearing, contrary to existing literature. Couples who knew few methods, never discussed family planning with their partner, or disapproved of family planning had the highest unmet need for limiting compared to those who knew more methods, discussed family planning often, or approved of family planning use.
Predictors of Couples' Unmet Need
This section presents the results from the multivariate analysis. Three models were run. The dependent variable was total unmet need, because the algorithm in the previous section produced data too sparse to allow separate models for limiting and spacing. The first model consisted of only the socio-demographic variables run against the dependent variable, total unmet need. The second model involved running the intermediate variables against the dependent variable, while the third and final model consisted of both the socio-demographic and the proximate variables, omitting those found to be statistically insignificant in the individual models.
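As an illustration of this three-model strategy, the sketch below fits the corresponding logistic regressions with statsmodels on a small synthetic stand-in for the couples data; all column names are hypothetical, and the synthetic data is only there to make the example runnable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # synthetic stand-in for the 1,170 monogamous couples
couples = pd.DataFrame({
    "unmet_need": rng.integers(0, 2, n),
    "ethnicity": rng.choice(["Mijikenda/Swahili", "Kikuyu", "Kamba", "Kalenjin"], n),
    "wife_educ": rng.choice(["none", "primary", "secondary+"], n),
    "living_children": rng.choice(["0-2", "3-5", "6+"], n),
    "fp_discussion": rng.choice(["never", "once_twice", "often"], n),
    "fp_approval": rng.choice(["approves", "disapproves"], n),
})

# Model 1: socio-demographic variables only.
m1 = smf.logit("unmet_need ~ C(ethnicity) + C(wife_educ) + C(living_children)",
               data=couples).fit(disp=0)
# Model 2: intermediate (proximate) variables only.
m2 = smf.logit("unmet_need ~ C(fp_discussion) + C(fp_approval)",
               data=couples).fit(disp=0)
# Model 3: both sets, dropping terms insignificant in the individual models.
m3 = smf.logit("unmet_need ~ C(ethnicity) + C(living_children) + C(fp_approval)",
               data=couples).fit(disp=0)
print(np.exp(m3.params))  # odds ratios relative to reference categories
```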
Model One
Levels of unmet need vary substantially according to demographic and social characteristics (Robey et al., 1996). Table 2 shows the results of the logistic regression for a model which includes socio-demographic characteristics of the couples. The results show that couples in all ethnic groups were less likely to have unmet need than the Mijikenda/Swahili (reference category). The ethnic differences in unmet need reflect variation in levels of contraceptive use and stage in the fertility transition. With regard to education, only wives with secondary level and above seemed to matter; the husband's educational level did not appear to matter. This highlights the importance of female education, but only when it is above primary level. As expected, the higher the number of living children, the more likely the unmet need. The odds of unmet need were 3.1 times higher in women with six or more children and 2.4 times higher in women with between three and five children, showing an increase in unmet need with an increase in the number of children. Couples with a higher number of living children may have reached their desired family size goals and hence be more likely to prefer use of contraception.
Model Two
Several factors combined may explain why there is unmet need among couples, such as lack of information and spousal communication on issues relating to family planning or contraceptive use (Robey et al., 1996). Couples with more children have a greater desire to stop childbearing, which may not be translated into actual practice because of other factors affecting the decision to use family planning, or those that affect the supply and accessibility of family planning. However, the argument is that, just as those who have more children have unmet need (for limiting), so do those who have fewer children (for spacing). Furthermore, unmet need for spacing is higher than that for limiting. Wafula (2001) also found unmet need to be high among women with more children.
The study conformed to the expectation that wives with higher levels of education had low levels of unmet need, since they were capable of transforming their desire (to space or to limit) into the practice of using family planning. Couples who are more educated can afford to buy contraceptives, are more likely to reside in urban areas where contraceptives are more accessible, are more informed about the available methods, and are more likely to prefer smaller families than their less educated counterparts.
This pattern further compares well with the results for the regions, whereby areas with low fertility and high contraceptive prevalence are less likely to have high unmet need. It is important to note here that, with the exception of Nairobi, regions such as Eastern, Central, and Rift Valley are predominantly inhabited by the ethnic groups that are also less likely to have higher unmet need. For example, Central Province is inhabited mainly by the Kikuyu, Eastern by the Kamba, Meru, and Embu, while the Kalenjin are mainly in the Rift Valley. Nairobi, being cosmopolitan with access to health facilities and other services, is also less likely to have unmet need. Unmet need also has implications for fertility and population growth. If the ongoing fertility transition is to be enhanced, unmet need should be tackled appropriately. The prevalence of unmet need reflects a lag in the implementation of couples' fertility decisions due to inhibiting constraints to the use of contraception. Such constraints must be eliminated first in any meaningful strategy.
Table 1 : Percent Distribution of couples by level of unmet need for contraception by selected socio-demographic and proximate variables
The 35+ age group had the highest total unmet need. However, while unmet need increased with the husband's age, it fluctuated with the wife's age. Catholics exhibited the lowest unmet need, followed by Muslims. Total unmet need increased with the couple's number of living children. However, it decreased as the number of methods known to the wife increased. For husbands, this fluctuated, with those who knew 3-5 methods having the highest unmet need. Couples who had never discussed family planning or who disapproved of it had the highest total unmet need.
Table 4 : Logistic regression analysis for total unmet need (with both background and intermediate variables)
Such couples hence have lower unmet need than those who live in the patriarchal communities of Nyanza, Coast, Western, and Rift Valley. The likelihood of having unmet need seemed to increase with the number of living children: couples with more living children are more likely to have unmet need than those who have fewer children or none at all. | 2017-09-07T13:56:40.666Z | 2013-10-28T00:00:00.000 | {
"year": 2013,
"sha1": "b7fd063c2802210bbf9372feb97d7b15a369ba28",
"oa_license": "CCBYSA",
"oa_url": "https://aps.journals.ac.za/pub/article/download/342/308",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b7fd063c2802210bbf9372feb97d7b15a369ba28",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13459565 | pes2o/s2orc | v3-fos-license | Statin Use and Clinical Osteoarthritis in the General Population: A Longitudinal Study
BACKGROUND One hypothesis has posited that abnormal lipid metabolism might be a causal factor in the pathogenesis of osteoarthritis (OA). Routine statin use in clinical practice provides the basis for a natural experiment in testing this hypothesis. OBJECTIVE To test the hypothesis that statins reduce the long-term occurrence of clinically defined OA. DESIGN Cohort design with a 10-year follow-up. PARTICIPANTS 16,609 adults from cardiovascular disease cohorts aged 40 years and over in the UK General Practice Research Database with data available to 31 December 2006. INTERVENTION Statins were summarised as annual mean daily dose and dose change over two-year time periods. MAIN MEASURES Incident episode of clinically defined osteoarthritis was assessed within 2 years, and at 4-year and 10-year follow-up time periods, using Cox and discrete time survival analysis. Covariates included age, gender, deprivation, body mass index, cholesterol level, pain-modifying drug co-therapies, and duration and severity of cardiovascular disease. KEY RESULTS Higher therapeutic dose of statin, with a treatment duration of at least 2 years, was associated with a significant reduction in clinical OA compared to non-statin users in the follow-up time period. The estimated adjusted rate ratios were as follows: lowest statin dose quartile 1: 2.5 (95 % CI 2.3, 2.9); quartile 2: 1.3 (1.1, 1.5); quartile 3: 0.8 (0.7, 0.95); and highest statin dose quartile 4: 0.4 (0.3, 0.5). The largest statin dose increments were associated with significant reductions estimated at 18 % in OA outcome within 2 years and 40 % after 4 years, compared to non-statin users. CONCLUSIONS This longitudinal study from a national clinical practice setting provides evidence that higher statin dose and larger statin dose increments were associated with a reduction in clinically defined OA outcome.
BACKGROUND Osteoarthritis (OA) is a complex disease that encompasses change in articular, bone, and cartilage structures. 1 Current clinical and research focus has been on modification of mechanical loading as a causal factor, treatment of psychosocial factors, or treatment and replacement of intra-articular cartilage. 2 Yet studies of generalised osteoarthritis suggest the potential role of systemic processes, 3,4 and from this framework it has been hypothesised that a disorder of lipid metabolism may play a role in the pathogenesis of osteoarthritis. 5,6 The hypothesis is generated from different lines of evidence on cellular and bio-molecular pathways. First, adipocytes share a mesenchymal origin with articular cells, providing a potential cellular link between lipid metabolism and osteoarthritis. 7,8 Second, in vitro studies have shown that excessive lipid levels in the synovial fluid induce arthritic changes, and the higher levels of leptin found in obesity have also been implicated in joint cartilage destruction. 9,10 Supporting epidemiological studies indicate that these two chronic diseases commonly co-occur, 11 share similar risk factors, 12,13 and are both associated with higher mortality. 14 Recent literature suggests that statins may have a modifying role in osteoarthritis. 15 Our previous work has shown that the risk factors for cardiovascular disease are also associated with OA over a 30-year period of follow-up, 16 and smaller studies have established "proof of principle" for such a link in radiographically confirmed subgroups of OA. 17,18 However, the full public health potential of statins remains to be investigated, as does whether there is an association with the larger group presenting with clinical OA. For populations at risk, statins are a key preventative drug therapy and form the basis of quality care guidelines to prevent long-term cardiovascular events. 19 The objective of this study was to use the large data sets available from the primary care population, where OA is a common presenting problem and statin use is routine, to investigate the hypothesis of whether statins are associated with a reduction in OA occurrence.
METHODS
We used the General Practice Research Database (GPRD), covering over 300 practices, which links clinical diagnoses to other data, such as prescribed drugs, test results, and measurements such as body mass index (BMI). The GPRD is representative of the England and Wales population, since most people are registered with a General Practitioner (GP), and such large-scale population data over a longitudinal time period provide the basis for a variety of hypothesis-testing epidemiological studies. 20,21 All data are routinely computer recorded, with diagnostic data coded as patients present their clinical complaints, using a standard clinical classification to record chronic diseases such as cardiovascular disease and osteoarthritis. 22 Permission to access the GPRD data was given by the Independent Scientific Advisory Committee.
Selection of Cohort Population
Cardiovascular disease (CVD) cohort populations aged 40 years and over were identified on the basis of a 2-year time period (1 January 1995 to 31 December 1996), with no clinical record of OA during this time period or in any available patient records from the preceding period. This cohort (n=16,609) had records linked to prescribed drugs and to any incident clinically defined OA outcome in the following 10-year period (1 January 1997 to 31 December 2006).
Within the overall cohort, there were six exclusive subgroups of CVD, ordered as follows: (1) hypertension, (2) atrial fibrillation, (3) angina, (4) myocardial infarction and (5) heart failure. Groups were allocated to the most 'severe' diagnostic category, irrespective of other CVD multimorbidity; for example, if a patient has heart failure and hypertension, then they were allocated to the heart failure cohort group. The sixth group consisted of any 'other' CVD symptoms and morbidities outside of the five specific categories. This broader 'other CVD' cohort, with some aspect of vascular disorders in their clinical records, was chosen to provide validation for the specific diagnostic groups.
Measures of Statin Use
Within the overall cohort, prescribed drugs had been coded using the standard British National Formulary (BNF) classification. 23 Lipid-regulating drugs within this classification were used to identify statin users, and statins other than simvastatin were standardised to the equivalent simvastatin dose. 24 Statin dose was then summarised as the mean daily dose in each 12-month time period, which equates to the prescription dose × frequency of daily dose × quantity of tablets, divided by 365 days. This was done for each of the 12 years of observation. For the overall cohort, individuals were classified as statin users or non-users on the basis of the whole 12-year time period.
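The mean daily dose computation described above can be expressed directly. The sketch below is a minimal pandas rendering of the stated formula (prescription dose × daily frequency × quantity of tablets, divided by 365); the prescription records are invented for illustration.

```python
import pandas as pd

# Invented prescription records; dose_mg is assumed already standardised to
# the simvastatin-equivalent dose, as described above.
rx = pd.DataFrame({
    "patient":    [1, 1, 2],
    "year":       [1997, 1997, 1997],
    "dose_mg":    [10, 10, 20],  # prescription dose
    "daily_freq": [1, 1, 1],     # frequency of daily dose
    "quantity":   [84, 84, 28],  # quantity of tablets
})
rx["total_mg"] = rx["dose_mg"] * rx["daily_freq"] * rx["quantity"]
# Mean daily dose per patient per 12-month period = total prescribed mg / 365.
mean_daily = rx.groupby(["patient", "year"])["total_mg"].sum() / 365
print(mean_daily)
```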
Clinically Defined OA Outcome
In the 10-year follow-up period, the incident outcome of "OA" was defined on the basis of any coded clinical entry irrespective of joint site; there were 147 OA-related diagnostic categories from a standard clinical classification. 22 OA diagnoses were recorded by GPs in actual consultations, when OA was the primary presenting problem. These diagnostic codes focus on the specific use of the term "osteoarthritis", and not diffuse pain complaints or syndromes. Previously, we have shown that these OA categories are a marker of health severity, representing distinct diagnostic application when OA is established. 25 These clinical definitions represent a different measure from radiographic definitions of OA, but they are an important epidemiological clinical measure in large general populations consulting over time, 26,27 which provides the setting for an a priori natural experiment. Current evidence also shows that OA can be viewed primarily as a clinical joint pain syndrome, since clinical and radiographic features are not always concordant. 28,29
Other Factors
Duration of each CVD in years was also estimated on the basis of the time between the age at first diagnosis and the date of diagnosis in the cohort sampling window, and was used as a proxy marker of the 'immortal' time in which an OA event might have occurred. 30 Other measures included serum cholesterol levels (mmol/l) and obesity, as summarised by BMI (kg/m2). We used the first recorded cholesterol level or BMI record for an individual as a measure of baseline status; repeated measures were not used in the analyses, because this type of data was not fully available over the follow-up time period.
We also considered the potential role of other pain-modifying drug co-therapies, such as analgesics (non-opioids, opioids, and non-steroidal anti-inflammatories) and antidepressants, which might be associated with a reduction in OA presentation. 31 These drug co-therapies, based on the BNF classification, were summarised as users or non-users in each of the six 2-year time periods over the whole period of observation. Deprivation was measured by the Index of Multiple Deprivation (IMD) for each practice, an area-level measure based on the 2004 UK census. The IMD is based on postcode and is a weighted score of seven sub-domains relating to income; employment; health; education, skills and training; barriers to housing and access to local services; crime; and living environment. 32
Modelling Statin Dose and Latency Period
In terms of hypothesising the time it might take for statin use to reduce the clinical occurrence of OA (the potential 'latency' treatment period), we estimated that a minimum of 2 years of statin use was required. This hypothetical 'treatment effect' period is based on evidence from other studies showing that this duration of statin use is required to achieve a significant reduction in cardiovascular disease outcomes. 33,34 The statin daily dose measure was modelled in two ways. In the overall cohort Cox regression analysis, statins were defined as mean daily dose per year, and statin users of more than 2 years in the 10-year follow-up period were categorised into four quartile dose categories, ranging from quartile 1 (low dose) to quartile 4 (high dose). Drug users with less than 2 years of statin use were allocated to the non-user group (n=556).
In the discrete time analysis, we wanted to assess change in statin dose in individual patients over time; this approach again incorporated the 'immortal' time in which an OA event might have occurred. 30 Therefore, we split the 12 years into six 2-year time periods and summarised the dose changes for each individual from the baseline 2-year time period to each of the five respective follow-up time windows. This approach resulted in four incremental dose change groups, ranging from Group 1 (smallest dose change) to Group 4 (largest dose change), with the pooled estimate indicating an increase in statin dose of 3.61 mg every 2 years (also see Fig. 1).
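A possible rendering of this windowing step is sketched below: yearly mean daily doses are collapsed into six 2-year windows and each window's change from the baseline window is computed. The quartile grouping of dose changes is indicated in a comment; this is an interpretation of the described procedure, not the authors' code.

```python
import pandas as pd

def to_windows(mean_daily: pd.Series) -> pd.DataFrame:
    """Collapse yearly mean daily doses (indexed by patient, year) into six
    2-year windows (1995-2006) and return each window's dose change from
    the baseline window."""
    df = mean_daily.rename("dose").reset_index()      # patient, year, dose
    df["window"] = (df["year"] - 1995) // 2           # windows 0..5
    win = df.groupby(["patient", "window"])["dose"].mean().unstack(fill_value=0)
    return win.sub(win[0], axis=0)                    # change vs baseline window

idx = pd.MultiIndex.from_tuples(
    [(1, 1995), (1, 1996), (1, 1999), (2, 1995)], names=["patient", "year"])
print(to_windows(pd.Series([5.0, 5.0, 12.0, 0.0], index=idx)))
# Dose-change increments would then be cut into four groups per window,
# e.g. pd.qcut(change[w], 4, labels=[1, 2, 3, 4]) among statin users.
```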
Analysis
First, for the four mean daily dose quartile groups compared to non-users, we estimated OA outcomes in the 10-year follow-up period using Kaplan-Meier plots. Then, using Cox regression with time to OA event, we adjusted for age, gender, IMD status, BMI, cholesterol level, other pain-modifying drug co-therapies, and duration and category of cardiovascular disease group. The assumption of proportionality of hazard ratios was tested throughout.
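A sketch of this first analysis using the lifelines library is shown below, with a synthetic cohort standing in for the GPRD data; the column names, abbreviated covariate set, and ordinal handling of dose quartiles are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 300  # synthetic stand-in for the 16,609-patient cohort
cohort = pd.DataFrame({
    "time_to_oa":    rng.exponential(8, n).clip(0.1, 10),  # years of follow-up
    "oa_event":      rng.integers(0, 2, n),                # incident clinical OA
    "dose_quartile": rng.integers(0, 5, n),                # 0 = non-user, 1-4 = quartiles
    "age":           rng.normal(65, 10, n),
    "bmi":           rng.normal(27, 4, n),
})

kmf = KaplanMeierFitter()
for q, grp in cohort.groupby("dose_quartile"):
    kmf.fit(grp["time_to_oa"], grp["oa_event"], label=f"dose group {q}")
    kmf.plot_survival_function()

cols = ["time_to_oa", "oa_event", "dose_quartile", "age", "bmi"]
cph = CoxPHFitter()
cph.fit(cohort[cols], duration_col="time_to_oa", event_col="oa_event")
print(cph.summary)  # adjusted rate ratios; proportionality can be checked
                    # with cph.check_assumptions(cohort[cols])
```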
Second, using discrete time survival analysis, we assessed the influence of incremental changes in statin dose over two shorter periods of observation: (1) within each 2-year time window and (2) a temporal 4-year approach. Within each 2-year time period, we analysed the association between changing statin dose and OA outcome, giving six time windows. We then constructed a temporal element by linking each initial 2-year time period with the consecutive 2-year period, to create five 4-year time windows. In this 'temporal approach', each initial 2-year period was OA free, so that OA outcome was only assessed in the subsequent 2-year window. Individual follow-up time was first converted to 2-year blocks, and outcome and covariate status were determined for each block. Discrete time survival analysis was then used to model the risk of an OA event. This method treats time not as a continuous variable, but as divided into discrete units. Within each time window, quartiles of statin exposure were determined and used as a time-varying ordinal variable to reflect changing dose, which includes time without possible statin exposure. Logistic regression methods were used to compare changing dose groups with non-statin users for OA outcomes over these shorter time frames, adjusting for all covariates.
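The discrete-time model can be approximated by a pooled logistic regression on person-window rows, as sketched below on synthetic data; the row layout (one row per patient per 2-year block at risk) follows the description above, while the covariate set is abbreviated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000  # person-windows: one row per patient per 2-year block at risk
blocks = pd.DataFrame({
    "oa_event":   rng.integers(0, 2, n),   # OA recorded within the block
    "dose_group": rng.integers(0, 5, n),   # 0 = non-user, 1-4 = dose-change group
    "window":     rng.integers(0, 6, n),   # which of the six 2-year blocks
    "age":        rng.normal(65, 10, n),
})
fit = smf.logit("oa_event ~ C(dose_group) + C(window) + age",
                data=blocks).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for dose groups vs non-users
```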
Finally, the statin analyses were repeated for each of the six exclusive CVD subgroups, using the chi-square test for trend. However, since the total number of statin users within each CVD group was small, the study groups were categorised into non-statin users, low dose statin users and high dose statin users, using the mean daily dose per year estimates. Statistical significance was defined as p<0.05, all hypothesis testing was two-tailed, and analyses were performed using SPSS (version 18.0) and MLwiN (version 2.21).
Cohort Population
Within the overall study population, there were 4,976 statin users who had been on statins for at least 2 years, with a mean daily dose of 15 mg, and 11,633 non-statin users (Table 1). Statin users were similar to non-users in terms of age, deprivation, BMI, and cholesterol characteristics, but men were more likely to be on statins than women. Among the statin users, there were higher proportions with angina or myocardial infarction than among non-users.
Statin Dose and OA Outcome
The quartile statin dose ranges were as follows: quartile 1 (lowest dose), up to 5 mg daily; quartile 2, up to 10 mg daily; quartile 3, up to 18 mg daily; and quartile 4 (highest dose), over 18 mg daily. The associations between statin dose quartiles and OA outcome in the follow-up period are shown in the Kaplan-Meier plots (Fig. 2). Over the 10-year follow-up period, higher mean daily statin dose was significantly associated with a decreased likelihood of clinical OA (Table 2), and the results showed a dose-gradient response. Compared to statin non-users, the relative adjusted estimates were as follows: quartile 1 (lowest dose), rate ratio 2.55 (95 % confidence interval 2.3-2.9); quartile 2, 1.31 (1.1-1.5); quartile 3, 0.82 (0.7-0.95); and quartile 4 (highest dose), 0.41 (0.3-0.5). Older age, female gender, and higher BMI were significantly associated with increased clinical OA outcome, but disease categories, cholesterol levels, duration of disease, and the specified drug co-therapies did not influence the likelihood of clinical OA outcome.
Changing Statin Dose and OA Outcome
The influence of changing statin dose was also assessed for its impact on OA outcome. Over the 10-year follow-up period, the 25th centile dose change was around 5 mg and the 75th centile dose change was over 20 mg, as shown in Fig. 1.
First, in the six within-2-year time windows, a larger dose change of statins was associated with a reduction in clinical OA, again showing a dose-gradient response (Table 3). Compared to statin non-users, the adjusted (for age, gender, deprivation, and drug co-therapies) estimates were as follows: group 1 (smallest dose increment): odds ratio 1.07 (95 % confidence interval 0.9-…). For group 4 (largest dose increment), the relative reduction in OA outcome was estimated at 18 % compared to non-statin users within 2 years, and after 4 years the relative reduction was estimated at 40 %.
Statin Dose and OA Outcome by CVD Severity
Within all the CVD subgroups except heart failure, a higher dose of statin was associated with a reduction in clinical OA outcome (Table 4). The trends in the association between higher statin dose and reduction of OA outcome compared to non-statin users were statistically significant (p<0.001) for atrial fibrillation, angina, myocardial infarction, and the 'other' CVD group.
DISCUSSION
Our study shows that increasing dose of statin use and larger statin dose increments were associated with a reduction in clinical OA compared to non-statin users. These findings were not explained by the duration or severity of associated cardiovascular disease, by pain-modifying drug co-therapies, or by age, gender, deprivation, baseline cholesterol levels, or BMI status. At the highest statin daily dose, which was a therapeutic dose of around 20 mg daily, there was approximately a 60 % relative reduction in clinical OA outcome compared to non-statin users. Larger increments in the dose of statins were also associated with a 40 % relative reduction in clinical OA outcome compared to non-statin users over a 4-year time period. Other emerging evidence over the life course further adds to the idea of a shared pathogenesis for OA and cardiovascular disease. Studies have shown that the co-occurrence of OA and cardiovascular disease is common. 11,35,36 Severity of hand osteoarthritis is associated with atherosclerosis, 37 and it has also been shown that OA is a predictor of all-cause mortality, particularly cardiovascular disease-related mortality. 38 An additional relevant study suggests that diabetes, as part of the metabolic syndrome, may influence the onset of OA. 39 It is postulated that statin modification of OA might occur through two mechanisms: either through lowering of cholesterol levels 5 or through anti-inflammatory properties. 15 There is evidence of local inflammatory processes in osteoarthritic joints, 40 and it is thought that traumatic stimuli induce mechanical receptors in chondrocytes to produce cytokines 41 and matrix metalloproteinases, which lead to degradation of articular cartilage. 42 Statins may modify these inflammatory mechanisms, and this may be separate from their cholesterol-lowering activity.
The study findings need to be placed within the context of design and measurement issues. Statin measurement was based on prescribing by general practitioners, and in the United Kingdom statins remain clinician-prescribed-only drugs. This means that statin users and non-users were likely to be clearly defined, and the construction of dose quartile groups showed a gradient effect as hypothesised. The study design time frame, from 1995 to 2006, during which there was increasing use of statins and of higher statin doses, also provides the appropriate cardiovascular population at risk for investigating the treatment hypothesis without indication bias.
The definition of OA was based on clinician-defined diagnosis as presented by patients, and the study findings for the associations with older age, female gender, and body mass index support the validity of such a definition. This approach is reasonable, as in the UK OA is largely presented to and managed by GPs, and the population-based database used has been shown to have high clinical validity for a range of clinical conditions. 43 Finally, it is unlikely that clinical recording of OA diagnostic labels would have been influenced by statin prescribing, beyond random variation in the recording of clinically defined OA, and the large number of practices provides a reliable reflection of clinical OA in the general population.
We also considered the potential effects of disease severity and duration. Severity of cardiovascular disease may attenuate the potential impact of statins, and the associations between the low-dose statin groups and higher OA outcome compared to the reference group seemed to suggest this, but our study showed a gradient effect of statins within all but the heart failure cohort. It is probable that the heart failure group represents end-stage cardiovascular disease, when modification of OA pathogenesis is too late to take effect. The incorporation of duration of disease, and of statins modelled as change in dose over time, also addressed the issue of unexposed time, which can be an analytic bias 30 in pharmaco-epidemiology studies. An a priori treatment hypothesis was tested, and even though there may still be residual unmeasured confounders, it would be difficult to propose alternative explanations for the dose-response gradient or the magnitude of the statin effects shown.
CONCLUSIONS
This study provides evidence that therapeutic statin dose and larger statin dose increments were associated with a reduction in clinically defined OA outcome. These findings further support the hypothesis that biologic modification of OA may be plausible, and the potential clinical implication is that OA management may share preventative approaches with cardiovascular disease.
Table notes: Exclusive ordered groups, with hypertension as the 'least severe' and heart failure the 'most severe' category. Patients consulted for the diagnostic category in a 2-year time window (1995-1996). Allocation of an individual to a severity category was based on the most severe category; for example, if an individual had consulted for hypertension and heart failure, they would be classified into the heart failure category. *Statin dose summarised as mean daily dose. **Adjusted Cox regression rate ratios for all covariates: age, gender, deprivation, BMI, cholesterol level, specific drug co-therapies (opioids, non-opioids, non-steroidal anti-inflammatories or antidepressants), and duration and severity of cardiovascular disease. | 2016-05-12T22:15:10.714Z | 2013-03-08T00:00:00.000 | {
"year": 2013,
"sha1": "78a85187070e0c1f664996eed46454f4995fc74e",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3682050?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "78a85187070e0c1f664996eed46454f4995fc74e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
64568308 | pes2o/s2orc | v3-fos-license | Analysis and implementation of cross lingual short message service spam filtering using graph-based k-nearest neighbor
SMS (Short Message Service) is one of the communication services that remains a main choice, even though phones now offer various applications. Along with the development of other communication media, some countries lowered SMS rates to keep the interest of mobile users. This has resulted in an increase in spam SMS, used by several parties, for example for advertisement. Given the multilingual nature of documents in SMS messages, on the Web, and elsewhere, the need for effective multilingual or cross-lingual processing techniques is becoming increasingly important. The steps performed in this research are as follows: the data/messages are first preprocessed, then represented as a graph model, and then classified using the GKNN method. From this research, the maximum accuracy obtained is 98.86, with training data in Indonesian and testing data in Indonesian, with K = 10 and a threshold of 0.001.
Introduction
SMS (Short Message Service) is one of the communication services that is still a main choice for many users. Along with the development of other communication media, some countries lowered SMS rates so that SMS remains a top choice for mobile users [1]. This has resulted in an increase in spam SMS sent by some irresponsible parties, for example for advertising and fraud. The large number of fraudulent or advertising SMS messages causes discomfort for most SMS recipients. Spam SMS is an SMS that contains information unwanted by the recipient of the message. Spam SMS is sent from one sender to many numbers obtained randomly.
Given the multilingual nature of documents in messages, on the web, and elsewhere, the need for effective multilingual or cross-lingual processing techniques is becoming increasingly important. Therefore, the hope is that the system in this research can correctly recognise a message in multiple languages, namely Indonesian or English, and classify it into the spam or ham (not spam) class [2][3].
In this research, a message is classified into the spam or not spam (ham) category. The dataset in this research uses English and Indonesian. For the classification process, the data obtained first needs to be preprocessed to clean it without changing the information it contains [4].
Several classification methods have been used in previous research, such as Naïve Bayes [5][6][7], KNN, and GKNN, with a maximum accuracy of 95.5. Prior work has shown that Graph-based K-Nearest Neighbor can perform SMS classification with an accuracy level that reaches 98.9 [8]. Based on this, the GKNN method was chosen as the algorithm used for the classification process in SMS filtering in this research. The system will classify data into the spam or ham (not spam) class.
Related Works
Palmieri et al. note that mobile networks are indeed very popular in our lives, but that their increasing vogue and widespread coverage raise serious security concerns [9]. This means that smartphones may now represent an ideal target for malicious authors. Kim et al. note that SMS is not only used for texting among people but is also used for security, for example as an authentication method or token (mobile banking, double authentication, one-time password delivery, etc.) [8].
When we embarked on this line of research, we did not find any publications addressing cross-lingual SMS spam filtering. On the other hand, there is a rich literature addressing the related problems of Cross-Lingual Information Retrieval (CLIR) and Cross-Lingual Text Categorization (CLTC). Olsson et al. note that both CLIR and CLTC are based on some computation of the similarity between texts, comparing documents with queries or class profiles [10]. The main contrast between CLIR and CLTC is that CLIR is query-based, with queries consisting of only a few words, whereas CLTC works with classes, each described by a profile (which may be seen as a collection of documents) [10]. In this research we used CLTC to solve our problem.
Proposed Schema
System development is divided into four phases: document translation as needed, preprocessing, graph building, and classification using graph-based K-Nearest Neighbor. Figure 1 presents the system overview.
Dataset
A dataset is an object that represents data. The datasets used here contain unstructured text, which is later transformed into structured data to facilitate storage, retrieval, and classification in the system. The Indonesian dataset is the result of manual collection by previous researchers [6][5], while the English dataset was obtained from the SMS Corpus: SMS Spam Collection v.1.
Document Translation
The data used is adjusted to the data partition scenario. For the cross-lingual approach, data is translated into the other language with the help of a machine translator, i.e., Google Translate. Document translation is only done for test scenarios 3, 4, and 5. Before preprocessing, the data is translated; for example, English data is translated into Indonesian, as illustrated in Figure 2.
Preprocessing Data
In this phase, the training and testing data are processed to produce better data for the next phase. The preprocessing phase consists of five steps; the main ones are explained below.
Case Folding
Case folding and punctuation removal is the process of converting the capital letters contained in the dataset into lowercase for the whole dataset, and also removing special characters and punctuation.
Tokenizing
Tokenizing is the process of separating the sentences contained in a message into word tokens; these tokens are the input of the next step after preprocessing.
Slang Handling
The dataset contains many informal words, called slang words. To handle these cases, we created a dictionary containing slang words together with their meanings. The English slang glossary was obtained from "Slang Dictionary - Text Slang & Internet Slang Words" at http://www.noslang.com/dictionary/, while the Indonesian one was obtained from http://en.wikipedia.org/wiki/Indonesian_slang.
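The three preprocessing steps described above (case folding with punctuation removal, tokenizing, and slang handling) can be combined into one small function, as in the sketch below; the slang dictionary shown is a tiny illustrative stand-in for the noslang.com and Wikipedia glossaries.

```python
import re

SLANG = {"u": "you", "txt": "text"}  # tiny illustrative slang dictionary

def preprocess(message: str) -> list[str]:
    text = message.lower()                    # case folding
    text = re.sub(r"[^\w\s]", " ", text)      # remove characters and punctuation
    tokens = text.split()                     # tokenizing
    return [SLANG.get(t, t) for t in tokens]  # slang handling

print(preprocess("FREE entry!! Txt WIN to 80082, u won"))
```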
Build Graph
After preprocessing, the data is ready to be processed in the next step, where it is represented as graphs. Graph formation for the training data is done by grouping every 5 documents/messages into 1 (one) graph; so, for example, 100 documents/messages are split into 20 graphs. For the testing data, each message/document becomes 1 (one) graph. Each node in a graph corresponds to a token selected in the preprocessing step. An edge is formed based on the order of occurrence between 2 words, and the edge weights are stored in the Feature Weight Matrix. Two kinds of graphs are formed: training graphs and testing graphs. Training graphs group the data by category, spam and ham, while a testing graph is formed from the testing data.
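A minimal sketch of this graph construction is given below, assuming the feature weight matrix is stored as a dictionary in which W[(a, a)] is the frequency (node weight) of token a and W[(a, b)] is the weight of the directed edge from a to its successor b; this follows the description above but is not the authors' implementation.

```python
from collections import defaultdict

def build_graph(messages: list[list[str]]) -> dict:
    """Build one graph from a group of tokenized messages (5 per training
    graph, 1 per testing graph). W[(a, a)] is the node weight (frequency of
    token a); W[(a, b)] is the weight of the directed edge a -> b, counted
    from consecutive token pairs."""
    W = defaultdict(int)
    for tokens in messages:
        for i, tok in enumerate(tokens):
            W[(tok, tok)] += 1
            if i + 1 < len(tokens):
                W[(tok, tokens[i + 1])] += 1
    return W

spam_graph = build_graph([["win", "free", "prize"], ["free", "cash", "now"]])
```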
Classification Using G-KNN
Graph-based K-Nearest Neighbors (GKNN) is an evolution of K-Nearest Neighbors in which the data is represented as graph models. From these models, the similarity between documents is calculated and used to classify them [8]. A graph consists of nodes, edges, and edge weights. Classification measures the similarity between 2 graphs (a training graph and a testing graph). The Feature Weight (FW) defines the similarity between 2 graphs based on the weights of the nodes and edges in the graphs. The Node Fit Percent (NFP) shows how many nodes in the sample graph with weights bigger than zero also appear in the test graph. The NFP value is calculated from the testing graph and training graph using the frequency of each feature in the graph, that is, the value of W(i, i). If the NFP value is greater than the threshold, the FW value of the 2 graphs is calculated; otherwise, if the NFP value is smaller than the threshold, the 2 graphs are considered not to be in one category, so there is no need to calculate the FW value. The calculations of NFP and Feature Weight are illustrated in Figure 3 and Figure 4. The process of comparing NFP and FW values is repeated until all training and testing graphs have been compared. In the end, the initially empty RL list is filled with FW values and the categories of the training graphs; the category that appears most often in the list becomes the new category of the tested testing graph, which represents the category of the SMS under test.
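The paper's exact NFP and FW formulas are given in its Figures 3 and 4 and are not reproduced in this text, so the sketch below uses assumed forms: NFP as the share of positive-weight training-graph nodes that also occur in the test graph, and FW as the total weight shared by the two graphs. The thresholding and top-K voting follow the description above.

```python
from collections import Counter

def nfp(train_W: dict, test_W: dict) -> float:
    """Assumed form: share of training-graph nodes (weight > 0) that also
    appear in the test graph."""
    nodes = [k for k in train_W if k[0] == k[1]]
    return sum(k in test_W for k in nodes) / len(nodes) if nodes else 0.0

def feature_weight(train_W: dict, test_W: dict) -> float:
    """Assumed form: total node/edge weight shared by the two graphs."""
    return sum(min(w, test_W[k]) for k, w in train_W.items() if k in test_W)

def classify(test_W: dict, train_graphs: list, k: int = 10,
             threshold: float = 0.001) -> str:
    # RL list: (FW, category) for training graphs passing the NFP threshold.
    rl = [(feature_weight(W, test_W), cat)
          for W, cat in train_graphs if nfp(W, test_W) > threshold]
    votes = Counter(cat for _, cat in sorted(rl, reverse=True)[:k])
    return votes.most_common(1)[0][0] if votes else "ham"

# Usage: classify(build_graph([tokens]), [(spam_graph, "spam"), (ham_graph, "ham")])
```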
Analysis and Experiment
For classification with the Graph-based K-Nearest Neighbor method, a K value is used that determines the length of the list/array of candidate neighbours. The K value used in this testing is 10. The threshold value is also varied to see its effect and to find the most optimal threshold value in this research.
First Scenario Testing Analysis
The first test was conducted using Indonesian training data and Indonesian testing data. The threshold values used in this test were 0.001, 0.025, and 0.05. Based on the first test (Figure 5), with 10-fold cross validation and varying the threshold value, the highest average accuracy of 84.44 was obtained with a threshold of 0.001, with the highest single accuracy value of 100 found in sample data C. In the Indonesian dataset, the spam class in particular has almost similar patterns, which produces higher weight values on the edges; when the FW value is computed, graphs with higher weight values have a higher chance of being placed in that category.
Second Scenario Testing Analysis
The second test used 10-fold cross validation with training data in Indonesian and testing data in English (without translation), with a combination of threshold values of 0.0005 and 0.0001. Based on the results of the second test (
Third Scenario Testing Analysis
The third test used 10-fold cross validation with training data in Indonesian and English (without translation) and testing data in English (translated to Indonesian), with a combination of threshold values of 0.0001, 0.005, and 0.05. Based on the third test (Figure 7), by varying the threshold value, the highest accuracy of 90.82 was obtained with a threshold of 0.0001, with the highest accuracy of 92.4 in sample data A and C. It can also be seen that the F1-measure value is low, which is caused by the use of different languages in the training and testing data.
Fourth Scenario Testing Analysis
The fourth test was performed with 10-fold cross validation using training data in Indonesian and English (translated into Indonesian) and testing data in Indonesian and English (translated into Indonesian). The threshold values used in this test were 0.001, 0.0075, and 0.05. Based on the fourth test (Figure 8), by varying the threshold value, the highest accuracy of 96.27 was obtained at a threshold of 0.001, with the highest accuracy of 97.68 in sample data G. This scenario is considered the most optimal for cross-lingual handling, because it has a good combination of data partitioning and data translation. The best results among all the experiments come from this fourth scenario.
Fifth Scenario Testing Analysis
The fifth test was performed with 10-fold cross validation using training data in Indonesian (translated to English) and English, and testing data in Indonesian (translated to English) and English. The threshold values used in this test were 0.001, 0.075, and 0.05. Based on the fifth test (Figure 9), by varying the threshold value, the highest accuracy of 93 was obtained at a threshold of 0.001, with the highest accuracy of 94.78 in sample data F. This scenario also has a good combination of data partitioning and data translation for cross-lingual handling, but it did not give the best result.
Conclusion
Based on the experimental results, we found that the fourth scenario is the optimal one, with an accuracy of 97.86, using Indonesian (Bahasa) as the base language, with training data translated into Indonesian and testing data translated into Indonesian. It can be concluded that, for this dataset, Indonesian (Bahasa) is the best primary language for analysis. | 2019-02-17T14:19:52.827Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "48699a2eaf1574ba5b5cc61e8ec46d43239c94ae",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/971/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7bafa8409f850dbca5c4a3e91e0ef088238df31b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
252086782 | pes2o/s2orc | v3-fos-license | Investigation on the Effects of Modifying Genes on the Spinal Muscular Atrophy Phenotype
Introduction Spinal muscular atrophy (SMA) is an autosomal recessive neuromuscular disorder caused by the degeneration of motor neurons, with muscle weakness and atrophy that can lead to death in infancy. Duplication of exon 7/8 of the SMN2 gene reduces the clinical severity of the disease, which is defined as a modifying effect. In this study, we aim to investigate the expression of modifying genes related to the prognosis of SMA, namely PLS3, PFN2, ZPR1, CORO1C, GTF2H2, NRN1, SERF1A, NCALD, NAIP, and TIA1. Methods Seventeen patients who came to Trakya University, Faculty of Medicine, Medical Genetics Department, with a preliminary diagnosis of SMA, and eight healthy controls were included in this study after multiplex ligation-dependent probe amplification analysis. Gene expression levels were determined by real-time reverse transcription polymerase chain reaction and the delta-delta Ct method after isolation of RNA from the peripheral blood of patients and controls. Results SERF1A and NAIP gene expression, compared between group A and the B + C + D groups, and between group A and healthy controls, showed statistically significant differences (p = 0.037, p = 0.001). Discussion PLS3, NAIP, and NRN1 gene expression related to SMA disease has been reported before in the literature. In our study, the expression levels of the SERF1A, GTF2H2, NCALD, ZPR1, TIA1, PFN2, and CORO1C genes have been studied for the first time in SMA patients.
Introduction
Spinal muscular atrophy (SMA) was first described by Guido Werdnig in 1891 in two infant siblings. Seven SMA cases were later reported by Johann Hoffmann between 1893 and 1900. 1 SMA is a neuromuscular disease with autosomal recessive inheritance that leads to early death with muscle weakness and atrophy, and it has a prevalence of 1 in 5,000 to 1 in 10,000. The SMA-associated chromosomal location is mapped to the 5q11.2-5q13.3 region. In 1994, the association between the survival motor neuron 1 (SMN1) gene and SMA was reported in the 5q13.2 region after the exact chromosomal location of the gene was defined. 2 Exon 7/8 homozygous deletion in the SMN1 gene was detected in 95 to 98% of SMA patients, and point mutations in this gene were detected in 2 to 5% of patients. 3 Exon 7/8 homozygous deletion in the SMN1 gene is the main factor in the diagnosis of SMA. The SMN1 gene encodes the survival motor neuron (SMN) protein. As a result of a pathogenic variation, defective or low-level expression of the SMN protein leads to the loss of function of α motor neurons in the anterior horn region of the spinal cord, causing the skeletal muscles to weaken and shrink. It has been reported that the c.859G > C (p.Gly287Arg) variation in the seventh exon of the SMN2 gene increases the amount of transcript at the mRNA level. An increase in the copy number of the SMN2 gene influences the progression of the disease, and this has been defined as a modifying effect. [1][2][3][4] The SMN1 and SMN2 genes are mapped in the telomeric and centromeric parts of the chromosome 5q13.2 region. The nucleotide sequences of these two genes are more than 99% similar, encoding a 294-amino-acid-long, 28-kDa SMN protein. 5 Although the SMN protein produced by the SMN2 gene is sufficient for all cells, it is not enough for motor neurons. 6 As a result of studies performed with heterogeneous nuclear ribonucleoproteins (hnRNPs), the SMN protein was determined to interact with the arginine-glycine-glycine (RGG) box. After the SMN protein forms a complex with the hnRNPs and the RGG box, the complex interacts with pre-mRNA and nuclear mRNA and plays an important role in the processing and transport of mRNA. 7 Owing to faulty production of the SMN protein, snRNPs cannot interact with other molecules, and the functions of motor axons are negatively affected because of splicing errors. 8 The faulty production of the SMN protein leads to defective function of both molecular biological features and metabolic activities. 9 The clinical severity of SMA may differ among patients. Whereas some patients die in infancy, some with the same mutation may survive with many symptoms, such as muscle discomfort or wheelchair dependency, or with milder clinical symptoms. Exon 7/8 homozygous deletion in the SMN1 gene may thus cause different phenotypes in patients. Therefore, SMA is divided into five clinical subtypes according to physical examination findings: SMA type 0, type I, type II, type III, and type IV. 10 An increase in the copy number of the SMN2 gene reduces the clinical severity of SMA. 1 The results of studies conducted in recent years have revealed the existence of new modifying genes in SMA beyond the SMN2 gene. SMA is a monogenic disease but shows clinical heterogeneity; the severity of the SMA clinical picture is related to SMN2, and the disease manifestation must be subject to additional modifying effects.
Thus, the potential modifying genes were classified according to their functions: involvement in protein interactions with the SMN protein or in motor neuron survival, and effects on the promoter region, transcription, splicing, and expression. 11 This study aimed to investigate the association between the expression levels of genes considered to have modifying effects on SMA and the prognosis of the disease in patients with SMA.
Patients and Control Subjects
Seventeen patients (5 female and 12 male) who presented to the outpatient clinic of Trakya University Hospital, Medical Genetics Department, Genetic Diseases Diagnosis Center with a prediagnosis of SMA were included in the study. According to the clinical prediagnosis, six patients had SMA type I, eight had SMA type II, and three had SMA type III. The patients were aged between 9 months and 15 years, and the ages of the individuals in the control group were close to the average age of the patients. The control group comprised eight individuals who had no neurological disease and no relevant family history. In the SMA patients, deletions and duplications in the SMN1 and SMN2 genes were evaluated through the MLPA method using SMA carrier probemixes (P460-A1 and P060-B2, MRC-Holland SALSA). Patients with exon 7/8 homozygous deletion in the SMN1 gene were included in the study. ►Table 1 presents the detailed patient information.
The patients signed an informed consent form prior to participating in the study. Ethics approval was granted by the Scientific Research Ethics Committee of Trakya University Faculty of Medicine (No. 16/30 dated January 10, 2018).
Methods
A total of 2 mL of peripheral venous blood taken from the patients and healthy controls was placed into ethylenediaminetetraacetic acid (EDTA) tubes for RNA isolation, which was performed in accordance with the kit protocol (QIAGEN QIAamp RNA Isolation Kit, Germany). The concentration and purity (260/280 nm) of the isolated RNA samples were measured using a NanoDrop device. RNA samples with suitable concentration and purity were stored at -80°C. Each RNA sample was transformed into complementary DNA (cDNA) in accordance with the kit protocol (Thermo Fisher Scientific High-Capacity cDNA Reverse Transcription Kit, Lithuania). Subsequently, PLS3, PFN2, ZPR1, CORO1C, GTF2H2, NRN1, SERF1A-H4F5, NCALD, NAIP, and TIA1 gene expression studies were performed with TaqMan Gene Expression Assays (Thermo Fisher Scientific, Wilmington, MA, United States) on an Applied Biosystems StepOnePlus instrument (Thermo Fisher Scientific, Wilmington, MA, United States), using appropriate primers and assays created specifically for each gene. The procedure was repeated three times for each patient on a 96-well plate. The β-actin housekeeping gene was used as the endogenous internal control.
The delta Ct (ΔCt) values were calculated from the data obtained using the following formula: ΔCt = gene Ct - housekeeping Ct. 12 The values were then calculated using ΔΔCt and 2^(-ΔΔCt), and the gene expression levels were determined.
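A one-function sketch of the delta-delta Ct calculation described above (ΔCt = gene Ct - housekeeping Ct; fold change = 2^(-ΔΔCt)) is shown below; the Ct values are illustrative only.

```python
def fold_change(gene_ct: float, actb_ct: float,
                ctrl_gene_ct: float, ctrl_actb_ct: float) -> float:
    """Relative expression by the delta-delta Ct method."""
    d_ct = gene_ct - actb_ct                  # patient ΔCt
    d_ct_ctrl = ctrl_gene_ct - ctrl_actb_ct   # control ΔCt
    dd_ct = d_ct - d_ct_ctrl                  # ΔΔCt
    return 2 ** (-dd_ct)                      # fold change = 2^(-ΔΔCt)

print(fold_change(26.0, 18.0, 24.5, 18.2))  # illustrative Ct values only
```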
Results
Patients were divided into four groups based on the MLPA results. Group A: This group comprised eight patients with SMN1 exon 7/8 homozygous deletion and a normal SMN2 copy number (no increase in copy number) (P2, P7, P9, P10, P11, P12, P16, and P17). The most common clinical findings in these patients were inability to sit without support, tongue fasciculation, decreased or absent reflexes, muscle weakness and atrophy, and hypotonia. Group B: This group comprised four patients with SMN1 exon 7/8 homozygous deletion and an increased copy number of exon 7 in the SMN2 gene (P1, P3, P5, and P8). These patients could sit unsupported but never ambulated, and they had problems walking or could not walk.
Group C: This group comprised two patients with SMN1 exon 7 homozygous deletion/exon 8 heterozygous deletion and an increased copy number of SMN2 exon 7 (P4 and P6). Patients had muscle weakness, they could sit by themselves, and they had late muscle development.
Group D: This group comprised three patients with SMN1 exon 7/8 homozygous deletion and an increase in the copy number of SMN2 exon 7/8 (P13, P14, and P15). Patients had problems while walking and muscle soreness.
The patients included in this study were divided into three clinical subtypes based on the clinical classification criteria of the 1991 SMA International Consortium (SMA type I, n = 6; type II, n = 8; type III, n = 3). The Ct value of the NAIP gene could not be calculated in two patients (P2 and P10). This was interpreted as a deletion of the NAIP gene, and RNA primers were designed for confirmation. The expression of each gene against the control group was calculated as 2^(-ΔΔCt) based on the Ct values obtained at the end of each run, and statistical analysis was performed (►Table 3).
The results are presented as the mean ± standard deviation. The suitability of the quantitative data to the normal distribution was determined using the Shapiro-Wilk test. Comparison of the gene expression levels (PLS3, PFN2, ZPR1, CORO1C, GTF2H2, NRN1, SERF1A, NCALD, NAIP, and TIA1) between the groups (control, group A, and group B + C + D) and SMA types (type I, II, and III) was performed using the Kruskal-Wallis test.
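The group comparison can be reproduced in outline with scipy, as sketched below on invented expression values (the study's actual values are in Table 3); the pairwise Mann-Whitney follow-up is shown only as an illustration, since the text does not state which pairwise test produced the between-group p-values.

```python
from scipy.stats import kruskal, mannwhitneyu

# Invented 2^(-ΔΔCt) values for one gene in the three comparison groups.
group_a   = [0.10, 0.20, 0.15, 0.30, 0.25, 0.10, 0.20, 0.12]        # n = 8
group_bcd = [0.80, 1.10, 0.90, 1.30, 0.70, 1.00, 0.95, 1.20, 0.85]  # n = 9
controls  = [1.00, 0.90, 1.10, 1.05, 0.95, 1.20, 0.80, 1.00]        # n = 8

h, p = kruskal(group_a, group_bcd, controls)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
u, p_ab = mannwhitneyu(group_a, group_bcd)  # illustrative pairwise follow-up
print(f"A vs B+C+D: U = {u:.1f}, p = {p_ab:.4f}")
```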
SPSS version 20.0 (License No: 10240642) was used for statistical analysis by the Department of Biostatistics and Medical Informatics at Trakya University. A p-value of less than 0.05 was considered statistically significant.
Evaluation of Group A, Group B + C + D, and the Control Group Using the Nonparametric Kruskal-Wallis Test
A statistically significant difference in the SERF1A and NAIP gene expression levels was determined among group A, group B + C + D, and the control group using the Kruskal-Wallis test (p = 0.001) (►Fig. 1).
SERF1A Gene Expression
A statistically significant difference was found in SERF1A gene expression between group A and group B + C + D (p = 0.037) and between group A and the control group (p = 0.001). No statistically significant difference was observed in SERF1A gene expression between group B + C + D and the control group (p = 0.090) (►Fig. 2a).
NAIP Expression
A statistically significant difference was found in NAIP gene expression between group A and group B + C + D (p = 0.001) and between group A and the control group (p = 0.001). No statistically significant difference was observed in NAIP gene expression between group B + C + D and the control group (p = 0.873) (►Fig. 2b).
P12
SMA type III, group A. We predicted that in this patient, with exon 7/8 homozygous deletion in the SMN1 gene, a normal SMN2 gene copy number, and a clinical diagnosis of SMA type III, the phenotype was associated with a higher PLS3 gene expression level (9.8 times) compared with the other patients and the control group (►Fig. 3).
P10
SMA type II, group A. We predicted that in this patient, with exon 7/8 homozygous deletion in the SMN1 gene, a normal SMN2 gene copy number, and a clinical diagnosis of SMA type II, the phenotype was associated with a higher NRN1 gene expression level (4.29 times) compared with the other patients and the control group (►Fig. 3). The real-time polymerase chain reaction (PCR) analysis showed that the Ct value associated with NAIP gene expression could not be detected in this patient. Anticipating that this could be related to a possible deletion in the NAIP gene, we designed an RNA primer specific to the NAIP gene. Exon regions with no band patterns were detected by PCR, and agarose gel electrophoresis confirmed the deletion (►Fig. 4).
P2
SMA type I, group A. In this patient, with exon 7/8 homozygous SMN1 gene deletion and a normal SMN2 gene copy number, the Ct value associated with NAIP gene expression was not detected in the real-time PCR analysis (►Fig. 3). Anticipating that this could be related to a possible deletion in the NAIP gene, the deletion was likewise confirmed by PCR with NAIP-specific primers (►Fig. 4).
P4, P6
SMA type II, group C. In these two patients, we suggest that the heterozygous deletion of exon 8 of the SMN1 gene did not contribute to the clinical progression, since P4 and P6 had the same clinical diagnosis (SMA type II) as P14 and P15 in group D, who had exon 8 homozygous deletion in the SMN1 gene (►Fig. 3).
P13
SMA type III, group D. The patient with exon 7/8 homozygous deletion in the SMN1 gene and an increased copy number of exon 7/8 in the SMN2 gene had an SMA type III clinical diagnosis, unlike the other patients in group D (P14, P15). Thus, this could be related to the expression level of a gene other than those investigated in our study (►Fig. 3).
The comparison of the expression levels according to SMA clinical type showed that the PLS3 expression level increased in SMA type III and that the NAIP expression level decreased in SMA type I (►Fig. 5).
The mean PLS3 gene expression level was 1.8, 1.6, and 4.25 in SMA type I, type II, and type III, respectively. Although the PLS3 gene expression level increased in SMA type III, no statistically significant difference was found among the SMA types (p = 0.197) (►Fig. 5). This result could be related to the low number of patients in the SMA type III group (P5, P12, and P13); a statistically significant result might be obtained if the study were repeated with more patients. The mean NAIP gene expression level was 0.12, 0.81, and 0.46 in SMA type I, type II, and type III, respectively. Although the NAIP gene expression level was lower in SMA type I than in types II and III, no statistically significant difference was found (p = 0.081) (►Fig. 5). The low average NAIP expression level in SMA type I could be related to the failure to obtain the NAIP gene-specific Ct value in two of our patients (P2 and P10). Statistically significant results might be obtained if the study were repeated with more patients.
Discussion
SMA is an autosomal recessive neuromuscular disease characterized by weakness and atrophy in the proximal muscles. Homozygous deletion of the 7th and 8th exons of the SMN1 gene, or of the seventh exon alone, is found in 95% of patients. As a result of the deletions in these exons, weakness and atrophy in the skeletal muscles can be observed, as the SMN protein cannot be produced or is damaged. 13 Although the protein expressed by the SMN2 gene compensates for the deficiency of the protein that the SMN1 gene cannot express, some patients do not achieve the expected level of recovery. The better clinical course in SMA patients with an increased number of SMN2 gene copies has been interpreted as the modifying effect of the SMN2 gene. 14 In some single-gene disorders, the effect of a mutation in a particular gene on the phenotype may differ among individuals carrying the same mutation. This can be explained by changes in expressivity and/or penetrance. Similar phenotypic differences can also be detected in SMA patients. The SMA consortium has clinically divided SMA into five subtypes, namely SMA type 0, SMA type I, SMA type II, SMA type III, and SMA type IV, taking into account the physical examination findings of the patients. 10 In his study on humans and mice, Nadeau 15 reported that modifying genes could be a factor affecting the clinical course of a disease (age and penetrance). The detection of modifying genes will contribute to a better understanding of the pathogenesis of SMA and to the development of drug studies. 14 Riordan et al also emphasized the importance of modifying genes in their studies. 16 In the present study, to explain the prognostic differences in patients diagnosed with SMA, we investigated the expression levels of the PLS3, PFN2, ZPR1, CORO1C, GTF2H2, NRN1, SERF1A, NCALD, NAIP, and TIA1 genes, which we predicted to have modifying effects, in the patient and healthy control groups. We found a statistically significant difference in the SERF1A and NAIP gene expression levels (p = 0.001). The lower SERF1A gene expression level and worse prognosis in the patients in group A could be explained by a modifying effect of the SERF1A gene. Considering the results on the NAIP gene expression level, we found that the NAIP gene also had a modifying effect.
Arkblad et al investigated the relationship between SMA and the SERF1A gene using the MLPA method. They reported that a deletion in one allele of the SERF1A gene was detected in all SMA type I patients included in the study, in 50% of SMA type II patients, and in 31% of SMA type III patients. However, no significant relationship was observed between the SERF1A gene copy number and the clinical severity of SMA. 17 In their study of 26 SMA patients, Amara et al found one copy of exon 1 of the SERF1A gene in 60% of the mild SMA type I cases and two copies in the clinically mild SMA type II and type III cases. 18 Medrano et al reported that deletions in the NAIP and SERF1A genes were observed in approximately 73 and 35% of SMA type I patients, respectively. Overall, deletions in the NAIP and SERF1A genes were detected in 90% of SMA type II and type III patients and in 21% of SMA type I patients, suggesting that these two genes could have modifying effects. 19 In 34 SMA patients, Tran et al investigated the SMN2 and NAIP genes, which are considered to have modifying effects. The SMN2 gene copy number was detected as three copies in only one of the 13 patients with SMA type I and two copies in the remaining patients; three copies in 9 of the 11 patients with SMA type II and two copies in the remaining patients; and two copies in 2 of the 10 patients with SMA type III, three copies in five patients, and four copies in the remaining three patients. This result was reported as a modifying effect of the SMN2 gene. The NAIP gene was observed as a homozygous deletion in five SMA patients, a single copy in 20 patients, and a normal copy number in nine patients, which was interpreted as a modifying effect of the NAIP gene. 20 No studies have yet examined the relationship between SMA prognosis and the expression levels of the SERF1A and NAIP genes. The SERF1A gene functions as a general regulator of protein aggregation, and the NAIP gene functions as a negative regulator of motor-neuron apoptosis. 21 According to these functions, the NAIP gene belongs to the IAP (Inhibitor of Apoptosis) family and has a direct effect on the motor neurons relevant to SMA prognosis, while the SERF1A gene may regulate the aggregation of SMN proteins; the SERF1A gene is located near the SMN1 and SMN2 genes, but no functional study has yet examined the contribution of these two genes to SMA prognosis.

Fig. 4 Agarose gel electrophoresis image of the NAIP gene (RNA primers designed specifically for the NAIP-208 ENST00000517649.6 transcript). The results of the real-time PCR analysis showed that the cDNA of control 1 (C1), control 2 (C2), P2 with an undetected Ct value, and NC (negative control) were amplified by PCR to exclude possible contamination. Amplicons obtained by PCR were subjected to agarose gel electrophoresis and imaged in an ultraviolet transilluminator. No band pattern was seen in P2, P10, and NTC. This result excluded possible contamination while confirming the NAIP gene deletion. The nonspecific band patterns seen in exons 8 to 12 and exons 9 to 11 in P2 and P10 were interpreted as possible amplification of the NAIP pseudogene. PCR, polymerase chain reaction.
Thus, the lower expression levels of the SERF1A and NAIP genes in SMA type I patients, together with the statistical comparisons between the groups in our study, support the idea that the SERF1A and NAIP genes have modifying effects. 19,20 When the gene expression levels were compared across the SMA clinical types (SMA type I, SMA type II, and SMA type III), PLS3 gene expression increased in SMA type III and NAIP gene expression decreased in SMA type I. However, we did not find a statistically significant relationship between the SMA clinical types and the expression levels of the PLS3 and NAIP genes. We considered this result to be related to the low number of patients included in our study; the result could become statistically significant if the number of patients were increased.
The NRN1 gene synthesizes a protein required for neurite growth. In the only study in the literature examining the relationship between SMA and the NRN1 gene, the expression levels of the PLS3 and NRN1 genes were analyzed in nine SMA patients from four different families. Both siblings in the first family, P1 (21 months old, unable to walk) and P2 (14 years old, able to walk), were diagnosed with SMA type III, and the expression level of the NRN1 gene was reported to be 0.9 and 1.4 times higher in P1 and P2, respectively. This result was interpreted as a modifying effect of the NRN1 gene. No statistically significant difference was found between the PLS3 gene expression level and the clinical course of SMA. In the second family, three siblings (P3, P4, and P5) diagnosed with SMA type III were included in the study, and the clinical course was more severe in P3 and P5. Contrary to the result for the first family, the PLS3 gene had a modifying effect but the NRN1 gene did not. In the third family, two siblings were diagnosed with SMA type III, and the expression level of the PLS3 gene was found to be associated with the clinical course. Moreover, the NRN1 gene expression level was found to be unrelated to the clinical course in the third family, so the gene had no modifying effect there. In the fourth family, two siblings diagnosed with SMA type II (P9) and type III (P8) were included in the study. The PLS3 gene expression level was determined to be 1.7 in the sibling diagnosed with SMA type II (P9) and 0.8 in the sibling diagnosed with SMA type III (P8). Thus, the NRN1 gene was reported to have no modifying effect. 21,22 In our study, NRN1 gene expression was higher (4.29 times) in our patient with SMA type II (P10) than in the other patients and the healthy control group. This result supports the literature reporting a modifying effect of the NRN1 gene.
In a study investigating the relationship between PLS3 gene expression and SMA in 88 SMA patients (29 males under 11 years old, 12 males over 11 years old, 29 prepubertal females, and 18 postpubertal females), the highest PLS3 gene expression was found in SMA type III postpubertal females. PLS3 was reported as a modifier gene in females, as its expression varied according to age and puberty stage. 23 Analyzing the PLS3 gene expression levels in 19 SMA type I patients, 21 SMA type II patients, 25 SMA type III patients, and 59 healthy controls, Yanyan et al evaluated the SMN2 copy number using the MLPA method and found three copies of the SMN2 gene in 76.9% of the patients, two copies in 21.5%, and four copies in 1.5%. The PLS3 gene expression levels were found to be 56.7% lower in SMA type II patients with one or two copies of the SMN2 gene than in SMA type III patients, and 62.6% lower in SMA type II patients with three copies of the SMN2 gene than in SMA type III patients. 24 Our study showed that PLS3 gene expression was increased 9.8 times in our SMA type III patient (P12, with SMN1 gene exon 7/8 homozygous deletion and SMN2 gene exon 7/8 without an increased copy number) compared with the other patients and the healthy controls. Although the SMN2 copy number of this patient was normal, the good clinical course could be related to the increased PLS3 gene expression. This result can be interpreted as a modifying effect of the PLS3 gene.
He et al investigated copy number changes in the SMN2, NAIP, GTF2H2, and H4F5 genes in 157 SMA patients and found a single SMN2 copy in 8.72% of patients, two copies in 73.83%, three copies in 15.43%, and four copies in 2.01%. The fact that all patients with a single SMN2 copy were diagnosed with SMA type I supports the findings on the modifying properties of the SMN2 gene. In the same study, NAIP gene copy number changes were evaluated in 149 patients; a homozygous deletion was found in 15 patients, a heterozygous deletion in 126 patients, and a normal copy number in eight patients. 25 Liu et al examined the NAIP and GTF2H2 genes in 75 patients, consisting of 41 SMA type I patients, 29 SMA type II patients, and five SMA type III patients. The SMN2 gene was found as two copies in 28 patients, three copies in 29 patients, and four copies in 18 patients. They reported that NAIP and GTF2H2 gene deletions were detected in five patients (a fourth-exon deletion in four patients and a fifth-exon deletion in one patient) and 10 patients, respectively. 26 In our study, we did not find any statistically significant results supporting a modifying effect of the GTF2H2 gene expression level.
In their study conducted on mice, Torres-Benito et al found that antisense oligonucleotide therapy targeting the NCALD gene was effective for SMA disease. 27 No study investigating the NCALD gene expression level in SMA patients has been reported in the literature. Our study is the first to examine the relationship between SMA and the NCALD gene expression level. We found no statistically significant difference between the SMA disease phenotype and the NCALD gene expression level.
More than one study has examined the modifying effect of the ZPR1 gene in SMA, and most of these studies were conducted by Gangwani et al. In their study on mice, Gangwani et al reported that deficiency of the ZPR1 protein synthesized by the ZPR1 gene caused neurodegeneration. 28 Ahmad et al analyzed the effects of increasing and decreasing the expression level of the ZPR1 gene in mice with SMA and found that a low expression level of the ZPR1 gene caused loss of motor neurons, hypermyelination of the phrenic nerves, respiratory distress, and a more severe clinical course. They suggested that a high expression level of the ZPR1 gene stimulates neurite growth and repairs axonal growth defects. 29 Genabai et al also reported that the ZPR1 gene had a positive effect on SMN2 gene expression. 30 Note that most of the studies investigating the relationship between the ZPR1 gene and SMA are mouse studies. Our study is the first to analyze the relationship between the ZPR1 gene expression level and SMA in humans. No statistically significant relationship was found between the ZPR1 gene expression level and the SMA phenotype.
The TIA1 gene regulates alternative splicing at the seventh exon of the SMN2 gene. Owing to this feature, it has been defined as a positive regulator in SMA disease. 31 In the literature, there is no study investigating the relationship among the TIA1 gene copy number, the TIA1 gene expression level, and SMA. In this respect, our study is the first to analyze the association between the TIA1 gene expression level and SMA. No statistically significant difference was found between the TIA1 gene expression level and the SMA phenotype in this study. 32 Wadman et al investigated the PFN2 gene variations in SMA patients using the DNA sequence analysis method but could not find any significant relationship. 33 No statistically significant difference was found in our study, which is the first to examine the relationship between the PFN2 gene expression level and SMA.
In their study on the functions of PLS3 and CORO1C genes in SMA patients, McCabe et al 32 reported that PLS3 and CORO1C genes could interact with F-actin and SMN1 protein. 33 However, no study has yet investigated the expression level of the CORO1C gene in SMA disease. In this respect, our study is the first to analyze the relationship between the CORO1C gene expression level and SMA, and we found no statistically significant difference.
This study has some limitations. The number of patients is low. In addition, the study does not account for any effect of treatment on the studied modifier genes. Conducting similar studies in different populations with a larger number of patients can provide important insights into SMA and make significant contributions to the literature.
Conclusion
Relationships between the expression levels of the PLS3, NAIP, and NRN1 genes and SMA, which we investigated here, have previously been reported in the literature. However, no study has yet investigated the relationship between the expression levels of the SERF1A, GTF2H2, NCALD, ZPR1, TIA1, PFN2, and CORO1C genes and SMA. Therefore, this study is the first of its kind in the literature.
The results of the study support modifying effects of the SERF1A, NAIP, NRN1, and PLS3 genes in SMA, whereas we did not find statistically significant evidence for modifying effects of the GTF2H2, NCALD, ZPR1, TIA1, PFN2, and CORO1C genes.
Conflict of Interest
None declared.
Ethical Approval
The patients signed an informed consent form prior to participating in the study. Ethics approval was granted by the Scientific Research Ethics Committee of Trakya University Faculty of Medicine (No. 16/30 dated January 10, 2018). | 2022-09-07T05:11:33.934Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "393c6db92d9d53780998a14faa0ce1476fc0ceda",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "393c6db92d9d53780998a14faa0ce1476fc0ceda",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234321365 | pes2o/s2orc | v3-fos-license | Effect of Compressive Deformations on the Final Microstructure of a Low Carbon High Silicon Bainitic Steel Thermomechanically Processed
Due to a combination of advantages, high-performance steel components, especially for automotive applications, are generally forged parts. In the forging industry, bainitic steels are increasingly used because they can shorten the processing chain and reduce energy consumption. In this case, the bainitic microstructure can be obtained immediately after forging, with controlled cooling and without any subsequent heat treatment. In the present work, the effect of thermomechanical routes performed in the austenitic and bainitic fields on the final microstructure and hardness of the 18MnCrSiMo6-4 bainitic steel is discussed. Thermomechanical processing routes were tested and evaluated in a Gleeble® 3800 testing machine with one- and two-step deformation. In both cases, the sample height was reduced by 40% and the strain rate used was 0.1 s-1. It could be shown that plastic deformation in the bainite field induces the bainite transformation. The results also show a strong dependence of the bainite morphology on the deformation temperature. Moreover, knowledge of the hot and warm stress-strain curves is an important result, because it allows estimating the stress and the energy consumption per unit volume necessary to deform the material.
Introduction
In the last decades, the excessive consumption of natural resources has had visible consequences throughout the planet. This fact, associated with current governmental requirements, such as the Rota 2030 1 and the National Action Plan on Energy Efficiency - NAPE 2 programs, has led industries to implement more efficient production systems, with the rational use of these resources at all stages of the manufacturing processes.
Thus, research projects related to the forging process have prioritized energy savings by eliminating heat treatment steps and developing thermomechanical forging treatments - such as the joint program AiF-DFG EcoForge 3 and the international cooperation project BRAGECRIM, entitled "Energy-efficient manufacturing chain for advanced bainitic steels based on thermomechanical processing" 4 . In this sense, processes such as direct quenching, associated with the use of new materials - such as new bainitic steels 5,6 - appear as an alternative to reduce energy waste.
The chemical composition of the new bainitic steels appears promising for forging applications, since they have great potential to reduce the processing chain of forged parts and save energy [7][8][9][10][11][12][13] . In the new bainitic forging steels, the C content is limited to around 0.2 wt-% to avoid welding problems 14 . C and Mn decrease the Bs temperature (which has a microstructural refinement effect) but, according to Caballero et al. 15 , Mn segregations need to be avoided to suppress martensite formation and the deterioration of mechanical properties. The effect of Si is well established: it retards the bainite kinetics, stabilizes the remaining austenite by carbon enrichment, and hence suppresses cementite formation during the bainite transformation. Mo and Cr increase the hardenability, allowing a wider range of cooling rates to be employed.
The bainitic microstructure is well known for its adequate balance of mechanical and metallurgical properties, such as hardness and toughness. As shown by Caballero et al. 16,17 , impact toughness evaluation of different continuously cooled cementite-free bainitic microstructures demonstrated that microstructures consisting of lath-like upper bainite exhibit higher impact toughness values than those with a granular bainite morphology. In this sense, to achieve the bainitic microstructure and the desired properties, the processing window should be investigated to find the best processing parameters for each application.
In this work, alternative thermomechanical processing routes to the traditional quenching/tempering processes and the "classical" isothermal treatment were tested and evaluated. The main objectives of this work are: (i) determination of the warm and hot stress-strain curves, with emphasis on the ultimate compressive stress required to deform the material at the respective deformation temperature; (ii) evaluation of the final microstructural condition of the steel regarding its morphology and hardness. Thermomechanical experiments were conducted with the 18MnCrSiMo6-4 bainitic steel. Deformations in the austenitic and bainitic fields were performed, with subsequent direct and controlled cooling after compressive deformation. In this way, the influence of the different processing routes is analyzed and more knowledge is obtained about the formation of the bainitic microstructure.
Investigated material
The 18MnCrSiMo6-4 (1.801, HSX 130HD) steel in its as-received condition was a hot-rolled Ø 43 mm bar, cooled in natural air (~1 °C/s). The chemical composition is shown in Table 1.
The microstructure of the as-received material consists mainly of bainite, with a small volume fraction of pro-eutectoid ferrite. Figure 1 presents the microstructure of the as-received material. Figure 2 shows the Gleeble® 3800 universal testing machine used to perform the alternative processing routes presented in Figures 3-7.
Thermomechanical routes
A strain rate of 0.1 s-1 was used to apply the deformations at temperatures of 950 °C, 850 °C, and 500 °C. The deformation temperatures and cooling rates were defined based on the CCT diagram of the material under study and chosen according to industrial applications. All process parameters used in the experiments can be seen directly in Figures 3-8.
Thus, the processing routes represented in Figures 3-7 were tested to obtain more in-depth knowledge about the final properties of the material under these specific processing conditions. These tests resemble the forging process with respect to the stress state; their main limitation is the maximum strain rate that can be applied. Nevertheless, valuable information can be gained for adequate processing of this material on an industrial scale.
The tests used cylindrical samples of the material under study, 10 mm in diameter and 15 mm in height, which were deformed (i) in a single step, reducing the height by 40%, and (ii) in combined steps, 20% + 20% in the austenitic field or 20% in the austenitic field + 20% in the bainitic field.
The samples of the thermomechanical processing routes represented in Figures 5 and 7 were deformed in two distinct stages. First, a deformation of 20% was applied in the austenitic field. In the second stage, a further 20% deformation was applied in the low-temperature austenitic or bainitic field, taking as reference the sample height after the first deformation step. Table 2 summarizes the tests performed, together with the parameters used.
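As a point of reference, the true (logarithmic) strains implied by these height reductions can be computed directly; the short sketch below is illustrative and not part of the original experimental procedure. Note that the second 20% step is taken from the already-reduced height, so the two-step routes end at a slightly larger final height (9.6 mm) than the single 40% step (9.0 mm).

```python
# Illustrative true-strain calculation for the compression routes described
# above (phi = ln(h0 / h)); values are derived from the stated sample
# geometry, not taken from the paper's tables.
import math

h0 = 15.0                        # initial sample height, mm

# (i) single-step route: 40% height reduction
h_single = h0 * (1 - 0.40)       # 9.0 mm
phi_single = math.log(h0 / h_single)

# (ii) two-step route: 20% reduction, then 20% of the new height
h_step1 = h0 * (1 - 0.20)        # 12.0 mm
h_final = h_step1 * (1 - 0.20)   # 9.6 mm
phi_step1 = math.log(h0 / h_step1)
phi_total = math.log(h0 / h_final)

print(f"single step: h = {h_single:.1f} mm, true strain = {phi_single:.3f}")
print(f"two steps:   h = {h_final:.1f} mm, step-1 strain = {phi_step1:.3f}, "
      f"total = {phi_total:.3f}")
```

The per-step true strain of about 0.22 is consistent with the statement that the two-step deformations follow the flow curves up to a strain of about 0.2.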
Stress-Strain curves
In conventional hot and warm forging, the billet is heated to its respective forging temperature to reduce the yield stress and increase deformability. The forging industry therefore has a strong interest in quantifying the stress required to carry out the forging and, thus, in predicting operating costs. Figure 8 shows the stress-strain curves of the 18MnCrSiMo6-4 steel after deformations at temperatures of 950 °C, 850 °C, and 500 °C, obtained by applying the parameters shown in Table 2. The two-step deformations (experiments 3 and 5) follow the curves below up to a strain of 0.2.
As the strain rate is constant, the stress increases with increasing strain, reaching a steady state characteristic of elastoplastic deformation. As expected, the material exhibits lower strength as the deformation temperature increases. The stress necessary to promote deformation in the bainitic field is markedly higher (up to 4 times) than that for deformations in the austenitic field. This fact is associated with the deformation-induced bainitic transformation, in addition to the higher strength of the metastable austenite at lower temperature.
In hot deformation, the flow stress depends mainly on the strain rate and the deformation temperature and, as the strain rate and temperature were kept constant during the test, the Ludwik-Hollomon adjustment (Equation 1) was chosen. This is the most common equation to describe the material flow behavior: the flow curve gives the stress k_f necessary for plastic flow of the material under a given strain φ, where the factor C is the strength coefficient and n the strain-hardening exponent:

k_f = C φ^n (1)

Table 3 shows the characterization of the flow stress-strain curves according to the Ludwik-Hollomon equation.
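In practice, C and n are obtained by fitting Equation 1 to the measured flow curve. The sketch below shows one way to do this in Python; the (φ, k_f) data points are hypothetical placeholders, not values from Table 3 or Figure 8.

```python
# Sketch of fitting the Ludwik-Hollomon relation k_f = C * phi**n to a
# measured flow curve. The data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def ludwik_hollomon(phi, C, n):
    """Flow stress k_f [MPa] as a function of true strain phi."""
    return C * phi**n

phi = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])      # true strain
kf = np.array([95.0, 118.0, 133.0, 145.0, 163.0, 177.0])  # flow stress, MPa

(C, n), _ = curve_fit(ludwik_hollomon, phi, kf, p0=(200.0, 0.2))
print(f"C (strength coefficient) = {C:.0f} MPa, n (hardening exponent) = {n:.3f}")

# The fitted curve can then be evaluated at any strain of interest, e.g.
# phi = 0.4, to estimate the stress required for the full deformation.
print(f"k_f(0.4) = {ludwik_hollomon(0.4, C, n):.0f} MPa")
```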
In this way, through Equation 1 it is possible to obtain the stress needed to promote deformations up to φ = 0.4. Many parameters affect the material's formability; among them, superficial defects in the piece are a limiting factor. Under the deformation conditions employed, no cracks were observed in the samples. However, formability is not the only consideration: works such as 16,18 discuss the room-temperature mechanical properties obtained with bainitic microstructures in steels with applications similar to the 18MnCrSiMo6-4 steel, and show that toughness, strength, and hardness depend strongly on the microstructure. Figure 9 shows the ultimate compressive strength values required to deform the material under the test conditions described in Table 2.

Metallographic analysis

Figures 10-13 show the microstructures related to the physical simulation experiments performed. In this case, the metallographic analysis shows the final microstructural condition of the tested samples, i.e., the refinement of the microstructure and its constituents. Therefore, it is possible to project the final mechanical properties of the forged component with respect to the thermomechanical route employed. In the work of Caballero et al. 16 , microstructural characterization and impact toughness evaluation of different continuously cooled cementite-free low-carbon bainitic steels demonstrated that microstructures formed mainly by lath-like bainite exhibit higher impact toughness values than those with a granular bainite morphology. The larger crystallographic packet size of granular bainite shows evidence of low resistance to crack propagation during cleavage fracture.
In the sample deformed at 950 °C (Figure 10a, b), the microstructure consists of lath-like bainite and a small amount of pro-eutectoid ferrite. This microstructure is associated with the high density of dislocations generated in the deformed austenite; at lower austenitization temperatures, recovery and recrystallization mechanisms are less likely to occur. Thus, the formation of a high density of dislocations in austenite suppresses the ferrite transformation, which occurs by a shear mechanism. Similar results were found by the authors of 19,20 and, according to 21 , the high density of defects is considered one of the main reasons for the stabilization of deformed austenite.
The samples deformed at 850 °C (Figure 11a, b) show a large amount of pro-eutectoid ferrite and lath-like bainite in the microstructure. When the deformation temperature decreases, a large amount of ferrite precipitates, and the deformation then has a stabilizing effect on the bainite transformation. This means that the subsequent bainite transformation is hindered, in agreement with the results obtained by Lin-xiu et al. 19 .
On the other hand, the formation of the ferritic phase leads to an increase in the carbon content of the untransformed austenite, and this has a dual effect: according to Khlestov et al. 20 , in the upper bainite range, carbon has a destabilizing effect on deformed austenite. Deformation has several effects on the bainitic transformation, which can retard or accelerate it; thus, the deformation parameters and the transformation temperature have a great impact on the bainite kinetics. Figure 12a, b and Figure 13a show that the microstructure resulting from experiments 4 and 5 consists mainly of granular bainite, martensite, and retained austenite. Deformation at 500 °C leads to the formation of granular bainite and stabilizes the deformed austenite, which later transforms into martensite.
Microscopically, there is no difference whether the deformation is applied in one or two steps. The presence of a hard phase such as martensite in a bainitic microstructure would be undesirable because it could be detrimental to toughness.

Hardness

Table 4 shows the final hardness of the samples in relation to the thermomechanical route employed. There was no large variation in the hardness of the samples submitted to the different thermomechanical treatments. However, the pro-eutectoid ferritic phase slightly decreases the hardness of the material, in direct relation to its volume fraction, as seen in experiments 2 and 3. The fact that the hardness of bainite increases linearly with the carbon concentration 22 , associated with the presence of the martensitic phase, justifies the small increase in the average hardness values of the samples from experiments 4 and 5.
However, refinement of the bainitic microstructure and work hardening increased the final hardness by 20% compared with the as-received sample. Finally, despite the different bainite morphologies, according to Kamada et al. 23 the final hardness of the bainitic phase is independent of the austenite grain size, even though the latter influences the bainite sheaf thickness and the strength. For mixed microstructures, the hardness depends on the transformation temperature and the composition.
Conclusions
The present study reports the effects of metastable austenite deformation on the final morphology of the transformation products, especially the bainitic phase, as well as on the mechanical properties. These data are of practical importance for the selection of optimum thermomechanical processing conditions applicable on an industrial scale. The major conclusions drawn from the present investigation are: • The deformation temperature has a critical effect on the bainitic transformation and its morphology. High deformation temperatures suppress pro-eutectoid ferrite formation and favor the transformation of deformed austenite into bainite. Deformation at lower austenitization temperatures forms pro-eutectoid ferrite, which has a large influence on the further phase transformations. Thus, deformation at higher temperatures should be preferred, because it offers the best balance between the final microstructure (lath-like bainite) and energy consumption. • The final microstructure of the samples deformed in the bainitic field (in one or two stages) is more refined. However, the large size of the granular bainite packets and the presence of martensite have a detrimental effect on the mechanical properties, in addition to consuming more deformation energy, in accordance with the stress-strain curves.
• The results indicate a great potential for the use and application of the new continuous-cooling bainitic steels, because subsequent heat treatment, such as quenching and tempering, can be replaced by adequate forging strategies and controlled cooling. | 2021-05-11T00:06:58.126Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e7d7d5e554ef8767ea7cb9aa36dda46b30019282",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/mr/a/mYMcJnSR6jFzvzNJ7PV5r8M/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36ef7aa0b6e40d1674c57ec4df11cf61ac24367d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
234925727 | pes2o/s2orc | v3-fos-license | Implementation of Curriculum 2013 Through the Department Development Program at SDIT Daarul Muttaqien
The female development program is coaching and guidance given specifically to female students, and its implementation affects the implementation of the curriculum in schools. The research objectives are to describe and analyze the female development program, including: its forms, methods, and media; curriculum implementation; and the impact of the female development program on curriculum implementation. This is qualitative research of the case-study type, with data collected through interviews, documentation, and participant observation. Data validity was tested for credibility through triangulation of techniques and sources, as well as for transferability, dependability, and confirmability. Data were analyzed through condensation, presentation, and conclusion drawing. The results showed that (1) female development is implemented in the form of personality development, through adab/morals building and menstruation guidance, and skills development, through batik, weaving, embroidery, cooking, and make-up training; (2) the implementation methods are lectures for morals and menstruation, counseling for menstruation guidance, and practice for all skills training; (3) the media are manuals, menstrual teaching aids, batik equipment, weaving equipment, embroidery tools, make-up kits, and cooking equipment; (4) curriculum implementation rests on the Foundation's decree, an integrated system, a spiritual paradigm, school culture, and integrated treatment; (5) the impacts include faith and obedience (KI-1), politeness and helpfulness (KI-2), increased knowledge about femininity (KI-3), and creative and skilled students (KI-4). Suggestions: female training should be carried out more than once a week; the menstruation guidance method could include discussion and problem solving; and schools should provide a special room for menstruation guidance, such as a women's clinic.
INTRODUCTION
Education is also referred to as a long series of processes carried out systematically in an effort to build a whole human being, both morally and spiritually, encompassing cognitive, affective, and psychomotor development, so that the process can form useful and dignified individuals in religious, national, and state life. Law No. 20 of 2003 on the National Education System, Article 4, describes the principles of education implementation, among others: "education is carried out democratically and fairly, without discrimination, upholding human rights, religious values, cultural values, and national diversity; as a systemic unit (open and meaningful); cultivating and empowering students; setting an example, building willpower, and developing students' creativity; and implemented by utilizing all layers or components of society." [1] The implementation of education, as a forum for the formation of outward and inner character and competence, is carried out in a structured and systematic manner by educational institutions, oriented toward achieving the national objectives of education. To achieve all of this, various steps or efforts in the realm of process have been carried out by the government from time to time. The process of implementing education by public or private educational institutions is never separated from a program design or reference for its implementation. This design or reference is a procedure managed by each educational institution as the basis of the learning process, often called the curriculum. The curriculum is a very important input for the implementation of the educational process.
The curriculum is the design of a learning program determined and applied by an educational institution to support the learning process, developed based on the conditions and situation of the institution or school. For this reason, in the 2013 Curriculum, character education receives very intense attention. The intensity of character education in the 2013 Curriculum has been reinforced by Presidential Regulation of the Republic of Indonesia Number 87 of 2017 on Strengthening Character Education; Article 3 states that character education is carried out through the application of Pancasila values, namely religious, honest, tolerant, hardworking, disciplined, creative, independent, democratic, curious, nationalist, loving the homeland, communicative, appreciative of achievement, fond of reading, peace-loving, caring for the environment, caring for others, and responsible [2].
Character development through non-academic pathways in this school is managed in various program activities, including practicum activities of thoharoh (purification) worship, prayer and study, school discipline enforcement, and, most specifically, the Bina Keputerian activities, which are devoted to refining and strengthening the character of the nation's daughters as important figures for the continuity of the nation. The school gives very important attention to the character development of girls, considering that in Islam women have an important role in shaping the life of a nation, so their character and personality need to be strengthened early on so that they become productive, tough, reliable, and virtuous human beings. Research from Sha'diyah explains that women have a very important position in Islam [3]. Islam respects women and considers their position equal to that of men. Islam encourages women to be educated and to have a social role in society, such as in politics, the economy, health, and so on. Women also have a role in the care of their families, because women are leaders in the household.
Research from Tuwu in the journal Al Izzah explains that "the work of women in the public sector has an impact on increasing family economic income, which strengthens the role of women in strengthening the family and state economy." [4] It is therefore not unusual that the process of preparing tough women must begin early, especially in the modern era of advanced technology, which, like the two sides of a coin, presents opposing faces. On one side, opportunities for crime and harassment arise very easily, while on the other side the competition for competence in seizing opportunities is getting tighter. If women are not honed in their thinking skills, soft skills, and personal toughness, the negative side of modernization and the sophistication of the world of technology and information may drag them into the abyss, and vice versa.
Based on the statement above, this research aims to describe and explain the forms, methods, and media of the female development program in schools; to describe and explain the implementation of the 2013 curriculum in schools; and to describe and explain the impact of the female development program on curriculum implementation in schools.
METHOD
This research uses a qualitative approach, that is, a study that explains natural conditions and describes the results in short, solid, and clear sentences. Riyanto explains that descriptive research is directed at explaining the symptoms, facts, and events of a condition in a particular area [5]. This qualitative research uses the case-study type, in which research is carried out in depth and in great detail. Riyanto states that research with a case approach aims to learn in depth about social events that occur in individuals, groups, institutions, and communities [5]. The first data collection technique is the in-depth interview, conducted by asking questions of the informants. Riyanto explains that the interview is a data collection method requiring a systematic question-and-answer process between researchers and informants. Second, participant observation is conducted by observing all activities and events that occur at the research site [5]. Riyanto states that participant observation is observation in which the observer takes part in the life of the people being observed [5]. Third, documentation is the collection of texts, images, books, and so on obtained from the research location. Guba and Lincoln [5] explain that documents are all materials in the form of writings or films used by researchers for their research purposes.
The first data validity test is the credibility (trust) test of the research results, conducted through technique triangulation and source triangulation. Technique triangulation tests the trustworthiness of the data by combining several data collection techniques. Satori & Komariah state that testing data credibility through technique triangulation means applying various data collection techniques to the same data sources [6]. Source triangulation tests the trustworthiness of the data by matching research data from one source against other sources. Satori & Komariah state that confidence in research results can be increased by seeking data from various interconnected sources [6]. Second, the transferability test is met by reporting the results of the study in a brief, systematic, and clear manner. Satori & Komariah explain that research results meet transferability when the research report provides very clear, complete, and systematic information [6]. If a reader obtains a clear picture of the results of the research, then it meets the transferability standard. Third, the dependability test is a consistency test conducted through an audit by an expert auditor (in this case, the research supervisor). Stainback [6] explains that reliable research results relate to the degree of consistency and stability of the results; this test therefore checks that the data, their sources, and the techniques used to obtain them can be clearly traced. The data obtained must not be untraceable, either in how they were obtained or in who revealed them. Fourth, the confirmability test is a test of the research data conducted by confirming with the data sources (informants), so that the data obtained do not differ much from what the informants conveyed. Satori & Komariah explain that a study has a high degree of objectivity if the existence of the data can be traced definitively, and the research is said to be objective when the results of the study have been agreed upon by the public [6].
Data were analyzed through condensation, data presentation, and conclusion drawing. First, condensation: Sugiyono explains that analysis is carried out at the time of data collection and, after data collection, within a certain period [7]. While conducting the interviews, the researchers already performed a limited analysis of the interview answers. Second, data presentation displays the research data in the form of writing or sentences, images, tables, and so on. Sugiyono states that in qualitative research the presentation of data can be done in the form of brief descriptions, charts, relationships between categories, flowcharts, and the like [7]. Third, conclusion drawing takes the essence of the research results. Riyanto states that making conclusions is the activity of taking the essence of the data collected during the research and expressing it in short, clear sentences [8]. The instrument grid of this study can be seen in the following table. The design of this research can be described as follows:
RESULTS AND DISCUSSION
The results of the study are the data collected or obtained by the researchers through various research techniques (interviews, documentation, and observations) during the research process. The data on curriculum implementation through this female development program can be displayed in the following matrix of research results:

1. Media of female development. The media used for the female development activities are as follows: for adab/akhlaq building, a guidebook (material); for menstruation guidance, a material book, dolls, sanitary napkins, and menstrual cards; for batik, batik ink or dye, a dipper, batik cloth, a pan, small wax candles, and so on; for weaving, panjalin blades, bamboo, pandanus, paper, cutting tools, woven patterns, and rattan; for embroidery, threads, needles, scissors, embroidery frames, and fabrics; for the culinary program (tata boga), kitchen utensils such as stoves, dishes, spoons, pans, and so forth; for make-up, make-up tools, Muslim clothes, and veils.

2. Implementation of the 2013 Curriculum. The implementation of the 2013 curriculum takes place at two levels. The first is the Foundation level, through the Foundation chairman's decree (SK) permitting curriculum implementation. The second is the school-unit level, with an integrated system, that is, combining the 2013 curriculum with the local (Islamic) curriculum; building a spiritual paradigm, that is, instilling religious values in the educational process at school; building integrated treatment, that is, building cooperation between the school and the home (parents); and building school culture through the habituation of good and Islamic habits.

3. Impact of the female (keputrian) development program on curriculum implementation. First, the impact on KI-1 (spiritual) is the increasing obedience and faith of students through personality building. Second, the impact on KI-2 (attitudes) is that students have good manners in speaking and behaving. Third, the impact on KI-3 (knowledge) is the increasing knowledge of students, especially knowledge about femininity that is not obtained in academic classes. Fourth, the impact on KI-4 (skills) is that students are more creative and have good skills in line with the training they follow.
Implementation of Female Development
The Bina Keputerian (female development) program is a coaching and guidance activity given specifically by the school to its female students, with the aim of providing experience, knowledge, and skills, both hard skills and soft skills, specifically in the field of femininity in accordance with the teachings of Islam. Mustari explains that student management is a service devoted to organizing, supervising, and serving students inside and outside the classroom, developing students' interests until they become capable (mature) at school [9]. In line with this statement, Kholifah, Nasution, & Basri, in the journal Ta'dibi, explain that female education is an educational or teaching activity carried out to shape the nature, behavior, and personality of a girl [10].
Form of Female Development
The form of a female development program is the type of coaching program used by a person or organization to foster women. The Bina Keputerian program is divided into two fields: the personality field, which shapes the personality of students so that they become true Muslimah, outwardly and inwardly, and the skills field, which provides skills, especially handicrafts, to students. This female development activity is carried out once a week according to the schedule determined by the school. Lestari, in the Untirta Civic Education Journal, states that extracurricular activities aim to develop the character, talent, and citizenship skills of students both in groups and individually [11]. Rahman states that self-development activities can take the form of counseling services covering learning difficulties and a person's personal and social life [12]. Dhuhani, in Jurnal Fikratuna, states that pesantren activities for santri are carried out through coaching, nurturing, and education to deepen the students' religious knowledge [13].
The first field, the personality building group, consists of attitude or adab building activities, which provide knowledge about the proper ways of acting, speaking, and socializing in the environment in accordance with Islamic teachings, and menstruation guidance activities, which provide knowledge about menstruation and how to purify oneself after menstruation. This program is in line with the research of Kholifah, Nasution, & Basri in the journal Ta'dibi, which states that ta'lim activities carried out every Friday discuss human conduct (adab) in daily life [10].
The second group, the skills field, consists of batik activities, which teach and equip students to make and produce good batik; embroidery activities, which teach and equip students to make and produce good and neat embroidery; weaving activities, which teach and equip students to produce neat and good woven products; culinary (tata boga) activities, which teach and equip students to make healthy food with good nutritional value; and make-up activities, which teach students how to apply make-up and how to wear Islamic clothing that covers the aurat in accordance with Islamic law, including the hijab or veil. Kompri states that the purposes of extracurricular activities are to improve students' abilities in the community; to channel and develop students' potential and talents; to train discipline, honesty, trust, responsibility, civility, and attitude; and to guide, direct, and train students [14].
Female Development Methods
A method is a technique used by the mentors or educators of a female development program to convey material or knowledge to the participants. The implementation of female development is divided into three main methods. First, the lecture method is used for personality building activities in the fields of adab/akhlaq and menstruation, because the personality field mostly conveys material and knowledge derived from books, hadiths, the Qur'an, and so on. As revealed by Kholifah, Nasution, & Basri in the journal Ta'dibi, female extracurricular activities can be carried out by seeking and delivering material, as well as by practice [10].
Second, the counseling or consultation method is used for personality building in the menstruation field, through consultations, discussions, and question-and-answer sessions between students and mentors about menstruation. As explained by Yusof, Zainuddin, & Hamdan in the journal Humanika Science, training and guidance improve the teaching and learning process, especially in the classroom [15]; this means that training and guidance on curriculum changes are needed to improve the results of classroom teaching and learning.
Third, the practice method is used in skill building activities because embroidery, batik, weaving, cooking, and make-up are skills that require practice. Practice is also used in personality building, especially about menstruation, by practicing how to use sanitary pads (with doll media) and how to purify oneself. As explained by Kholifah, Nasution, & Basri in the journal Ta'dibi, practice means giving examples to students so that the students can follow everything exemplified by the activity mentor [10].
The purpose of using these three methods is to facilitate, clarify, and accelerate the students' understanding of the material of the female development program being delivered. Nuryanto, in The Journal of Education, states that such activities are expected to meet students' needs to gain new experiences and knowledge that are beneficial for their future lives [16].
Female Development Media
Media are the equipment used by a person or group to support the work being done or to be done. First, the medium for personality development in the field of adab/akhlaq studies is a guidebook or material book. Second, the media used for menstruation guidance are books, dolls, sanitary napkins, and menstrual cards. Third, the media used for batik are batik ink or dye, a dipper, batik cloth, a frying pan, small wax, and so on. Fourth, the media used for embroidery are thread, needles, scissors, embroidery frames, and cloth. Fifth, the media used for weaving are panjalin blades, bamboo, pandanus, paper, cutting tools, weaving patterns, and rattan. Sixth, the media used for the culinary program are kitchen utensils such as stoves, plates, spoons, pans, and so on. Seventh, the media used for make-up are make-up tools, Muslim clothes, and headscarves. The purpose of using these media is to help students take advantage of them to facilitate the implementation of the program and to produce work appropriate to the female development field they are following. Kunandar explains that a principle of curriculum implementation in education units (schools) is that students must receive educational services aimed at improving and accelerating their potential, and at enrichment, through various techniques or methods, media, resources, and technologies that benefit the learning process [17].
CONCLUSIONS
The female development program is coaching and guidance given by the school specifically to female students, delivered in two forms of training: first, personal development, consisting of adab/akhlaq building and menstruation guidance; second, skills development, consisting of batik, embroidery, weaving, make-up, and culinary training. Female development is implemented using the lecture method for adab/morality and menstruation guidance, the counseling method for menstruation guidance, and the practice method for skills training in batik, weaving, embroidery, make-up, and cooking. The media used in female development include books for adab/morality building; material books, dolls, menstrual cards, and sanitary napkins for menstruation guidance; threads, needles, scissors, and others for embroidery; batik cloth, cloth dye, a dipper, a frying pan, and others for batik; panjalin blades, bamboo, pandanus, woven patterns, and rattan for weaving; cooking equipment such as pans, stoves, plates, etc. for the culinary program; and make-up tools, Muslim clothes, headscarves, and so on for make-up.
The implementation of the curriculum is the application of the national curriculum in schools, carried out by combining the national curriculum with the school curriculum, building a spiritual paradigm in the learning process, building Islamic habits (culture) in the school environment, and building collaboration with parents to monitor students' academic and non-academic achievement.
The impact of the female development program on curriculum implementation in schools is that students increase in faith and obedience to religion, reflected in KI-1 of the 2013 curriculum; students become more polite, courteous, and gentle with their peers and others, reflected in KI-2; students increase in knowledge, reflected in KI-3; and students increase in skills and creativity, reflected in KI-4. | 2021-05-22T00:02:48.205Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3b3c8f0ad3a8fa8666ac8a6b24bcb5670860ee61",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125955220.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "09079a29ca3b7ca88f87caf5ec428f50471c37e3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
212420081 | pes2o/s2orc | v3-fos-license | The GGCMI phase II emulators: global gridded crop model responses to changes in CO2, temperature, water, and nitrogen (version 1.0)
1. Department of the Geophysical Sciences, University of Chicago, Chicago, IL, USA. 2. Center for Robust Decision-making on Climate and Energy Policy (RDCEP), University of Chicago, Chicago, IL, USA. 3. Potsdam Institute for Climate Impact Research, Member of the Leibniz Association, Potsdam, Germany. 4. Department of Computer Science, University of Chicago. 5. NASA Goddard Institute for Space Studies, New York, NY, United States. 6. Joint Global Change Research Institute, Pacific Northwest National Laboratory, College Park, MD, USA. 7. Unité de Modélisation du Climat et des Cycles Biogéochimiques, UR SPHERES, Institut d’Astrophysique et de Géophysique, University of Liège, Belgium. 8. Met Office Hadley Centre, Exeter, United Kingdom. 9. Ecosystem Services and Management Program, IIASA, Laxenburg, Austria. 10. Department of Geography, Ludwig-Maximilians-Universität, Munich, Germany. 11. Department of Geographical Sciences, University of Maryland, College Park, MD, USA. 12. Texas Agrilife Research and Extension, Texas A&M University, Temple, TX, USA. 13. Department of Statistics, University of Chicago, Chicago, IL, USA. 14. EAWAG, Swiss Federal Institute of Aquatic Science and Technology, Dübendorf, Switzerland. 15. Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France. 16. Department of Physical Geography and Ecosystem Science, Lund University, Lund, Sweden. 17. Earth Institute Center for Climate Systems Research, Columbia University, New York, NY, USA. 18. School of Geography, Earth and Environmental Sciences, University of Birmingham, Birmingham, UK. 19. Birmingham Institute of Forest Research, University of Birmingham, Birmingham, UK.
Sampling in variable space and cultivated area
Simulation sampling across the defined variable space is not uniform in the GGCMI Phase II experiment, with only some models providing all cases in the protocol. Figure S1 compares the sampling density of the models used in the emulator analysis. Figure S1: Heatmap illustrating the number of models providing simulations for each of the scenarios in CTWN variable space. Black boxes mark the "baseline" cases for rainfed and irrigated simulations. The maximum number is 9, the number of models included in the emulator analysis. (That is, we exclude here the three GGCMI Phase II models not included in the emulator analysis.) For cases with N levels lower than 200 kg/ha, the maximum number of models is 6, since three models (CARAIB, JULES, and PROMET) do not represent varying N levels. One model (GEPIC) provided additional simulations at T+5 not specified by the protocol; these are not used in emulation. Normalized error calculations are run only over scenarios in which 9 models contribute simulations (pink boxes). Figure S2: Presently cultivated area in the real world for rainfed (left) and irrigated (right) crops, from the MIRCA2000 dataset (Portmann, Siebert, and Doell, 2010). Data are taken directly from the MIRCA2000 dataset for maize, rice, and soy. Winter and spring wheat areas are adapted from MIRCA2000 and sorted by growing season.
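The per-scenario counts plotted in Figure S1 can be tallied directly from a listing of the available simulations. The sketch below illustrates the bookkeeping; the records and field names are hypothetical placeholders, not the actual GGCMI archive format.

```python
# Illustrative tally of model coverage per CTWN scenario, as plotted in the
# Figure S1 heatmap. Input records are hypothetical placeholders.
from collections import Counter

# One record per provided simulation: (model, CO2 ppm, dT in K, W level, N kg/ha)
simulations = [
    ("pDSSAT", 360, 0, "W0", 200),
    ("LPJmL", 360, 0, "W0", 200),
    ("EPIC-TAMU", 360, 0, "W0", 200),
    ("pDSSAT", 510, 2, "W-20", 200),
    # ... one tuple per model/scenario combination actually simulated
]

coverage = Counter((c, t, w, n) for _, c, t, w, n in simulations)
for scenario, n_models in sorted(coverage.items(), key=str):
    print(scenario, n_models)   # the value shown in each heatmap cell
```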
Variability changes in future climate projections
Because the GGCMI Phase II simulation dataset does not sample across changes in climate variability, large impacts on yields driven by future changes in variability would decrease the practical utility of the emulator for impact assessments. We therefore assess the scale of potential future changes in temperature variability in RCP8.5 simulations from the five climate models used in ISIMIP (the Inter-Sectoral Impact Model Intercomparison Project; Warszawski et al., 2014; Frieler et al., 2017). In manuscript section 4.3 we use one of these climate simulations, that from HadGEM2-ES, to assess the ability of GGCMI emulators to reproduce yield changes simulated under more realistic climate projections. We choose the HadGEM2-ES model because it shows the largest variability changes, and therefore provides a stricter test of the utility of a GGCMI emulator. Table S1 summarizes daily Tmax variability changes for each crop and model, weighted by production. Figures S3 and S4 below show changes in variability of minimum and maximum temperatures in the HadGEM2 simulation for each crop growing season and area, and Figure S5 shows changes in daily Tmax variability for maize across the 4 additional ISIMIP climate simulations. (Compare to Figure S3, upper left panel.) Most crop models included in GGCMI Phase II take daily minimum and maximum temperature as inputs, though PROMET and JULES take sub-daily temperature inputs. Table S1: Global production-weighted fractional change in growing-season daily maximum temperature variability under RCP8.5 for the five climate models included in the ISIMIP project (a subset of the CMIP5 archive). The value for each crop and model is the mean within-growing-season temperature standard deviation across 30 growing seasons of 2070-2099 relative to that for 1981-2010, with grid-cell values weighted by LPJmL-simulated yields and current cultivation area (MIRCA). Values in parentheses are the change in variability, by the same metric, for daily minimum temperature within the growing season. The HadGEM2-ES model is highlighted in bold because this model is used for our emulator evaluation in manuscript Section 4.3; it is chosen because it shows the highest changes in variability.
Model        Maize %     Soybean %    Rice %       S. Wheat %   W. Wheat %
HadGEM2-ES   9.7 (2.1)   10.4 (-0.6)  10.1 (-3.3)  6.4 (4.7)    3.6 (1.7)
GFDL-ESM2M   3.6 (0.9)   3.4 (0.6)    2.7 (-0.

To determine the change we compute the mean standard deviation of daily Tmin in each historical growing season and take the mean across all 30 years; this metric therefore includes changes both in seasonality and in short-term variations but excludes interannual variability and longer-term trends. For winter wheat, growing-season variability reductions reflect the dampening of the seasonal cycle (stronger warming in winter). Strong percentage increases in the tropics reflect very low variability in the baseline. Production-weighted mean changes across crops range from -3% for rice to +5% for spring wheat (Table S1).
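To make the Table S1 metric concrete, the following minimal sketch (hypothetical array and function names; the actual GGCMI processing code is not reproduced here) computes a production-weighted fractional change in within-growing-season daily temperature variability, assuming daily growing-season temperatures have already been extracted per grid cell:

```python
import numpy as np

def variability_change_pct(t_hist, t_fut, weights):
    """Production-weighted fractional change (in %) of within-growing-
    season daily temperature variability, as in Table S1.

    t_hist, t_fut : (years, days, cells) daily growing-season
        temperatures for 1981-2010 and 2070-2099 (30 seasons each).
    weights : (cells,) production weights (simulated yield times
        currently cultivated area per grid cell).
    """
    # Std within each growing season, then mean over the 30 seasons:
    # this captures seasonality and short-term variations but excludes
    # interannual variability and long-term trends.
    sd_hist = np.std(t_hist, axis=1).mean(axis=0)   # (cells,)
    sd_fut = np.std(t_fut, axis=1).mean(axis=0)     # (cells,)
    frac_change = sd_fut / sd_hist - 1.0            # per grid cell
    # Production-weighted global mean, reported as a percentage.
    return 100.0 * np.average(frac_change, weights=weights)
```

Applied to HadGEM2-ES growing-season Tmax fields, a calculation of this kind yields values on the order of those in Table S1 (e.g., roughly +10% for maize).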
Note that changes may differ if calculated using an ensemble of simulations rather than a single projection as is done here. Figure S4: As in Figure S3 except now for daily maximum temperature. Changes in daily maximum temperature variability are generally higher than those for daily minimum temperature. Figure S5: As in Figure S4, change in daily maximum temperature variability, except now for maize only, for the remaining 4 ISIMIP climate simulations. Values are lower on average than for HadGEM2-ES but patterns can differ.
Yield response for A1 (growing season adaptation) simulations
This section shows illustrations of the emulator's ability to capture yield changes in A1 simulations; compare to main text Figures 5 and 6, which show A0 simulations. Responses to the C, W, and N factors are similar in both, but responses to T are substantially weaker in A1 simulations, in which growing season length does not contract in warmer future conditions.

Figure S6: Illustration of spatial variations in yield response, which are successfully captured by the emulator for the A1 simulations. Panels show simulations (points) and emulations (lines) of rainfed maize in the pDSSAT model in six example locations selected to represent high-cultivation areas around the globe. The legend includes hectares cultivated in each selected grid cell. Each panel shows variation along a single variable, with others held at baseline values.

Figure S7: Illustration of variations in yield response across models for A1 simulations, again successfully captured by the emulator. Panels show simulations and emulations from six representative GGCMI models for rainfed maize in the same Iowa grid cell shown above, with the same plot conventions. Three models (PROMET, JULES, and CARAIB) that do not simulate the nitrogen dimension are omitted for clarity.
Normalized error for other cases
In manuscript Figure 7 we show normalized error for the A0 emulators over all rainfed crops, models, and T and W values for baseline CO2 and nitrogen levels (360 ppm and 200 kg ha-1). Here we show normalized error in some alternate cases for comparison: Figure S8, A0 emulators of rainfed crops at higher CO2; Figure S9, A1 emulators of rainfed crops at baseline values; and Figure S10, A0 emulators of irrigated crops at baseline values. Results are generally similar, with a few exceptions. Normalized errors at higher CO2 are generally lower because model disagreement is larger, lowering the denominator. Some model emulators for irrigation water demand under-perform: LPJ-GUESS and CARAIB for some crops. A1 errors are larger than A0 errors for several crops and models: LPJmL rice, pDSSAT spring wheat, and PROMET winter wheat.

Figure S8: Fraction of currently cultivated hectares with normalized emulation error less than 1 for the CO2 = 810 ppm and 200 kg N ha-1 yr-1 case, for the temperature and precipitation perturbation scenarios provided by all 9 models included in the emulator analysis.
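The exact definition of the normalized error is given in the main text; as a hedged illustration, the sketch below assumes it is the emulator error divided by the across-model spread of the simulated response (the "denominator" referred to above) and computes the area fraction plotted in Figure S8. Array names are hypothetical:

```python
import numpy as np

def fraction_well_emulated(y_sim, y_emu, area):
    """Area-weighted fraction of cultivated land with normalized
    emulation error < 1, per model.

    y_sim, y_emu : (models, cells) simulated and emulated yield
        changes for one T/W scenario.
    area : (cells,) currently cultivated hectares per grid cell.
    Assumes normalized error = |emulated - simulated| divided by the
    across-model standard deviation of the simulated response.
    """
    disagreement = np.std(y_sim, axis=0)             # (cells,)
    norm_err = np.abs(y_emu - y_sim) / disagreement  # (models, cells)
    good = norm_err < 1.0
    return (good * area).sum(axis=1) / area.sum()    # one value per model
```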
Emulation of yields in a realistic climate simulation at high latitude
In manuscript Section 4.3 we test the emulator against crop model simulations driven by a more realistic future climate projection, to evaluate the impact of future variability changes that are not captured by the emulator. Figure S11 below isolates the mid- and high latitudes; compare to manuscript Figure 9, which shows all currently cultivated land globally. Results are generally unchanged by the restriction in latitude except for rice, which is typically grown in the tropics and subtropics: only 20% of global rice production is grown north of 30N and 1% north of 45N, with even less in the Southern Hemisphere (only 0.8% south of 30S and none south of 45S).

Figure S11: Illustration of the ability of the emulator to capture a more realistic future climate simulation, as in main text Figure 9 but here restricted to latitudes north of 30N.
Emulator products
This section expands on manuscript Section 5 with additional figures analogous to manuscript Figures 10 and 11.

Figure S15: Illustration of the factors affecting yields in more realistic climate scenarios for rainfed and irrigated (current mix) soy. Conventions as in main text Figure 11. The split in the PROMET soybean temperature response (panel a; note the distinct groups of points) results from the model's sensitivity to differences in spatial patterns of temperature change across climate models.
Reduced specification (23-term) emulator examples
In this section we present figures analogous to those in the main text for the reduced-form (23-term) emulator. Issues with the reduced-form model are most prominent in PROMET for rice and soy, and in JULES for soy and spring wheat. We identify several potential factors that may contribute to these models showing qualitatively different responses that require additional terms for emulation.
• PROMET and JULES do not allow nitrogen variation. (However, CARAIB also cannot vary N and is readily emulatable with the 23-term specification.)
• Both JULES and PROMET are land system process models, originally developed with a broad focus, which have been adapted for managed vegetation (agriculture) only recently (2015). (CARAIB, by contrast, was originally developed as a vegetation model in the early 1990s and has a longer history of agricultural focus.)
• Both PROMET and JULES have anomalously strong responses to individual factors in those crops that are problematic to emulate. PROMET is the most sensitive of all the models for rice in C, T, and W, and JULES for soybeans in C, T, and W. For spring wheat, JULES is a high outlier in C, the most sensitive model in W and T, and shows an extra inflection point in the global temperature response not seen in any of the other models.
• PROMET is the quantitatively lowest-performing model for soybeans when compared to the historical FAO data for the top 10 producing countries.

Figure S16: As in manuscript Figure 4, simulated (a) and emulated (b) yield under historical conditions for rainfed LPJmL maize, but here for the reduced (23-term) emulator specification. Emulator performance is worse primarily where crops are not currently grown.

Figure S17: As in manuscript Figure 5, emulator performance in selected high-yield regions for rainfed pDSSAT maize (and one region for PROMET), but now with the reduced (23-term) emulator specification. Emulator performance is similar.

Figure S18: As in manuscript Figure 6, emulator performance across models for rainfed maize in one grid cell in Iowa, but now with the 23-term emulator specification. Note that JULES and PROMET are not shown.

Figure S19: As in manuscript Figure 7, normalized error of all 9 models emulated on currently cultivated land, over all crops and all sampled T and W inputs, with CO2 and nitrogen held fixed at baseline values, now with the reduced (23-term) emulator specification. Degradation of performance is most evident in JULES soy and spring wheat and in PROMET rice and soy.

Figure S20: As in manuscript Figure 8, normalized error for rainfed crops in CARAIB for the T+4 scenario, but here with the reduced (23-term) emulator specification. Degradation of performance is most evident in marginal lands where crops are not currently grown.

Figure S21: As in manuscript Figure 11, rainfed maize on currently cultivated land, but here with the reduced (23-term) emulator specification. Note that the strong C response for PROMET differs here from the full-form emulator, because higher-order C interaction terms (C^3, C^2*T, ...) are needed for accurate emulation.
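As an illustration of the role of the higher-order terms noted in the Figure S21 caption, the sketch below fits a generic per-grid-cell polynomial emulator in the CTWN inputs by ordinary least squares. The specific 23-term and full-form bases are defined in the main text and are not reproduced here; this generic version only shows how adding third-order terms such as C^3 or C^2*T enlarges the basis:

```python
import numpy as np
from itertools import combinations_with_replacement

def design_matrix(X, order=2):
    """Polynomial basis in the CTWN inputs up to the given order.

    X : (samples, 4) array of [C, T, W, N] values. Columns are the
    constant term, each input, and all products of inputs up to
    `order` (C*T, C**2, ...; order=3 adds C**3, C**2*T, ...).
    """
    cols = [np.ones(len(X))]
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_emulator(X, y, order=2):
    """Least-squares fit of the polynomial emulator for one grid
    cell; returns the coefficient vector."""
    A = design_matrix(X, order)
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coefs
```

Raising `order` from 2 to 3 is the kind of basis enlargement that restores accuracy for responses, like PROMET's C response, that the reduced specification cannot capture.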
Yield responses for other crops and models
Spatial patterns of yields are well captured for all crops and models. Manuscript Figure 4 illustrated this using LPJmL maize; for reference, we show here yield response spatial patterns for other crops and models.

Figure S24: Spatial yield response and emulator error for LPJmL for all 5 GGCMI Phase II crops. Conventions as in manuscript Figure 4.

Figure S25: Spatial yield response and emulator error for pDSSAT for maize. Conventions as in manuscript Figure 4. pDSSAT absolute yields are significantly higher than those in LPJmL but spatial patterns are similar.
Cross validation error for all models
In this section we present maps of cross validation error (values found in main text Table 3 are aggregated up from the grid cell level). Errors are generally low as a percentage of yield change in each grid cell. Errors above 10% of yield change in the out-of-sample test occur very rarely; the only significant instance is spring wheat in southern China in the PROMET model.
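For concreteness, a minimal leave-one-out cross-validation sketch at the grid-cell level, reusing `design_matrix` and `fit_emulator` from the sketch above (the actual protocol and normalization used for Table 3 are specified in the main text; the percentage normalization here, by the range of simulated yield change, is an assumption):

```python
import numpy as np

def loo_cv_percent_error(X, y, order=2):
    """Leave-one-out cross-validation error for one grid cell,
    expressed as a percentage of the simulated yield-change range.

    X : (samples, 4) CTWN inputs; y : (samples,) simulated yields.
    """
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i            # hold out sample i
        coefs = fit_emulator(X[keep], y[keep], order)
        pred = design_matrix(X[i:i + 1], order) @ coefs
        errs.append(abs(pred[0] - y[i]))
    return 100.0 * np.mean(errs) / (y.max() - y.min())
```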
"year": 2020,
"sha1": "b295b25078a03002c5c054db1b051132f224c5c6",
"oa_license": "CCBY",
"oa_url": "https://gmd.copernicus.org/articles/13/3995/2020/gmd-13-3995-2020.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "25c5059b79806face20ccd32f613ede708c237c8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Psychological ramifications of detraining effects in sportspersons amidst the COVID-19 pandemic: a consensus compendium
The coronavirus disease 2019 (COVID-19) pandemic has had a negative impact globally, affecting various domains of life, including the fields of outdoor sports and athletic activities. The postponement or cancellation of outdoor sports and athletic activities has resulted in detraining effects in sportspersons. These detraining effects may harm physical and mental health and may also cause loss of opportunities, financial concerns, and disruption of non-sporting activities. This review article highlights the possible detraining effects, their psychological consequences for sportspersons, and certain interventions that may help mitigate these effects during the pandemic and in its aftermath.
INTRODUCTION
The coronavirus disease 2019 (COVID-19) pandemic has caused havoc globally in various realms of life. This has called for vigorous action to address the global crisis, which is of importance to psychiatry and its allied fields. [1] The World Health Organization (WHO) advised restricting travel and trade to countries experiencing COVID-19 outbreaks. This move was essential to break the chain of transmission and safeguard public health. [2] Consequently, major events such as the Tokyo 2020 Olympic and Paralympic Games, the Indian Premier League, and several others have been postponed almost indefinitely or cancelled in this pandemic. [3,4] This has had a significant impact on the fields of outdoor sports and athletic activities. Several sportspersons have placed themselves in home quarantine or isolation, perhaps for the first time in their lives. There may be a myriad of effects on these individuals, including physical health issues, mental health issues, loss of opportunities, financial concerns, and disruption of non-sporting activities, especially for those who pursue parallel careers in academics, business, or other avenues. Even in normal times, sportspersons have been victims of psychological issues including depression, anxiety, eating and sleep-related disorders, attention-deficit/hyperactivity disorder (ADHD), stress related to overtraining, personality issues, and bullying and hazing. [5] In this brief review, the authors have attempted to compile a range of possible psychological and detraining consequences of the pandemic and interventions to tackle these issues.
DETRAINING EFFECTS ON SPORTSPERSONS: IMPACT ON PHYSICAL HEALTH AND PERFORMANCE
Detraining is "the partial or complete loss of training-induced adaptations, in response to an insufficient training stimulus". Its extent depends on the period of training cessation or insufficient training, which has a detrimental impact on various systems, mainly the cardiovascular and musculoskeletal systems. It is already known that long-term detraining, as in the current COVID-19 scenario, causes a significant reduction of maximal oxygen consumption (VO2 max), loss of recently developed gains in endurance capacity, and a marked loss of muscle strength and bulk. [6] The detraining effects may become worse where access to adequate facilities and to proper nutrition and diet has been compromised. Reduced skeletal muscle activity raises the risk of injuries in non-contact as well as contact sports like soccer. [7] Detraining effects are likely to lead to the emergence of psychological issues, or the exacerbation of pre-existing ones, during this trying time of the pandemic.
PROBABLE PSYCHOLOGICAL CONSEQUENCES ON SPORTSPERSONS DURING THE PANDEMIC
The pandemic has caused uncertainty in the lives of individuals and has had a negative psychological impact on persons all over the globe. Sportspersons too would have been similarly affected in this scenario. The repercussions of quarantine/isolation comprise the lack of structured training and competition, deficient communication of athletes with their peers and coaches, restriction of free movement, reduced exposure to sunlight, and unprofessional training conditions to which they have to adjust. [8] Disrupted training leading to detraining effects, reduced physical activity in general, separation from their respective teams or sports communities, reduced interaction with coaches or trainers, and relative disruption of the social support network comprising fans, fan clubs, institutions, media, and fitness centres may lead to psychological issues in them. [9] They may be affected by the fear of themselves or their family members contracting the infection, loneliness and boredom secondary to the physical and social restrictions of lockdown, and anxiety regarding physical revival and pandemic-related information. Apart from these, they may become victims of depressive symptoms, mood disturbances, disturbed eating and sleeping patterns, adjustment issues in new settings, obsessive-compulsive disorder, and acute stress (in those testing positive). [10,11] There may be cases of substance use or relapse, especially if poor coping strategies are being utilised to deal with psychological concerns. Certain athletes who are affected by the enduring effects of the pandemic may develop long-term mental health issues such as posttraumatic stress disorder. Detraining effects may lead to loss of confidence, reduced self-esteem, acute stress reaction, anxiety, depression, and enduring effects on the mental health of sportspersons. [12] Those who had previously been diagnosed with or treated for mental health disorders may have an exacerbation, worsening, or relapse, which can be due to existing stress or difficulty in following up for treatment or accessing treatment.
INTERVENTIONS AND MITIGATION STRATEGIES
Interventions for dealing with the aforementioned issues are necessary for sportspersons not only to survive the current pandemic but also to deal with issues in the post-pandemic era, as sports and game-related events will not be able to restart soon in the same way as before. Currently, as most sportspersons may be residing at their homes, it would be difficult for them to receive direct or the usual interventions. Interventions should focus on the physical domain, sports-related domain, social domain, and psychological domain, and be holistic in nature. These interventions need to be planned in such a way that the individuals themselves, team members, the interdisciplinary team, sports governing bodies, as well as the government are brought into action.
Awareness regarding mental health issues and physical concerns should be made available to individuals as well as at other levels. This may be done by concerned authorities or sports boards/committees even while the individuals are at home, using video conferencing platforms or other digital applications. Periodic support group discussions can be facilitated for sportspersons as well as support staff. The support group should identify and address key issues of the sportspersons and help in each player's individual growth. [13] The support group can also plan training on healthy adaptive coping strategies for the sportspersons with the support of existing mental health professionals. This time of forced isolation is a time to introspect on past mistakes, analyse present opportunities, and reset priorities for a bright future. Individuals also need to be encouraged to keep in touch with their trainers, coaches, and team members through telephone, text messaging applications, and video conferencing platforms. Those who have psychological issues can liaise with mental health professionals or sport psychologists through teleconsultations or national helplines. [14] The following interventions may play a vital role in supporting sportspersons during the pandemic:
Psychotherapeutic interventions
Athletes have almost the same probability of developing mental health illnesses as the general population. [15] Depression, in particular, is considered to affect athletes and non-athletes equally; however, in athletes it can be sparked by peculiar factors such as poor training, over-training, or retirement. Psychological first aid may be provided to those acutely affected by mental health issues. Techniques such as the body scan, deep breathing, and relaxation techniques may be recommended to affected individuals as they are easy to follow. [12] Individuals who have psychological mindedness may be offered cognitive behavioural therapy (CBT), supportive therapy, mindfulness-based interventions, and meditation-based yoga. [15,16] Powerful mental tools like meditation and autogenic training are useful for stress and anxiety management. [17] It has been found that the use of mental and motor imagery is useful in preventing detraining effects as well as in rehabilitation, by activating certain brain areas associated with actual training even in the absence of the physical stimulus. [18] Group therapy and family therapy may be useful based on the presentation of psychosocial symptoms. Providing psychotherapeutic interventions to athletes may become difficult and challenging at times because of certain traits they possess, such as aggression and narcissism, and their level of psychological mindedness for these interventions. [15]

Balancing physical health and sport endurance
Minimalist training
Maintenance of fitness with minimal equipment and facilities may be ensured by the following methods:
Elastic resistance bands
It is a cheap and effective way to maintain muscular strength and flexibility by using colour-coded bands of varying resistance. [19]

Plyometric training

These are exercises utilising the stretch-shortening cycle, such as variations of box jumps, depth jumps, and bounding exercises, especially useful in the maintenance of power and explosive strength for the upper and lower body, but requiring due precautions. [20]
High-intensity interval training (HIIT)
It consists of high-intensity exercise bouts interspersed with rest periods (e.g. 30 seconds of high-intensity activity followed by 30 seconds of rest, repeated for a total of seven minutes). It is helpful in maintaining or enhancing cardio-respiratory fitness. [21]
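As a simple illustration of how such a protocol could be scripted for self-guided or tele-supervised sessions (the 30 s/30 s and seven-minute parameters below simply mirror the example above; any coach-prescribed work:rest ratio can be substituted), a minimal interval timer in Python:

```python
import time

def hiit_timer(work_s=30, rest_s=30, total_min=7):
    """Minimal HIIT interval timer: alternates work and rest bouts
    (e.g. 30 s on / 30 s off) until the session time is reached."""
    end = time.monotonic() + total_min * 60
    bout = 1
    while time.monotonic() < end:
        print(f"Bout {bout}: work for {work_s} s")
        time.sleep(min(work_s, max(0.0, end - time.monotonic())))
        if time.monotonic() >= end:
            break
        print(f"Bout {bout}: rest for {rest_s} s")
        time.sleep(min(rest_s, max(0.0, end - time.monotonic())))
        bout += 1
    print("Session complete")
```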
Body weight training (calisthenics)
It is any exercise that uses the body as a means of resistance to perform work against gravity. It involves minimal use of equipment and has to be progressive in nature to achieve fitness goals. [22]
Tele-workouts
This is similar to telemedicine, in which coaches or fitness trainers can prescribe and monitor workouts using digital interfaces like smartphones, tablets, or laptops.
Social bubbles
These are limited social contacts beyond the household that are maintained and allowed to breach physical distancing measures without substantially increasing the risk of transmission, provided they are managed properly. [23] This may satisfy the needs of athletes to engage in practice, as in combat sports or team sports where a partner is a must for training.
Connectivity, learning, and entertainment
The use of newer video calling apps helps in staying connected with near and dear ones. Massive Open Online Course (MOOC) learning platforms are useful for spreading knowledge and utilising the extra time. Social networking platforms are a boon, although they can be addictive if misused.
Connecting with the community
Athletes as a community must take active initiatives to connect with like-minded individuals facing similar sets of problems and find common solutions customised to the context. [24] This connectivity can be augmented by the use of technology, including video conferencing platforms and other digital applications.
EPILOGUE
Studies on the physical, psychological, social, and ecological aspects of sportspersons' health are limited, especially in relation to the current pandemic. There is a need for further studies, both general and pandemic-specific, focusing on evidence-based interventions. Future research should focus on detraining effects and the physical and psychological concerns of athletes from different sport domains, cultures, and countries.
The impact of the pandemic on sportspersons' health needs special attention. Services of mental health professionals need to be utilised by concerned sports authorities to ensure positive mental health and support for sportspersons. The rising popularity of telepsychiatry consultations can help manage psychological issues in an effective manner. [25] There is an urgent need to develop preventive and promotive measures to reduce the morbidity associated with detraining effects, with the support of sports authorities and policy makers. Digital-based platforms can be used to support sportspersons and support staff during the pandemic and also in its aftermath.
"year": 2021,
"sha1": "69ac0c3d5c5076107e062e21e161de14127da5ca",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.5958/2394-2061.2021.00008.2",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "69ac0c3d5c5076107e062e21e161de14127da5ca",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Pharmacological Rationale for Targeting IL-17 in Asthma
Asthma is a respiratory disease that currently affects around 300 million people worldwide and is defined by coughing, shortness of breath, wheezing, mucus overproduction, chest tightness, and expiratory airflow limitation. Increased levels of interleukin 17 (IL-17) have been observed in sputum, nasal and bronchial biopsies, and serum of patients with asthma compared to healthy controls. Patients with higher levels of IL-17 have a more severe asthma phenotype. Biologics are available for T helper 2 (Th2)-high asthmatics, but the Th17-high subpopulation has a relatively low response to these treatments, rendering it a rather severe asthma phenotype to treat. Several experimental models suggest that targeting the IL-17 pathway may be beneficial in asthma. Moreover, as increased activation of the Th17/IL-17 axis is correlated with reduced inhaled corticosteroids (ICS) sensitivity, targeting the IL-17 pathway might reverse ICS unresponsiveness. In this review, we present and discuss the current knowledge on the role of IL-17 in asthma and its interaction with the Th2 pathway, focusing on the rationale for therapeutic targeting of the IL-17 pathway.
INTRODUCTION
Asthma is a respiratory disease that currently affects around 300 million people worldwide and is defined by coughing, shortness of breath, wheezing, mucus overproduction, chest tightness, and expiratory airflow limitation (1). Asthma treatments are categorized as controller medications (anti-inflammatories alone or in combination with long-acting bronchodilators), reliever medications (bronchodilators), and add-on therapies for patients with severe asthma (1). Asthma that is uncontrolled with a high dose of inhaled corticosteroids (ICS) and a second controller (long-acting inhaled β2 agonists, montelukast, and/or theophylline) and/or systemic CS for at least 6 months is defined as severe asthma (2). A comprehensive review of the current understanding of severe asthma and its treatment has been compiled by Israel and Reddel (3). The current add-on treatment options for severe asthma are a long-acting muscarinic antagonist (LAMA), a leukotriene modifier, low-dose azithromycin, and low-dose oral corticosteroids (OCS), as well as biological drugs such as anti-IgE, anti-IL-5/IL-5R, or anti-IL-4R for type 2 severe asthma (1, 3). Asthma is characterized by airway hyperresponsiveness (AHR) associated with chronic airway inflammation (1). Several types of airway inflammation have been recognized in the pathobiology of asthma (4, 5). Type 2 inflammation is well-studied and characterized by its downstream (granulocyte-macrophage colony-stimulating factor (GM-CSF), IL-3, IL-4, IL-5, IL-9, and IL-13) and upstream (thymic stromal lymphopoietin (TSLP), IL-25, and IL-33) cytokines. This Th2-high endotype is linked to orchestration of eosinophil biology and is considered an important type of inflammation in a significant subpopulation of asthmatics (4, 6). Other mechanisms include Th1 inflammation, importantly mediated by the production of interferon-gamma (IFN-γ), and Th17 inflammation, characterized by IL-17A, IL-17E, IL-17F, and IL-22 cytokine production, which leads to neutrophil activation via IL-8 (6). Both Th2 and Th17 pathways can stimulate airway inflammation, tissue fibrosis, and AHR. However, Th17-dependent inflammation is considered to be less sensitive to steroids, which constitute the primary anti-inflammatory treatment for asthma control (7, 8).
Increased levels of IL-17 have been observed in sputum, nasal and bronchial biopsies, and serum of patients with asthma compared to healthy controls (9-13). Moreover, expression is more pronounced in moderate-to-severe than in mild asthmatics (10, 13-16). In mild-to-moderate asthmatics, the level of IL-17A in the airway submucosal layer was significantly increased, whereas IL-17F was higher in both mild-to-moderate and severe asthma (13). Patients with higher levels of IL-17 are classified as having Th17-high inflammation. So far, there is no biologic treatment for Th17-high asthmatics, as opposed to the effective biologics for Th2-high disease. In fact, the Th17-high subpopulation has a relatively low response to Th2 biologics, rendering this a relatively more severe asthma phenotype (10, 11, 14, 15, 17, 18).
Th17-high asthma is often referred to as neutrophilic asthma. During the onset of asthma, stimulation of the Th17/IL-17A axis leads to the release of neutrophil chemoattractants, and the resulting accumulation of neutrophils in the airways stimulates the development of neutrophil-based asthma of increasing severity (14, 19). All the above indicates that IL-17 exerts multiple effects in the progression of asthma that differ from the classical and more treatable Th2 types of the disease. As there are no anti-inflammatory treatment options available for Th17-high patients, there is an unmet clinical need for effective therapeutic strategies targeting Th17-driven asthma.
The initially defined IL-17R, IL-17RA, is a type I transmembrane protein that consists of an extracellular domain, a transmembrane domain, and a cytoplasmic tail (20,24). Interleukin-17RA serves as a co-receptor for IL-17A and IL-17F, the two best-known members of the IL-17 family (21). In humans, IL-17RA gene expression has been identified in B and T lymphocytes, epithelial cells, fibroblasts, smooth muscle cells, macrophages, bone marrow stromal cells, monocytes, and vascular endothelial cells (30,31). This might explain the broad influence of IL-17 on the regulation of normal physiological as well as pathological responses (32).
The molecular control of IL-17 signaling is depicted in Figure 1. In the canonical pathway, upon binding to the IL-17R, IL-17 recruits Act1 to the receptor via interaction with SEF/IL-17R (SEFIR), a conserved region of the receptor in the cytoplasmic tail (32). Act1 further recruits several TNF receptor associated factors (TRAFs) needed for IL-17 transcriptional and posttranscriptional regulation (32). The mechanism involved in transcription is dependent on the inclusion of TRAF6 that activates the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), CCAAT/enhancer binding protein (C/EBP)β, C/EBPδ, and mitogen-activated protein kinase (MAPK) pathways. Recruitment of the TRAF2-TRAF5 complex occurs posttranscriptionally resulting in messenger ribonucleic acid (mRNA) stabilization by stimulating mRNA stabilizing factors and/or by inhibiting mRNA destabilizing factors, thereby maintaining the translation of IL-17 target genes (21,32). The noncanonical IL-17 pathway was recently found to functionally interact with epidermal growth factor receptor (EGFR), fibroblast growth factor 2 (FGF2), NOTCH1, and C-type lectin receptor components (32). In addition, IL-17 is also known to act synergistically with other activators of NF-κB (TNF-α), STAT1 (IFN-γ), STAT6 (IL-13), and small mothers against decapentaplegics (SMADs) (TGF-β) (32).
The downstream effects of IL-17 signaling activation mainly occur via attracting immune cells, notably neutrophils (25, 30, 32). The IL-17/IL-17R signaling axis in lung epithelial cells induces the expression of granulocyte-colony stimulating factor (G-CSF) and neutrophil recruiting chemokines, such as C-X-C ligand 1 (CXCL1), CXCL2, and CXCL5 (33). These downstream effects include normal physiological responses, such as host defense regulation and tissue repair, as well as several pathological responses, including the development of several types of cancer (carcinoma, adenoma), aggravation of autoimmune diseases (psoriasis, rheumatoid arthritis, ankylosing spondylitis), asthma, and chronic obstructive pulmonary disease (COPD) (32).

FIGURE 1 | IL-17 signaling pathway. Th0 cells are differentiated into Th17 cells by stimulation with TGF-β, IL-6, and IL-21. Th17 cells are matured and maintained by IL-23. IL-23 also triggers Th17 cells to secrete IL-17. In the canonical pathway, upon binding to the IL-17R, IL-17 recruits Act1 to the receptor via interaction with SEFIR. Act1 further recruits several TRAFs needed for IL-17 transcriptional and posttranscriptional regulation. The inclusion of TRAF6 activates the NF-κB, C/EBPβ, C/EBPδ, and MAPK pathways, which then activate gene transcription (transcription phase). Activation of NF-κB is also carried out via JAK2 activation. Recruitment of TRAF2-TRAF5 results in mRNA stabilization by stimulating mRNA stabilizing factors and/or by inhibiting mRNA destabilizing factors, thereby maintaining the translation of target genes (posttranscription phase). In the noncanonical pathway, the IL-17 complex functionally interacts with EGFR, FGF2, NOTCH1, and a C-type lectin component, resulting in enhanced molecular signaling in different cell types. The downstream effects of IL-17 signaling are mainly via the secretion of cytokines, which then affect inflammatory cells such as neutrophils and structural cells, e.g. epithelial cells, smooth muscle cells, and fibroblasts. IL-17, interleukin 17; IL-17R, IL-17 receptor; SEFIR, SEF/IL-17R; TRAFs, TNF receptor associated factors; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; C/EBPβ/δ, CCAAT/enhancer binding protein β/δ; JAK2, Janus kinase 2; MAPK, mitogen-activated protein kinase; mRNA, messenger ribonucleic acid; EGFR, epidermal growth factor receptor; FGF2, fibroblast growth factor 2; NOTCH1, NOTCH homolog 1. Created with BioRender.com.
Although the presence of this gene signature has not yet been studied in asthma, the gene signature was further analyzed to evaluate the significance of its level of expression with regard to clinical characteristics of patients in two COPD studies, GLUCOLD and SPIROMICS (8). The IL-17 gene signature was found to be correlated with airway tissue neutrophils, airway tissue macrophages, and sputum neutrophils (8). Interestingly, it was negatively associated with the change in forced expiratory volume in 1 s (FEV1) % predicted in all treatment arms, and notably in patients receiving corticosteroids (CS), suggesting that the IL-17 gene signature not only represents a potential tool to assess airway disease severity but may also predict CS responsiveness (8).
Although both Th2 and Th17 pathways are able to stimulate airway inflammation, tissue fibrosis, and AHR, only Th2 cell-mediated effects are CS sensitive (7, 39). In fact, treatment with CS further elevated the influx of neutrophils in the lung and upregulated the IL-17-inducible chemokines CXCL3 and CXCL1 in a preclinical model of allergen-induced asthma (39). Thus, patients with predominantly Th17-driven airway inflammation are likely less sensitive or even unresponsive to CS treatment.
Interestingly, the Th2 and Th17 pathways have been reported to regulate each other. Several studies showed that IL-13-induced IL-17 downregulation is mediated through the JAK/STAT6 signaling pathway (7, 40-42). In a murine model of asthma, neutralization of the Th2 cytokines IL-4 and IL-13 increased Th17 cell number and neutrophilic inflammation in the lung (7). Accordingly, mice with IL-4R alpha deficiency exhibited elevated levels of IL-17 and decreased eosinophil recruitment into the airways (43). Furthermore, human Th17 cells express IL-13Rα1, and stimulation with IL-13 prevented IL-17 secretion from these cells (41).
The influence of IL-17 on the IL-13 pathway is more complex. Interleukin-17A has been shown to moderately suppress several IL-13-inducible genes (POSTN, CLCA1, SERPINB2) in human bronchial epithelial cells (7). Moreover, asthmatics with high expression of the IL-17 gene signature (Th17-high) have low IL-13 gene signature expression and vice versa; Th2-high asthmatics presented a depleted IL-17 gene signature, suggesting a reciprocal relationship between the pathways (7, 44). Furthermore, IL-17 decreased pulmonary eosinophil recruitment by downregulating the eosinophil chemokine eotaxin (CCL11) and thymus- and activation-regulated chemokine/CCL17 (TARC), as well as by reducing IL-5 and IL-13 production in murine lung (43). This implies that the effect of IL-17 is not limited to IL-13 expression and also affects other downstream cytokines in the Th2 pathway.
Interestingly, simultaneous induction of IL-17A and IL-13 in a murine asthma model resulted in increased AHR compared to stimulation with IL-13 alone, whereas IL-17A alone had no effect (40). This indicates a synergistic interaction of these cytokines in AHR (40). Indeed, neutralization of both IL-17A and IL-13 inhibited eosinophilia and neutrophilia, mucus hyperplasia, and AHR in a preclinical model of allergen-induced asthma (7,45). Therefore, targeting IL-17 and IL-13 pathways concurrently may result in a more effective treatment strategy.
In air-liquid interface (ALI)-grown primary human epithelial cells, inhibition of heat shock protein 90 (HSP90) prevented goblet cell metaplasia stimulated by IL-13 or IL-17A (46). HSP90 is a cellular protein-folding factor involved in several physiological and pathological processes (47). The inhibitory effect was postulated to be due to interference with signaling driven by erythroblastic leukemia viral oncogene (ERBB) receptors (ERBB1/EGFR and ERBB2-4), TGF-β, nuclear receptor coactivator 3 (NCOA3)/SRC3, and ets homologous factor (EHF), which are all relevant factors in IL-13- as well as IL-17-induced goblet cell metaplasia (46). In addition, in a murine asthma model, Notch4 expression stimulates regulatory T (Treg) cell destabilization in the peripheral blood and induces their differentiation toward Th17 and Th2 cell fates via the Hippo pathway and the Wnt axis, respectively (48). Furthermore, Notch4 expression in Treg cells increases with asthma severity (48). These observations suggest that IL-13 and IL-17 signaling share upstream regulators, which would allow for the targeting of both inflammatory pathways via a shared regulator.
THE RELATIONSHIP BETWEEN INTERLEUKIN-17 AND NEUTROPHILIC ASTHMA
Neutrophilic airway inflammation is defined as an inflammatory condition with sputum neutrophil cutoffs ranging from ≥60 to ≥76% (6). In asthma, this type of inflammation is related to reduced sensitivity to steroid treatment, lower forced vital capacity (FVC) % predicted, reduced FEV 1 reversibility, and higher disease severity compared to patients with nonneutrophilic asthma (16,49). The level of sputum neutrophils is one of the most discriminating factors in phenotyping patients with more severe asthma (50,51). Although it is estimated that around 50% of severe asthma cases are neutrophilic, no therapeutic approach is available to specifically target the neutrophilic inflammation in asthma (6,49).
The level of IL-17 in the airways is correlated with sputum (8, 11) and bronchial (15) neutrophil counts. The mechanism by which IL-17 regulates neutrophil activities has been increasingly studied over the past decade. It has been demonstrated that IFN-γ plus lipopolysaccharide (LPS)-stimulated neutrophils induce the production of CCL2 and CCL20, which bind to CCR2 and CCR6, respectively, on Th17 cells (52). These interactions resulted in Th17 activation that could be effectively inhibited by anti-CCL20 and anti-CCL2 antibodies (52). Moreover, in a mouse model of allergic asthma combined with Haemophilus influenzae infection, Th17 cell differentiation and neutrophil influx into the airway were significantly induced, resulting in allergic neutrophilic asthma features that were steroid resistant (53, 54). These data suggest that IL-17 might play an important role in neutrophilic asthma associated with airway infection. Indeed, neutrophilic asthmatics have a lower diversity of sputum microbiota compared with eosinophilic asthmatics, which correlates with airway infection in patients with asthma (55). Furthermore, patients with neutrophilic asthma harbor more pathogenic bacteria, i.e., Haemophilus and Moraxella taxa, and fewer common airway microorganisms, such as Gemella, Porphyromonas, and Streptococcus taxa, compared to those with eosinophilic asthma (55).
Conversely, activated Th17 cells affect neutrophils by promoting the production of CXCL8/IL-8, a well-known neutrophil chemoattractant, in the microenvironment and via GM-CSF, TNF-α, and IFN-γ release (52). This suggests reciprocal crosstalk between neutrophils and Th17 cells via these chemokines. However, neutrophils do not express IL-17RC; therefore, they cannot be activated directly by either IL-17A or IL-17F produced by Th17 cells (52). The effect of IL-17 on neutrophils is rather indirect, mediated by the activation of structural cells, including airway epithelial cells, fibroblasts, and airway smooth muscle (ASM) cells, to produce the above-mentioned cytokines and chemokines that in turn interact with neutrophils (33, 42, 43, 56-60).
DISEASE SEVERITY AND INTERLEUKIN-17
The prevalence of severe asthma ranges from 3.6 to 6.1% in adult population-based asthma cohorts and is associated with an age over 50 years, nasal polyposis, impaired lung function, sensitization to mold, and female gender (51). Asthma that is uncontrolled with a high dose of ICS and a second controller (long-acting inhaled β2 agonists, montelukast, and/or theophylline) and/or systemic CS for at least 6 months is defined as severe asthma (2). Uncontrolled asthma is clinically characterized by inadequate symptom control, frequent severe exacerbations, and airflow limitation despite bronchodilator use (2). Airflow obstruction in severe asthma might be due to structural alterations in the airway (airway remodeling) (3). Severe asthmatics have a higher risk of asthma exacerbation and morbidity (3).
The challenging task of targeting severe asthma is further complicated by the complex and diverse pathobiology of asthma. Phenotyping, which integrates biological and clinical characteristics, has been used to categorize asthma into several clusters (2); these represent the range of disease severity from mild-to-moderate asthma with predominantly eosinophilic inflammation to moderate-to-severe asthma with neutrophilic or mixed granulocytic inflammation (3,50,61).
Several studies have reported higher levels of IL-17, notably IL-17A and IL-17F, in sputum, nasal and bronchial biopsies, and blood of patients with severe asthma as compared to those with mild asthma (10, 11, 14, 15, 18). In fact, an IL-17 level of 20 pg/ml in serum was identified as an independent risk factor for severe asthma (14). Likewise, histological expression of IL-17F exceeding values of 23 cells/mm² for bronchial and 19 cells/mm² for nasal biopsies could be used to discriminate between mild and severe asthmatics (15). These findings suggest that IL-17 can be considered a biomarker for severe asthma.
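For illustration only, the cutoffs reported above can be expressed as a simple screening check (hypothetical function and argument names; this is a sketch, not a validated clinical decision tool):

```python
def suggests_severe_asthma(serum_il17_pg_ml=None,
                           bronchial_il17f_cells_mm2=None,
                           nasal_il17f_cells_mm2=None):
    """Illustrative check against the cutoffs reported above:
    serum IL-17 >= 20 pg/ml, bronchial IL-17F > 23 cells/mm2,
    nasal IL-17F > 19 cells/mm2. None means 'not measured'; any
    exceeded cutoff flags the profile as suggestive of severe asthma."""
    return any([
        serum_il17_pg_ml is not None and serum_il17_pg_ml >= 20,
        bronchial_il17f_cells_mm2 is not None and bronchial_il17f_cells_mm2 > 23,
        nasal_il17f_cells_mm2 is not None and nasal_il17f_cells_mm2 > 19,
    ])
```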
The presence of IL-17 in asthma has been demonstrated in several preclinical and clinical studies. Interleukin-17-related cytokine expression was upregulated in nasal and bronchial biopsies of neutrophilic asthmatic patients just prior to an exacerbation, indicating a possible role of IL-17F in frequent exacerbators (15). This was in line with observations by Östling et al., who showed that IL-17-high asthmatics were at risk of frequent exacerbations (44).
More severe AHR is another hallmark feature of severe asthma. In a study that measured AHR by quantifying the response to methacholine in asthmatic children, AHR was reported to be positively correlated with serum IL-17A as well as the number of Th17 cells and the Th17/Treg ratio in peripheral blood mononuclear cells (PBMCs) (12). In a mixed Th2/Th17 mouse model of steroid-insensitive asthma, IL-17A was shown to be an independent contributor to AHR (62). In addition, the presence of Th17 cells in mice resulted in increased AHR that could not be resolved with dexamethasone treatment, whereas inhibition of IL-17A was effective (39, 40). These studies also showed that the effect of the IL-17 pathway on AHR was associated with the number of neutrophils, suggesting regulation by the IL-17-neutrophil axis (39, 40).
Furthermore, an association of IL-17 with lung function and steroid sensitivity is evident. Thus, the FEV1 value of asthmatics negatively correlated with the expression of IL-17F in bronchial and nasal biopsies as well as the number of Th17 cells in serum, serum IL-17A, and the Th17/Treg ratio (12,15).
EFFECTS OF INTERLEUKIN-17 ON AIRWAY REMODELING
Airway remodeling is an important feature of severe asthma and contributes to lung function reduction and obstruction of airflow (63). It is characterized by goblet cell hyperplasia, elevated submucosal extracellular matrix (ECM) deposition, thickening of the reticular basement membrane (RBM), ASM cell hyperplasia and tissue hypertrophy, and changes in the bronchial microvasculature (64-66).
Changes in the airway epithelium are considered hallmark features of airway remodeling. These changes include epithelial-to-mesenchymal transition (EMT), which is a phenotypic conversion of epithelial cells characterized by cell disaggregation as well as reduced epithelial (e.g., E-cadherin) and increased mesenchymal (e.g., vimentin, α-SMA) marker expression (67). Interleukin-17A has been shown to induce EMT of primary murine bronchial epithelial cells by inhibiting E-cadherin expression and stimulating vimentin expression via NF-κB activation (58). Interleukin-17A has also been shown to affect mucus production by the airway epithelium. Exposure to IL-17A increased IL-13-stimulated expression of the goblet cell hyperplasia marker chloride channel, calcium activated 3 (CLCA3) in mouse lungs via enhanced IL-13-induced STAT6 signaling (68). In line with these findings, a study using a mouse asthma model of Th17 inflammation showed a correlation between IL-17A expression and goblet cell number (69). Furthermore, IL-17A induced MUC5AC gene and protein expression through NF-κB activation in differentiated primary human bronchial epithelial cells cultured at ALI and in a human bronchial epithelial cell line (HBE1) (70). These studies strongly suggest that IL-17A elevates mucus production. Another epithelial response to IL-17 is related to ECM production. Hyperreactivity of STAT3 in T-lymphocytes resulted in the expansion of Th17 cells in murine lung parenchyma and overexpression of matrix metalloproteinase-9 (MMP9) (71). Matrix metalloproteinase-9 stimulates the degradation of ECM proteins, including collagen and laminin, which is consistent with airway epithelial remodeling.
Interleukin-17A also exerts its influence on fibroblast function. Primary mouse lung fibroblasts stimulated with IL-17A in vitro showed elevated TGF-β1 secretion and procollagen1a2 (proCol1a2) expression (69). The TGF-β1 secretion by fibroblasts was further increased after co-stimulation with IL-17A and wingless-type mouse mammary tumor virus (MMTV) integration site family, member 5A (Wnt5a), suggesting the involvement of an IL-17A/Wnt5a/TGF-β1 axis mediating the effects of lung fibroblasts on airway remodeling (69). In agreement with these studies on murine fibroblasts, supernatant of IL-17A-exposed primary human parenchymal fibroblast cultures increased collagen synthesis and TGF-β1 secretion in primary human lung fibroblasts (72). Moreover, fibroblast proliferation induced by these supernatants was attenuated by anti-TGF-β1, indicating that IL-17A-stimulated fibroblast activation was mediated by autocrine TGF-β1 expression (72).
In another study, normal human lung fibroblasts expressed IL-17RA and proliferated in response to 1 and 10 ng/ml IL-17A (73). Stimulation with IL-17A also increased α-SMA expression, indicating fibroblast-to-myofibroblast transdifferentiation. Moreover, when cells were cultured on soft (polyacrylamide) gels, IL-17A stimulated ECM deposition (collagen type I and fibronectin) in primary human lung fibroblasts (73). These responses were mediated via NF-κB signaling, and inhibition of JAK2, but not JAK1/3, prevented these fibrogenic responses (73). Interestingly, IL-23, a known regulator of IL-17 expression, also employs JAK2 in its mechanism (25). Furthermore, IL-17A upregulated fibronectin and collagen-III protein expression in primary normal human parenchymal fibroblasts but not in normal human bronchial fibroblasts, indicating that IL-17A effects on ECM production are dependent on fibroblast phenotype (60).
Airway smooth muscle tissue hypertrophy, a result of cellular hyperplasia and/or hypertrophy, represents a prominent feature of airway remodeling (64). In one study, a mouse model of mixed Th2/Th17 asthma was developed by intranasally transferring allergen-pulsed, LPS- and adenosine 5'-triphosphate (ATP)-activated dendritic cells. In this model, IL-17A production was correlated with α-SMA, a marker for mesenchymal cells, and with ASM thickness (69). The expression of Th17-related cytokine receptors, such as IL-17RA, IL-17RC, and IL-22R1, has been detected in primary human ASM cells (74). The corresponding cytokines, IL-17A, IL-17F, and IL-22, were shown to promote ASM cell migration in a dose-dependent manner, which could be inhibited by blockade of these receptors, implying receptor-dependent effects (74). Furthermore, the IL-17A and IL-17F responses could be partially prevented by a p38 MAPK inhibitor, whereas IL-22-stimulated effects were attenuated by NF-κB inhibition (74). Interleukin-17A, IL-17F, and IL-22 induce proliferation of primary human ASM cells as well (75). These effects were mediated by ERK 1/2 MAPK for IL-17A and IL-17F, and by both ERK 1/2 MAPK and NF-κB signaling for IL-22. These cytokines were also shown to decrease apoptosis and promote cell survival; in addition to the effects on migration and proliferation, this could potentially contribute to ASM mass thickening (75).
INTERLEUKIN-17 AND STEROID SENSITIVITY IN ASTHMA
Inhaled corticosteroids constitute the cornerstone controller therapy in the treatment of asthma of all severities (1). They exert their anti-inflammatory effect via transcriptional and posttranscriptional mechanisms (76). The transcriptional mechanisms of ICS interfere with pro-inflammatory and anti-inflammatory genes that are induced by inflammation. Upon binding to glucocorticoid receptors (GRs), predominantly GRα, ICS form a heterodimer complex that translocates into the nucleus and binds to a DNA recognition site known as the glucocorticoid response element (GRE) in the promoter regions of steroid-responsive genes (76, 77). This induces transcriptional coactivator molecules, such as cyclic AMP element binding protein (CREB), to acetylate core histones, resulting in activation of anti-inflammatory gene transcription (76, 77). On the other hand, ICS suppress transcription of inflammatory genes via interaction with pro-inflammatory transcription factors, such as NF-κB and activator protein 1 (AP-1), which reverses histone acetylation and prevents pro-inflammatory gene transcription (76). Inhaled corticosteroids also inhibit the MAPK pathway via MKP-1 stimulation, resulting in the blockade of the expression of several pro-inflammatory genes. Posttranscriptionally, ICS promote the degradation of pro-inflammatory mRNA that was previously stabilized by certain pro-inflammatory cytokines, thereby alleviating inflammation (76).
With a more advanced understanding of asthma pathophysiology, particularly with regard to ICS responsiveness, there is a growing body of research on reduced ICS sensitivity in asthma. Failure of ICS to adequately inhibit inflammatory molecular and/or clinical features can be caused by several mechanisms, including GR modification, increased GRβ expression, increased pro-inflammatory transcription factors (e.g., NF-κB and AP-1), immune mechanisms (e.g., elevated Th17 activity, decreased Treg activity), and defective histone acetylation and deacetylation (76, 78). Detailed reviews on the mechanisms of reduced ICS sensitivity in asthma are available (76, 78).
Interleukin-17 has been associated with reduced steroid sensitivity in inflammatory diseases, including asthma. An in vitro study showed that Th17 cells differentiated from naïve CD4+ T cells from antigen-specific TCR-transgenic mice are unresponsive to dexamethasone and continue to produce IL-17A and IL-22 despite successful translocation of GRα into the nucleus (39). Transfer of these Th17 cells into mice challenged with ovalbumin was sufficient to induce increased CXC chemokine secretion, neutrophil infiltration, and AHR, all of which were less responsive to dexamethasone treatment (39).
As indicated, Th17-high asthma is associated with neutrophilic airway inflammation. In a mouse model of acute exacerbation of chronic asthma, it was shown that although dexamethasone treatment suppressed airway inflammation associated with eosinophil and T-lymphocyte recruitment, it did not prevent the influx of neutrophils or the development of AHR, suggesting that Th17/neutrophilic inflammation is more resistant to dexamethasone treatment (79). Furthermore, in this model, dexamethasone inhibited histone acetyltransferase (HAT) activity in the lungs but failed to reverse the increased NF-κB activity and reduction of histone deacetylase-2 (HDAC2), which is one of the mechanisms of reduced ICS sensitivity (79).
The potential mechanisms underlying the effects of IL-17 on reduced steroid sensitivity have also been evaluated in vitro using human cells. Irvin et al. found that Th2/Th17 cells in the bronchoalveolar lavage (BAL) from asthmatic patients, which presented with higher IL-17 levels in the BAL, were resistant to dexamethasone-induced cell death. In addition, the expression level of MAP-ERK kinase 1 (MEK1), an inducer of AP-1, was elevated, suggesting its involvement in reduced ICS sensitivity (80).
In human airway epithelial cells, the ability of budesonide to inhibit TNF-α-induced IL-8 production was significantly reduced by IL-17A pretreatment (81), which could likely be attributed to enhanced phosphoinositide-3-kinase (PI3K) pathway activity and a subsequent reduced HDAC2 activity in response to IL-17A (81). Moreover, primary airway epithelial cells from asthmatics have been shown to express significantly higher levels of GRβ after IL-17A/F stimulation (82). These studies suggest that IL-17 might directly contribute to reduced ICS sensitivity of the airway epithelium.
Reduced ICS sensitivity has also been linked with high expression of GRβ. GRβ is an alternatively spliced form of the GR, which binds to the glucocorticoid response element (GRE) in the DNA but not to CS (76, 83). Thus, it competes with GRα and acts as a negative regulator (76, 83). Increased GRβ expression has been reported in PBMCs in response to IL-17 exposure. This subsequently hindered dexamethasone-mediated prevention of cell proliferation and apoptosis, resulting in prolonged inflammation (84). mRNA expression profiles of PBMCs (mitogen-induced kinase phosphatase 1, IL-8, and GRβ) are considered relevant parameters to predict the clinical response of asthmatics to CS (85). Indeed, PBMCs isolated from ICS-unresponsive asthmatic patients secreted significantly higher levels of IL-17A when compared to those from steroid-responsive asthmatics, and IL-17A production was inversely correlated with the clinical response to prednisolone (86). Moreover, patients with a higher IL-17 gene signature exhibited a lower response to ICS therapy in terms of the change in FEV1 % predicted over 30 months (8).
Recent findings by Ouyang et al. suggest that colony stimulating factor 3 (CSF3), a key neutrophil survival cytokine, mediates an IL-17A/ICS synergistic interaction leading to increased airway neutrophilic inflammation. Thus, it was demonstrated that IL-17A and dexamethasone synergistically induced CSF3 gene expression in human ASM cells in vitro by increasing gene promoter activity and preventing CSF3 mRNA degradation (87). Targeting with anti-IL-17A or the small-molecule IL-17 blocker cyanidin-3-glucoside (C3G) inhibited neutrophil influx into the airways in a steroid-insensitive neutrophil/Th17-high mouse model of acute asthma, underlining the significance of CSF3 in IL-17A-mediated reduction in ICS sensitivity (87).
TARGETING THE INTERLEUKIN-17 PATHWAY IN ASTHMA
The prevalence of heterogeneous phenotypes in severe asthma has become increasingly evident, which makes cytokine-targeted therapies likely to be beneficial only in specific patient populations (88). The effects of IL-17 on asthma are summarized in Figure 2. In this review, we have shown that IL-17 is upregulated in asthma, notably in severe asthma. Several preclinical and clinical studies reported relatively high levels of IL-17, particularly IL-17A and IL-17F, in sputum, nasal and bronchial biopsies, and blood of patients with severe asthma (10, 11, 13-15, 18). Therefore, targeting the IL-17/Th17 pathway may be a promising strategy, given its putative role in relevant processes in asthma, e.g. inflammation, airway remodeling, and AHR, as discussed in this review. However, the current treatment for severe asthma is focused predominantly on targeting Th2/eosinophilic inflammation, with sputum eosinophil counts and the exhaled nitric oxide fraction (FENO) as a therapy guide, and therapies primarily targeting Th2 inflammation (e.g., anti-IL-5, anti-IgE, anti-IL-4/IL-13) as treatment options (17). Furthermore, over the last decade, approximately 78% of all randomized controlled trials (RCTs) on biologics under consideration for severe asthma have been targeted toward a Th2-high endotype (88).
Despite the promising preclinical results discussed in this review, clinical trials on IL-17 inhibition have so far failed to demonstrate a sufficiently effective clinical response. Brodalumab, a human anti-IL-17 receptor monoclonal antibody, did not meet its primary efficacy outcome [the asthma control questionnaire (ACQ)] in moderate-to-severe asthmatics treated with ICS (89). A significant improvement in ACQ was only observed in the high-reversibility patient subgroup (post-bronchodilator FEV1 improvement ≥20%; n = 112) (89). The study was well designed, anticipating the heterogeneity of the severe asthma population. Prespecified subgroup analyses were done for nine different subgroups, based on bronchodilator reversibility, baseline FEV1 % predicted, ACQ, ICS dose, FeNO level, peripheral eosinophils, sex, race, and weight (89). It should be noted that Th17-high inflammation, in which the largest anti-IL-17 (Brodalumab) effects would be expected, was not assessed in these patients. Moreover, the study populations were highly atopic (83% in the total population; 79 and 84% in the placebo and Brodalumab treatment arms, respectively). It has been reported that atopic asthma is more likely to correlate with an eosinophilic phenotype; therefore, the subjects included in the study might not have been the best target group for Brodalumab (90, 91). Interleukin-17-targeted therapy such as Brodalumab may be more beneficial in specific Th17-high asthma subjects. Several endotyping and phenotyping strategies can be applied to define patients with Th17-high asthma, namely TAC3 gene signatures, 6GS, and IL-17 gene signatures, alongside airway inflammation phenotyping (eosinophilic, mixed, neutrophilic, and paucigranulocytic), as described previously in this review (6, 8, 35, 38). Moreover, Brodalumab has proven effective in other IL-17-driven inflammatory diseases, i.e., psoriasis, rheumatoid arthritis, and psoriatic arthritis, indicating its potential in Th17-high asthma (92, 93).
Phenotype-targeted clinical trials have proven valuable in evaluating biologic efficacy in asthma, but have not always been straightforward. Early anti-IL-5 monoclonal antibody (Mepolizumab) studies in asthmatics did not show significant results on clinical measures of asthma (asthma exacerbations, AHR, FEV1, peak flow recordings) (94)(95)(96). Nevertheless, airway and peripheral eosinophil levels were greatly decreased with Mepolizumab (95,96). Only after the focus was shifted toward patients with eosinophilic asthma were the desired clinical effects of Mepolizumab on exacerbation frequency found (97,98). Similar lessons about focusing on phenotype-specific subjects were learned from the development of other Th2-targeted asthma therapies, such as the anti-IL-4R (Dupilumab) monoclonal antibody (mAb) (99,100). This advocates for the evaluation of IL-17-targeted therapy in the Th17-high population. In addition, considering the significance of IL-17 in reduced ICS sensitivity, it will be important to also evaluate (oral) corticosteroid use and responsiveness as endpoints.
Another potential strategy for improving IL-17-targeted therapy in asthma is to antagonize the IL-17/IL-17 receptor interaction with small molecules instead of mAbs such as Brodalumab. The limitations of mAbs are their high cost, lack of oral bioavailability, and relatively large structures, which make them difficult to formulate as locally delivered medications (i.e., inhalers) (101,102). Small-molecule formulations might circumvent some of these problems. Indeed, small molecules have the potential to be developed into orally bioavailable drugs (103). A combined computational and hydrogen/deuterium exchange mass spectrometry (HDX-MS) study revealed an inhibitory small-molecule binding site, a β-hairpin, on the IL-17A ligand that disrupted the interaction with its receptor (103). Recently, two small molecules (CBG040591 and CBG060392) targeting the IL-17A/IL-17RA interaction were identified (101). Both molecules were biologically functional, preventing IL-17A-induced production of CCL20 and CXCL-8 in human keratinocytes. Moreover, CBG060392 showed partial inhibition of IL-17A intracellular signaling, suggesting a functional downstream effect (101). In another computational study, Cyanidin, a natural small-molecule compound, inhibited the IL-17A/IL-17RA interaction by docking into the IL-17RA pocket (104). Further, Cyanidin was shown to attenuate skin hyperplasia in a murine psoriasis model, reduce inflammation in a Th17-driven murine multiple sclerosis model, and alleviate AHR in murine obesity-induced and allergic asthma models (104). These findings encourage the exploration of other potential small molecules targeting IL-17/IL-17R.
Over the last 5 years, several preclinical studies have revealed (novel) key factors and mechanisms involved in Th17/IL-17-driven airway inflammation and its effects on ICS responsiveness, which have markedly improved our understanding of this complex pathway and will allow for more effective therapeutic targeting. Targeted interventions in humans have so far not been rewarding, but several innovative avenues for intervention are being explored. Also, in light of the high cost of researching and developing therapies, it is important to identify which patients will most likely benefit from an approach specifically aimed at Th17/IL-17 inhibition, i.e., those patients presenting with a high IL-17 gene signature and Th2-low airway inflammation. Future research is also warranted to further explore the mechanisms underpinning the effects of the Th17/IL-17 pathway on ICS responsiveness, to gain better insight into how to reverse reduced ICS sensitivity and ameliorate disease severity. | 2021-09-04T13:17:12.810Z | 2021-08-30T00:00:00.000 | {
"year": 2021,
"sha1": "f53d1f3fc4952d028a7b50dcc32d8c4fba8e36db",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/falgy.2021.694514/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "f53d1f3fc4952d028a7b50dcc32d8c4fba8e36db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235671663 | pes2o/s2orc | v3-fos-license | The Health & Aging Brain among Latino Elders (HABLE) study methods and participant characteristics
Abstract Introduction Mexican Americans remain severely underrepresented in Alzheimer's disease (AD) research. The Health & Aging Brain among Latino Elders (HABLE) study was created to fill important gaps in the existing literature. Methods Community‐dwelling Mexican Americans and non‐Hispanic White adults and elders (age 50 and above) were recruited. All participants underwent comprehensive assessments including an interview, functional exam, clinical labs, informant interview, neuropsychological testing, and 3T magnetic resonance imaging (MRI) of the brain. Amyloid and tau positron emission tomography (PET) scans were added at visit 2. Blood samples were stored in the Biorepository. Results Data were examined from n = 1705 participants. Significant group differences were found in medical, demographic, and sociocultural factors. Cerebral amyloid and neurodegeneration imaging markers were significantly different between Mexican Americans and non‐Hispanic Whites. Discussion The current data provide strong support for continued investigations that examine the risk factors for and biomarkers of AD among diverse populations.
INTRODUCTION
The percentage of Hispanics 65 and older in the United States will triple by the year 2050. 1 Along with this population growth, when compared to other racial/ethnic groups, Hispanic Americans are expected to experience the largest increase in Alzheimer's disease (AD) and AD-related dementias (ADRD) by 2060. 2 Yet relatively little research has examined mild cognitive impairment (MCI) and AD among Mexican Americans. 4,5 The extant literature suggests significant differences in MCI and AD among Mexican Americans as compared to non-Hispanic Whites with regard to age at onset, 6 genetic risks, 4,5 medical co-morbidities, 4,5 and biological profiles. 7,8,9 The Health & Aging Brain among Latino Elders (HABLE) study was initiated in September of 2017 under award R01AG054073 with the goals of (1) investigating factors underlying health disparities in MCI and AD among Mexican Americans (eg, younger age at onset) and (2) examining differential pathways to MCI and AD among Mexican Americans (ie, metabolic, inflammatory, depressive) as compared to non-Hispanic Whites. The HABLE study is intended to examine long-term factors associated with incident MCI and AD from mid to late life; therefore, the age for inclusion was set at 50.
The 2018 AT(N) framework 10 provided the field with a biological system for studying AD with the explicit goal of advancing novel clinical trials; however, there remains very little research on amyloid (A), tau (T), or neurodegeneration (N) among diverse populations. 11 In fact, the publication itself calls for examination among community-based, diverse populations. 10 Currently, the sequence, trajectories, timing, and even clinical impact of cerebral amyloid, tau, or neurodegeneration biomarkers among Mexican Americans are unknown. On the other hand, not only do traditional risk factors, proteomic profiles, and apolipoprotein E (APOE) ε4 genotype vary by racial/ethnic group, but data also point to racial/ethnic variability in core AD pathological markers in cerebrospinal fluid (CSF) 12,13,14 and at autopsy. 15,16,17,18 In August 2020, the HABLE-AT(N) grant was funded under award number R01AG058533 to examine the hypothesis that the presence, sequence, progression, incidence, and cognitive impact of amyloid, tau, and neurodegenerative biomarkers will be different among Mexican Americans as compared to non-Hispanic Whites. Together, grants R01AG054073 and R01AG058533 provide the structure for a large-scale multi-ethnic examination of the AT(N) framework.
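The AT(N) system labels each participant along three binary biomarker axes. The sketch below illustrates such a profile assignment in Python; the cut-points and field names are illustrative assumptions, not values taken from HABLE.

```python
# Hypothetical sketch of AT(N) profile assignment; the cut-points and
# variable names below are illustrative, not the HABLE study's values.
from dataclasses import dataclass

@dataclass
class Biomarkers:
    amyloid_suvr: float   # amyloid PET standardized uptake value ratio
    tau_suvr: float       # tau PET SUVR
    neurodeg_roi: float   # MRI "metaROI" cortical thickness (mm)

def atn_profile(b: Biomarkers,
                a_cut: float = 1.11,   # assumed amyloid positivity cut
                t_cut: float = 1.23,   # assumed tau positivity cut
                n_cut: float = 2.67):  # assumed thickness cut (lower = N+)
    """Return an AT(N) label such as 'A+T-(N)+'."""
    a = "+" if b.amyloid_suvr >= a_cut else "-"
    t = "+" if b.tau_suvr >= t_cut else "-"
    n = "+" if b.neurodeg_roi <= n_cut else "-"
    return f"A{a}T{t}(N){n}"

print(atn_profile(Biomarkers(1.3, 1.1, 2.5)))  # -> A+T-(N)+
```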
The goal of this article is to provide an overview of the HABLE methods for the field. HABLE data, images, and biofluid samples are now publicly available. 19
METHODS
The HABLE protocol takes place over multiple appointments.
Interview and Medical/Functional Exam
A custom electronic data capture (EDC) system was generated. The HABLE interview includes, but is not limited to, questions regarding demographic, medical, and sociocultural factors.
Informant Interview
All participants provide an informant who is familiar with the participant to answer questions regarding daily functioning. A standardized assessment is administered for the Clinical Dementia Rating (CDR) 31 scale and the physician's estimate of duration (PED). 32
Cognitive Assessment
The cognitive battery includes tests to assess global cognition, attention/executive functioning, memory, language, and premorbid intelligence (see Table 2). Also indicated are the tests that overlap with ADNI, SOL/INCA, and LEADS. Based on our recently published methods, 33,34 HABLE normative ranges were calculated stratified by education (0-7 years, 8-12 years, and 13+ years), primary language (English or Spanish), and age (median split; ≤65 and ≥66), which are used to assign cognitive diagnoses.
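A minimal sketch of such stratified norming, assuming a pandas DataFrame with hypothetical column names (the stratum boundaries follow the text):

```python
# Sketch of education/language/age-stratified norms; column names are
# hypothetical, and strata follow the text (0-7/8-12/13+ education,
# English/Spanish, median-split age at 65/66).
import pandas as pd

def add_stratified_z(df: pd.DataFrame, score_col: str) -> pd.DataFrame:
    df = df.copy()
    df["edu_band"] = pd.cut(df["education_years"], bins=[-1, 7, 12, 99],
                            labels=["0-7", "8-12", "13+"])
    df["age_band"] = (df["age"] >= 66).map({True: ">=66", False: "<=65"})
    strata = df.groupby(["edu_band", "age_band", "language"],
                        observed=True)[score_col]
    df[score_col + "_z"] = ((df[score_col] - strata.transform("mean"))
                            / strata.transform("std"))
    return df
```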
Imaging
All HABLE neuroimaging scans are stored, managed, and processed by the University of Southern California (USC) Laboratory of Neuroimaging (LONI). 35 The HABLE MRI protocol is based on that of ADNI3, using a 3T Siemens Magnetom SKYRA whole-body scanner. The following scan sequences were captured, including T1-weighted whole-brain volumes. Images are reconstructed immediately after the 30-min emission scan.
Blood Collection and Processing Procedures
Fasting blood samples are collected and processed per the international guidelines. 37
Proteomic Assays
All assay preparation is completed using a custom
Exosome Processing
Plasma neuronal-derived exosomes (NDEs) are assayed per our previously published protocols. 41 Detailed protocols will be available from the Omics Core. L1CAM-positive NDE cargo proteins will be quantified using the Quanterix Simoa assay for Aβ40.
Cognitive Diagnosis and Consensus Review
Research cognitive diagnoses were assigned based on self-report and informant report of daily function, expert clinician assignment of CDR scores (using daily function and cognitive information), and neuropsychological test performance.
Participants and Preliminary Data
As of June 2020, there were a total of n = 1786 participants enrolled in HABLE with data entry and consensus completed on n = 1705, which were included in the current analyses. Due to coronavirus disease 2019 (COVID-19), recruitment was halted in April 2020. Study procedures began again in July 2020, with Visit 1 assessments ongoing until n = 2000 participants have been enrolled.
Demographics
Demographic characteristics of the cohort (total and split by ethnicity) are presented in Table 3. The Mexican American cohort was significantly younger (p < 0.001), had fewer years of education (p < 0.001), had a lower annual household income (p < 0.001), and had a higher body mass index (BMI) (p < 0.001) than the non-Hispanic White cohort.
The Mexican American cohort was less likely to own their residence (p < 0.001), less likely to have insurance (p < 0.001), and less likely to have a primary care provider (p < 0.001). The Mexican American cohort was more likely to have a consensus diagnosis of hypertension (p = 0.002) and more likely to have a diagnosis of type 2 diabetes (p < 0.001). There was no significant difference in dyslipidemia or depression prevalence. Mexican American MCI cases were younger than non-Hispanic White MCI cases (mean age = 71.07, SD = 9.94) (p < 0.001).
Cognitive Testing
Raw neuropsychological test scores for the cohort (total and split by ethnicity) are provided in Table 4. Analysis of covariance (ANCOVA) models were conducted using age, gender, education, and primary language as covariates.
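One such adjusted comparison could be set up as follows; this is a sketch on synthetic data with hypothetical column names, not the study's actual analysis code.

```python
# Sketch of an ANCOVA testing an ethnicity effect on a raw test score
# while adjusting for age, gender, education, and primary language.
# Data and column names are synthetic/hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "test_score": [25, 27, 22, 30, 24, 28, 21, 29],
    "ethnicity": ["MA", "NHW"] * 4,
    "age": [60, 72, 55, 68, 63, 70, 58, 66],
    "gender": ["F", "M", "M", "F", "F", "M", "F", "M"],
    "education_years": [8, 14, 6, 16, 10, 12, 7, 15],
    "language": ["Spanish", "English", "English", "English",
                 "Spanish", "English", "Spanish", "English"],
})
model = smf.ols("test_score ~ C(ethnicity) + age + C(gender) "
                "+ education_years + C(language)", data=df).fit()
print(model.params)
```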
Neuroimaging Biomarkers
MRI: In order to measure "neurodegeneration" from the AT(N) framework (ie, N), the "metaROI" for N was calculated per Jack. 46 In addition, total brain volume and hippocampal thickness were examined. These findings are also important when considering putative factors (risk and/or causal) associated with MCI and AD among Mexican Americans. Mexican Americans were classified as having MCI at significantly younger ages, which is consistent with our prior work. 5 Mexican Americans were more likely to be classified as MCI as compared to non-Hispanic Whites, whereas no differences in dementia prevalence were observed. The HABLE team examined ADNI criteria for MCI, which resulted in a 30% MCI rate in this cohort, which was considered overpathologizing. Therefore, the more traditional ≤1.5 SD cut-score was implemented. In addition, normative references were created based on prior work 33 ; however, the team has ongoing studies examining multiple methods for normative consideration that may impact prevalence rates of diagnostic categories across ethnic groups.
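The ≤1.5 SD impairment rule described above reduces, in code, to a simple threshold test on the stratified z-scores; the consensus diagnostic process itself is of course richer than this sketch.

```python
# Sketch of the <= -1.5 SD impairment rule applied to stratified z-scores;
# a simplification of the study's consensus diagnostic process.
def impaired(z_scores, cut: float = -1.5) -> bool:
    """Flag impairment if any domain z-score falls at or below the cut."""
    return any(z <= cut for z in z_scores)

print(impaired([-0.4, -1.7, 0.2]))  # -> True
```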
SUMMARY AND CONCLUSIONS
The current findings demonstrate a link between ethnicity and biological markers thought to be associated with MCI and AD. Mexican Americans had higher levels of glucagon-like peptide-1 (GLP-1), insulin, and glucagon. The same held for glucose and HbA1c (data not shown).
Mexican Americans also had significantly higher levels of plasma Aβ40 and total tau. With regard to imaging, Mexican Americans had lower levels of amyloid positivity, and significant differences were observed in multiple MRI-based measures of neurodegeneration.
By the year 2045, 1 the United States will become largely "non-White," with 14% of the U.S. population being African American and 25% being Latino. 1 In addition, by the year 2060, the U.S. population age 65 and older will grow more among the African American and Hispanic communities than among the non-Hispanic White community. 13 African Americans currently have the highest prevalence of AD and ADRD, whereas Hispanics will experience the greatest increase in ADRDs 2 by 2060. Based on these data, the HABLE study (now entitled the Health & Aging Brain Study - Health Disparities, HABS-HD) has expanded to add 1000 African Americans and now includes the three largest racial/ethnic groups in the United States (75% of the population). The overall goal of HABS-HD is to examine the biomarkers of AD within a health disparities framework. All HABS-HD data are available to the global scientific community to foster a more advanced understanding of the biological, social, cultural, and environmental factors associated with MCI and AD. | 2021-06-30T05:25:16.361Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3a1b0d82fce94c298bc3d92432ef25f6da61fb42",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1002/dad2.12202",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a1b0d82fce94c298bc3d92432ef25f6da61fb42",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119123930 | pes2o/s2orc | v3-fos-license | Poincaré series of compressed local Artinian rings with odd top socle degree
We define a notion of compressed local Artinian ring that does not require the ring to contain a field. Let $(R,\mathfrak m)$ be a compressed local Artinian ring with odd top socle degree $s$, at least five, and $\operatorname{socle}(R)\cap \mathfrak m^{s-1}=\mathfrak m^s$. We prove that the Poincaré series of all finitely generated modules over $R$ are rational, sharing a common denominator, and that there is a Golod homomorphism from a complete intersection onto $R$.
INTRODUCTION.
Let (R, m, k) be a local ring and M be a finitely generated R-module. The Poincaré series of M records the ranks of the free modules in a minimal resolution of M. It was asked in [39, pg. 118] whether the Poincaré series of a local ring is always a rational function. Considerable study was devoted to this question (see, for example, the survey articles [35,8]) before Anick [1] showed that the answer is no.
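For reference, the series in question, together with the benchmark value for a regular local ring (both standard facts rather than quotations from this paper), can be displayed as:

```latex
% Definition of the Poincare series and the regular benchmark; standard facts.
\[
  P^{R}_{M}(z) \;=\; \sum_{i \ge 0} \dim_{k} \operatorname{Tor}^{R}_{i}(M, k)\, z^{i},
  \qquad
  P^{R}_{k}(z) = (1+z)^{e} \ \text{ when } R \text{ is regular of embedding dimension } e .
\]
```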
Consideration of rational and transcendental Poincaré series has only intensified since the appearance of Anick's example. The example has been simplified, reworked, and reformulated in the language of Algebraic Topology; see the discussion following Problem 4.3.10 in [4] for more details, including references. Roos [36] calls a local ring R good if the Poincaré series of all finitely generated modules over R are rational, sharing a common denominator. A list of applications of the hypothesis that a local ring is good may be found in [3]. The recent papers [26,20,15,14] all prove that a family of rings has rational Poincaré series.
Nonetheless, at the Introductory Workshop for the special year in Commutative Algebra at the Mathematical Sciences Research Institute in 2012, Irena Peeva observed [32] that "We do not have a feel for which of the following cases holds. One would like to have results showing whether the Poincaré series are rational generically, or are irrational generically." A first answer to Peeva's problem is given in the paper by Rossi and Şega [37], where it is shown that if R is a compressed Artinian Gorenstein local ring with top socle degree not equal to three, then the Poincaré series of all finitely generated modules over R are rational, sharing a common denominator. (In particular, these rings are good, in the sense of Roos.) The Rossi–Şega theorem is a complete answer to the Peeva problem for generic Artinian Gorenstein rings because generic Artinian Gorenstein rings are automatically compressed. Furthermore, it is necessary to avoid top socle degree three because Bøgvad [10] has given examples of compressed Artinian Gorenstein rings with top socle degree three which have transcendental Poincaré series.
In the present paper we carry the Rossi–Şega program further. As in the Gorenstein case, once the relevant parameters are fixed, the set of Artinian standard-graded k-algebras is parameterized (now by a non-empty open subset in a chain of relative Grassmannians) and (when k is infinite) the points on a non-empty open subset of this parameter space correspond to compressed algebras. As in [37], we ignore the parameter space and the cardinality of k; instead, we prove that compressed local Artinian rings with odd top socle degree s are also good in the sense of Roos, provided 5 ≤ s and socle(R) ∩ m^{s−1} = m^s. Our result then applies in the "generic case" whenever the "generic case" makes sense. Bøgvad's examples also apply in our situation, so we too are forced to exclude top socle degree equal to three.
Our argument is inspired by the proof in [37]. The key ingredient from local algebra in our proof is Lemma 4.7, which can be interpreted as a statement about the structure of the Koszul homology algebra Tor^Q(R, k) = H(R ⊗_Q K), where Q → R is a surjection of local rings of the same embedding dimension e, (Q, n, k) is a regular local ring, (R, m, k) is a compressed local Artinian ring, and K is the Koszul complex which is a minimal resolution of k by free Q-modules. When the hypotheses of Lemma 4.7 are in effect, the conclusion may be interpreted to say that there is an element ḡ in Tor_1^Q(R, k) with ḡ · Tor_{e−1}^Q(R, k) = Tor_e^Q(R, k).
We use this conclusion to create a Golod homomorphism from a hypersurface ring onto R.
Our reliance on Lemma 4.7 explains the hypotheses in the main theorem about the shape of the socle of R. In particular, when the top socle degree of R is even, it is possible for a non-Golod compressed Artinian standard-graded k-algebra R to have Tor_1 · Tor_{e−1} = 0. (We are writing Tor_i in place of Tor_i^Q(R, k).) For example, if e = 4 and socle(R) is isomorphic to k(−4)^2, then, according to [11, Conj. 3…], the Betti table of R is the one displayed in the language of Macaulay2 [23] or of Boij [11, Notation 3.4], respectively. The numerology alone shows that Tor_1 · Tor_3 = 0, but the numerology permits Tor_2 · Tor_2 to be non-zero, and this is precisely what happens. In a similar manner, if the top socle degree of R is three, or if (socle(R) ∩ m^{s−1})/m^s ≠ 0, then the Betti tables (in the homogeneous case) permit too many non-zero products in Tor. Consequently, if R is compressed and the top socle degree of R is 3, or the top socle degree of R is even, or (socle(R) ∩ m^{s−1})/m^s is non-zero, then our techniques are not able to determine whether the Poincaré series of R is rational. In these cases, the question of Peeva remains wide open. We prove that the Poincaré series of R is rational by exhibiting a Golod surjection from a complete intersection onto R. It is worth observing that the existence of such a map is an important conclusion in its own right. For example, this hypothesis is used in [31] in the study of the rigidity of the two-step Tate complex, in [5] in the study of the non-vanishing of Tor_i^R(M, N) for infinitely many i, and in [6] in the study of the structure of the set of semi-dualizing modules of a ring R.
Theorem 7.1 is the main result of the paper. To prove this theorem we apply Lemma 5.2, which is established in [37]. Lemma 5.2 is a down-to-earth criterion for proving that a given surjection of local rings is a Golod homomorphism. Massey operations are replaced with calculations involving Tor_•^Q(−, k), where Q is a regular local ring.
A compressed local Artinian ring R exhibits extremal behavior. Such a ring has maximal length among all local Artinian rings with the same embedding dimension and socle polynomial. Extremal objects exhibit special properties and deserve extra study. Indeed, there are many applications of compressed rings, and these rings have received much study; see, for example, [38,28,22,21,9,24,11,12,33,41,42,34,29,16,18,37,19,27]. However, for thirty years, 1984–2014, the notion of "compressed" ring was defined only for rings containing a field. Finally, in 2014, Rossi and Şega [37] proved that the notion of "compressed local Artinian Gorenstein ring" is meaningful, interesting, and works just as well in the non-equicharacteristic case. Furthermore, their theorem about rational Poincaré series is valid in the non-equicharacteristic case.
In Sections 3 and 4 we embrace the philosophy of [37] and prove that the phrase "compressed local Artinian ring" is meaningful whether or not the ring contains a field and whether or not the ring is Gorenstein. Furthermore, our main theorem, Theorem 7.1, is valid in the context of this enlarged notion of "compressed ring". It is worth noting that although we adopt the philosophy of [37], the techniques about Gorenstein compressed rings in [37] are not relevant in our situation. Our technique is introduced in Section 3, and the proof that the phrase "compressed local Artinian ring" is meaningful is carried out in Section 4. This study of compressed local Artinian rings is an important feature of the present paper.
Section 2 consists of preliminary matters. In Section 3 we introduce our duality technique for studying local Artinian rings. In Section 4 we prove that the notion of compressed ring is meaningful in the non-equicharacteristic case. Section 5 is concerned with Golod homomorphisms and the homological algebra that can be used to prove that a homomorphism is Golod. In Section 6 we explore the consequences of the hypothesis "compressed" on the homological algebra of Section 5. The proof of the main theorem is given in Section 7. In Section 8 it is shown that in the situation of the main theorem, R/m^s is a Golod ring and the natural map R → R/m^s is a Golod homomorphism; furthermore, the final statement continues to hold even if (socle(R) ∩ m^{s−1})/m^s is not zero.
NOTATION, CONVENTIONS, AND PRELIMINARY RESULTS.
In this paper k is always a field. If L is the zero module, then we also use "annihilator notation" to describe these "colon modules"; that is, ann_A M = 0 :_A M and ann_N I = 0 :_N I.
Any undecorated ":" or "ann" means :_A or ann_A, respectively, where A is the ambient ring.
2.2.
If I is an ideal in a ring A, N is an A-module, and L and M are submodules of N with IL ⊆ M, then let mult : I → Hom_A(L, M) denote the homomorphism which sends the element θ of I to the homomorphism mult_θ of Hom_A(L, M), where mult_θ(ℓ) = θℓ for all ℓ in L.
2.3. "Let (R, m, k) be a local ring" identifies m as the unique maximal ideal of the commutative Noetherian local ring R and k as the residue class field k = R/m.
(d) If (R, m, k) is a local Artinian ring, then the top socle degree of R is the maximum integer s with m^s ≠ 0, and the socle polynomial of R is the formal polynomial ∑_{i=0}^{s} c_i z^i, where the coefficients c_i record the dimensions of the graded pieces of the socle with respect to the m-adic filtration. Further comments about the phrase "top socle degree" may be found in Remark 2.10. (e) If M is a finitely generated R-module, then µ(M) denotes the minimal number of generators of M.
(f) Define v(R) = min { i : dim_k(m^i/m^{i+1}) < \binom{e-1+i}{i} }, where e is the embedding dimension of R. (This notation is introduced in [37, (4.1.1)].) Remark. Let (R, m, k) be a local Artinian ring with top socle degree s and socle polynomial equal to ∑_{i=0}^{s} c_i z^i. Part of the hypothesis of Theorem 7.1 is that socle(R) ∩ m^{s−1} = m^s. This condition is equivalent to c_{s−1} = 0. Observation 2.4 follows quickly from the definition of v(R), gives an idea of the significance of this invariant, and is used in the proof of Corollary 4.5.
Observation 2.4. If (R, m, k) is a local Artinian ring, x_1 is a minimal generator of m, and i is an integer with 0 ≤ i ≤ v(R) − 2, then the linear transformation (2.4.1) m^i/m^{i+1} → m^{i+1}/m^{i+2}, which is given by multiplication by x_1, is an injection. In particular, socle(R) ⊆ m^{v(R)−1}.
Proof. Extend the set {x_1} to a minimal generating set x_1, x_2, …, x_e for m. If d is an arbitrary non-negative integer, then the set of monomials in x_1, …, x_e of degree d represents a generating set for m^d/m^{d+1}. If d < v(R), then the number of monomials in this set is equal to the dimension of the vector space m^d/m^{d+1}, and hence this set of monomials is a basis for the vector space. The index i satisfies i + 1 < v(R); consequently, the linear transformation (2.4.1) carries a basis of m^i/m^{i+1} to part of a basis of m^{i+1}/m^{i+2}; and therefore, this linear transformation is an injection.
Definition 2.5. Let (R, m, k) be a local Artinian ring of embedding dimension e, top socle degree s, and socle polynomial ∑_{i=0}^{s} c_i z^i. If the Hilbert function of R is given by dim_k(m^i/m^{i+1}) = min { \binom{e-1+i}{i}, ∑_{ℓ=i}^{s} c_ℓ \binom{e-1+ℓ-i}{ℓ-i} } for 0 ≤ i ≤ s, then R is called a compressed local Artinian ring.
Alternate definitions of "compressed local Artinian ring" are given in Theorem 4.4 and Remark 4.4.2.
2.6. If S is a ring and M is an S-module, then let λ_S(M) denote the length of M as an S-module.
2.7.
Let k be a field. A graded ring R = ⊕_{0≤i} R_i is called a standard-graded k-algebra if R_0 = k, R is generated as an R_0-algebra by R_1, and R_1 is finitely generated as an R_0-module.
2.8.
If M is a module over the local ring (R, m, k), then R^g = ⊕_{0≤i} m^i/m^{i+1} and M^g = ⊕_{0≤i} m^iM/m^{i+1}M are the associated graded objects with respect to the maximal ideal: R^g is a standard-graded k-algebra and M^g is a graded R^g-module.
2.9.
If V is a graded vector space over the field k with V_i finite dimensional for all i and V_i = 0 for all sufficiently small i, then the formal Laurent series HS_V(z) = ∑_i dim_k(V_i) z^i is the Hilbert series of V. Remark 2.10. (d) The top socle degree of R is the top degree of the associated graded ring R^g.
The following calculation is used in the proof of Lemma 6.3.
Remark 2.11. If k is an infinite field, (Q, n, k) is a regular local ring of embedding dimension e, t is an integer, and h_0 is an element of n^t \ n^{t+1}, then there exists a minimal generating set X_1, …, X_e for n such that h_0 − uX_1^t is in the ideal (X_2, …, X_e)n^{t−1}, for some unit u in Q. In particular, there is a generator h for the ideal (h_0) of Q such that h − X_1^t ∈ (X_2, …, X_e)n^{t−1}.
Outline of proof. Pass to the associated graded ring Q^g. If h_0 is a non-zero homogeneous form of degree t in k[X_1, …, X_e], where k is an infinite field, then there exists a homogeneous change of variables such that, in the new variables, h = x_1^t + g, where g is a homogeneous form of degree t in the ideal (x_2, …, x_e). The proof is clear. Start with the substitution X_1 = x_1 and X_i = x_i + a_i x_1 for 2 ≤ i ≤ e. After the change of variables, h_0 = h_0(1, a_2, …, a_e)x_1^t + g, where g is a homogeneous form of degree t in the ideal (x_2, …, x_e). The field k is infinite; so there exists a point (a_2, …, a_e) in affine (e − 1)-space with h_0(1, a_2, …, a_e) ≠ 0.
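The substitution in the outline can be made explicit; the display below is a routine expansion under the substitution named above, not a quotation from the paper:

```latex
% Routine expansion under X_1 = x_1, X_i = x_i + a_i x_1 (2 <= i <= e).
\[
  h_0\bigl(x_1,\; x_2 + a_2 x_1,\; \dots,\; x_e + a_e x_1\bigr)
  \;=\; h_0(1, a_2, \dots, a_e)\, x_1^{t} \;+\; g,
  \qquad g \in (x_2, \dots, x_e),
\]
% every monomial of the expansion other than x_1^t picks up at least one of x_2, ..., x_e.
```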
2.12. If P = ⊕_i P_i is a graded ring, and A = ⊕_i A_i and B = ⊕_i B_i are graded P-modules, then the modules Tor_p^P(A, B) are graded: if F is a resolution of A by free P-modules, homogeneous of degree zero, then Tor_p^P(A, B) inherits its grading from H_p(F ⊗_P B).

2.13. If Y is a complex, then we use Z_i(Y), B_i(Y), and H_i(Y) to represent the modules of i-cycles, i-boundaries, and i-th homology of Y, respectively. So, in particular, H_i(Y) = Z_i(Y)/B_i(Y).

2.14. Let (R, m, k) be a local ring of embedding dimension e. The ring R is Golod if P^R_k(z) = (1+z)^e / (1 − ∑_{1≤i} dim_k H_i(K^R) z^{i+1}), where K^R is the Koszul complex on a minimal set of generators of m.
SOCLE.
In order to study compressed rings, one must have an appropriate duality theory. Partial derivatives provide the duality for Iarrobino [28]. Fröberg and Laksov [22] and Boij and Laksov [13] pick a vector space V in the polynomial ring k[x_1, …, x_e] and use colon ideals to define an ideal I in the polynomial ring with the property that the corresponding quotient ring has socle V. The colon ideals provide the duality in these cases. Rossi and Şega [37] work in a Gorenstein ring and use Gorenstein duality directly. Duality for us is supplied by homomorphisms from a power of the maximal ideal to the socle.
Let (R, m, k) be a local Artinian ring with top socle degree s. If j and k are integers with 0 ≤ j, 1 ≤ k, and j + k ≤ s + 1, then the R-module homomorphism mult of (3.0.1) is an injection.

Proof. The proof may be iterated; consequently, it suffices to prove the result for ε equal to 1. Let x_1, …, x_e be a minimal generating set for m. For each integer i, let … . If m … and γ is an integer with 1 ≤ γ ≤ c, then the R-module homomorphisms φ_{m,γ}, which are defined by … , form a basis for the vector space Hom_R(m^i, socle(R) ∩ m^{A+B}). In this discussion, "δ" is the Kronecker delta; that is, … . Fix … and an index γ with 1 ≤ γ ≤ c. We complete the proof by showing that the basis element … .
COMPRESSED LOCAL ARTINIAN RINGS.
A compressed local Artinian ring has maximal length among all local Artinian rings with the same embedding dimension and socle polynomial. Compressed algebras were introduced by Iarrobino [28]. Fröberg and Laksov [22] offer an alternate discussion, essentially from the dual point of view. Traditionally, the concept "compressed" was defined only for equicharacteristic rings. However, the equicharacteristic hypothesis is irrelevant, and the proof of our main theorem (Theorem 7.1) holds for arbitrary compressed local Artinian rings.
There are two themes in this section. In Theorem 4.1 and Remark 4.2 we explain the sense in which generic standard-graded Artinian algebras over a field are compressed. A short, self-contained, and direct proof of Theorem 4.1 may be found in [13].
In Theorem 4.4 and Corollary 4.5 we justify the first sentence of the present section and we describe the annihilator of each large power of the maximal ideal of R when R is a compressed local Artinian ring. This information is used heavily in the proofs of Corollary 4.6 and Lemma 4.7. Lemma 4.7 is the key result from local algebra that is used in the second half of the paper about Poincaré series.

Theorem 4.1. Let … , Q be a standard-graded polynomial ring over k of embedding dimension e, G be the Grassmannian of subspaces of Q_s of codimension c, and L be the set of homogeneous ideals I of Q such that Q/I is a standard-graded Artinian k-algebra with socle polynomial cz^s. Then the following statements hold.
There is a non-empty open subset of G for which the corresponding quotient Q/I is compressed.
Let k be an infinite field. It is shown in Section 7 of [22], especially Theorem 14, that generic standard-graded Artinian k-algebras are compressed for all legal socle polynomials. (Theorem 4.1 deals only with socle polynomials of the form cz^s.) The exact details of the result in [22] are similar to, but more complicated than, the details of Theorem 4.1. There is no need to record the details of the statement of [22] in the present paper. The extra complication arises because … is a filtration of m^j. The proof is obtained by exhibiting an injection from each factor of filtration (4.3.1) into a vector space whose dimension is easy to approximate.
If k is an integer with 1 ≤ k ≤ s + 1 − j, then the R-module injection mult of (3.0.1) yields … . Recall that … , because m^{k−1} is generated by the set of monomials of degree k − 1 in any minimal generating set of m, and by the definition of socle polynomial. Thus, … . Let K = k + j − 1; reverse the order of summation; let α = K − j; and recall the relationship between the number of monomials of degree at most ℓ − j in e variables and the number of monomials of degree equal to ℓ − j in e + 1 variables to conclude the desired bound.

In Theorem 4.4 we justify the claim that a compressed local Artinian ring has maximal length among all local Artinian rings with the same embedding dimension and socle polynomial. The proof of Theorem 4.4 contains a wealth of information. We mine this information throughout the rest of the section.
Theorem 4.4. Let (R, m, k) be a local Artinian ring with embedding dimension e, top socle degree s, and socle polynomial ∑_{i=0}^{s} c_i z^i. Then the following statements hold.
(a) The length of R satisfies the inequality (4.4.1): … . We also show that an alternate version of (4.4.1) is given by (4.4.3): … . Once the proof of Theorem 4.4 is complete, we know that a local Artinian ring R is compressed if and only if equality holds in (4.4.3). This observation provides an effective method for testing whether a ring is compressed.
Proof. Define t to be the integer (4.4.4): … . Observe that … ; hence, the inequality (4.4.1) may be re-written as … . Reverse the order of summation, let α = ℓ − i, and count the number of monomials of degree at most ℓ − t in e variables to see that … ; therefore, the inequality (4.4.1) is equivalent to (4.4.6). On the other hand, the inequality (4.4.6) does indeed hold, because … , as described at (4.3.2), and Proposition 4.3 guarantees that … . This completes the proof of (a).
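The monomial count invoked in the proof of (a) is the standard hockey-stick identity; writing n for the top degree, it reads:

```latex
% Hockey-stick identity: monomials of degree at most n in e variables
% correspond to monomials of degree exactly n in e + 1 variables.
\[
  \sum_{d=0}^{n} \binom{e-1+d}{d} \;=\; \binom{e+n}{n}.
\]
```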
The parameter c_s is at least 1; so, one consequence of the inequality (4.4.5), when … . The binomial coefficient \binom{e-1+i}{i} counts the number of monomials of degree i in a polynomial ring with e variables; therefore, the most recent inequality forces s − t < t, and therefore (4.4.9) holds.

(b) It is clear that if R is a compressed local Artinian ring in the sense of Definition 2.5, then equality holds in (4.4.1). Henceforth, in this proof, we assume that equality holds in (4.4.1). We first prove that R is a compressed local Artinian ring in the sense of Definition 2.5; that is, we prove that (4.4.10) holds. The inequality (4.4.6) is equivalent to (4.4.1); hence equality holds in (4.4.6) and in all of the intermediary inequalities that lead to (4.4.6). In particular, … follow from (4.4.7), and the map … is surjective for all j, k with (4.4.15): t ≤ j ≤ s and 1 ≤ k ≤ s − j + 1.
Therefore, the injections of (3.0.1) are isomorphisms and equality holds in (4.3.3) when j and k satisfy (4.4.15); that is, for t ≤ j ≤ s and 1 ≤ k ≤ s − j + 1.
Furthermore, equality holds in (4.3.4) for t ≤ j ≤ s. In particular, … for t ≤ j ≤ s. Combine (4.4.12) and (4.4.17) to see that (4.4.10) holds. This completes the proof of (b).
(c) The inequalities of (4.4.5) and (4.4.9) hold because of the definition of t which is given in (4.4.4). We assume equality holds in (4.4.1); so (4.4.10) holds. We conclude that t = v(R) and s ≤ 2v(R) − 1.
In Corollary 4.5 we describe the annihilator of each large power of the maximal ideal of R, when R is a compressed local Artinian ring. This information is used heavily in the proofs of Corollary 4.6 and Lemma 4.7.
Corollary 4.5. If (R, m, k) is a compressed local Artinian ring with top socle degree s, then the following statements hold. … Apply descending induction on j to see that (4.5.2) holds. Indeed, (4.5.2) holds when j = s. Assume that (4.5.2) holds for j + 1. We prove that (4.5.2) holds for j. Observe that … (the final equality is due to the induction hypothesis). Thus, … . The equality on the left is due to (4.5.3) and the equality on the right is due to (4.5.1).
(b) We saw in Observation 2.4 that (m^j : m) = m^{j−1} = m^{j−1} + socle(R) for … . Also, the assertion of (b) is obvious at j = s + 1. The parameter v(R) continues to equal the "t" of (4.4.4). We prove that if t + 1 ≤ j ≤ s, then … . It suffices to prove the inclusion "⊆". To do this, it suffices to prove the following claim.
Claim. If 2 ≤ a ≤ s − j + 2 and θ ∈ (m^j : m) ∩ (0 : m^a), then there exists an element … . We prove the claim. Observe that multiplication by θ is an element of … . Of course, we know from (4.4.16) that there is an element θ′ ∈ m^{j−1} ∩ (0 : m^a) with multiplication by θ′ equal to multiplication by θ on m^{a−1}.
(c) One direction of assertion (c) is obvious. We prove the other direction. The special hypothesis socle(R) = m^s of (c) guarantees that socle(R) ⊆ m^a for 0 ≤ a ≤ s; and therefore, under this special hypothesis, assertion (b) becomes … . Fix an element x in R and an integer i with 0 ≤ i ≤ s and xm^i = 0. We use descending induction to prove that … .

Proof. (⇐) This direction is obvious. Indeed, the Hilbert function of R is always equal to the Hilbert function of R^g, and the hypothesis asserts that the relationship of Definition 2.5 holds between h_{R^g} and the socle polynomial of R^g.
(⇒) As described above, it suffices to show that R and R^g have the same socle polynomial. The isomorphism theorem I/(I ∩ J) ≅ (I + J)/J ensures that … ; hence the socle polynomial of R, defined in 2.3.(d), is also equal to … , where s is the top socle degree of R. On the other hand, the socle polynomial of the graded local ring R^g is … . The ring R is compressed; hence … .

Proof. Let t denote v(R), which by hypothesis is equal to (s + 1)/2. It is clear that … . For the other direction, let σ be an element of m^s. We will construct an element Θ of ann_R(m′) ∩ m^t such that x_1^{t−1}Θ = σ. We build Θ as θ_0 + ⋯ + θ_{t−2}, where, for each i, (4.7.1) holds. We first build θ_0. Consider the homomorphism φ_0 ∈ Hom_R(m^{t−1}/m^t, socle(R) ∩ m^s), which is given by φ_0(m′m^{t−2}) = 0 and φ_0(x_1^{t−1}) = σ. (Keep in mind that m^{t−1}/m^t and m′m^{t−2} ⊕ k·x_1^{t−1} are isomorphic as R-modules. At this point ¯ means mod m^t.) Apply (4.4.16), with j = k = t, to obtain an element θ_0 ∈ m^t ∩ ann(m^t) with x_1^{t−1}θ_0 = σ and θ_0 m′m^{t−2} = 0. Suppose 0 ≤ i ≤ t − 3 and elements θ_0, …, θ_i, which satisfy (4.7.1), have been identified. We now build θ_{i+1}. Consider the homomorphism φ_{i+1}, which is given by … . (At this point ¯ means mod m^{t−i−1}. We have taken advantage of a direct sum decomposition of m^{t−i−2}/m^{t−i−1} to define φ_{i+1}. The image of φ_{i+1} is contained in the socle of R because of the properties of the earlier θ's as described in (4.7.1).) Apply (4.4.16), with j = t and k = t − i − 1, to obtain an element θ_{i+1}. Iterate this procedure to find θ_{t−2} and thereby complete the proof.
GOLOD HOMOMORPHISMS.
In this paper we exhibit a Golod homomorphism from a complete intersection onto a compressed local Artinian ring R and then use facts about Golod homomorphisms to draw conclusions about the Poincaré series of R-modules. The present section is mainly concerned with techniques from homological algebra that can be used to prove that a homomorphism is Golod. The hypothesis "compressed" is not used anywhere in the present section.

There are numerous definitions of Golod homomorphism (see, for example, [2]); we give the version involving trivial Massey operations, found, for example, in [25]. In Lemma 5.2 we record a result from [37] which shows how to use homological algebra to prove that trivial Massey operations exist. Most of the section is about homological algebra. Indeed, in Lemmas 5.4 and 5.5 we prove that various maps of Tor are zero. Lemma 5.5 is used in Observation 5.6 to show that if the top socle degree of a local Artinian ring R is small compared to the invariant v(R) of 2.3.(f), then R is a Golod ring. Lemmas 5.8 and 5.9 are a short study of the effect on Tor of taking a hypersurface section. The section concludes with Theorem 5.10, which is a well-known result that exhibits the common denominator for all Poincaré series P^R_M(z) when there is a Golod homomorphism from a local hypersurface ring onto R and M roams over all finitely generated R-modules. It is convenient to name the following family of maps of Tor. We use Lemma 5.4 to calculate ν_i. Associated graded objects are discussed in 2.8.
There are numerous definitions of Golod homomorphism (see for example [2]); we give the version involving trivial Massey operations, found, for example, in [25]. In Lemma 5.2 we record a result from [37] which shows how to use homological algebra to prove that trivial Massey operations exist. Most of the section is about homological algebra. Indeed, in Lemmas 5.4 and 5.5 we prove that various maps of Tor are zero. Lemma 5.5 is used in Observation 5.6 to show that if the top socle degree of a local Artinian ring R is small compared to the invariant v(R) of 2.3.(f), then R is a Golod ring. Lemmas 5.8 and 5.9 are a short study of the effect on Tor associated to taking a hypersurface section. The section concludes with Theorem 5.10 which is a well-known result that exhibits the common denominator for all Poincaré series P R M (z) when there is a Golod homomorphism from a local hypersurface ring onto R and M roams over all finitely generated R-modules. It is convenient to name the following family of maps of Tor. We use Lemma 5.4 to calculate ν i . Associated graded objects are discussed in 2.8.
Lemma 5.4. Let (Q, n,k k k) be a regular local ring, (R, m,k k k) be the local ring R = Q/I for some ideal I of Q, and i and ℓ be two integers. If Tor Q g i, j (R g ,k k k) = 0 for all j with ℓ + 1 + i ≤ j, then the map Proof. Let K R denote the Koszul complex over R on a minimal generating set x 1 , . . ., x e of m. We identify ν Q i (m ℓ ) with the map H i (m ℓ+1 K R ) → H i (m ℓ K R ) induced by the inclusion Let Z denote the module of cycles in degree i of m ℓ+1 K R and B denote the module of boundaries of degree i in m ℓ K R . Note that B ⊆ Z. To show that ν Q i (m ℓ ) is zero, we need to show that Z ⊆ B. We will show that Z ⊆ B + m j K R i for all j with ℓ + 2 ≤ j. For each j, let x * j denote the image the element x j in m/m 2 = (R g ) 1 . Let L denote the graded Koszul complex over R g on x * 1 , . . ., x * e . When writing L p,q , the index p stands for the homological degree and the index q for the internal degree. Note that L can be thought of as the associated graded complex of K R , with respect to the standard m-adic filtration of K R . In particular, L p = ((K R p ) g )(−p) for each p, and the differential d L of L is induced from the differential d K R of K R as follows: If y ∈ m q K R p and y * is the image of y in m q K R p /m q+1 K R p = L p,p+q , then d L (y * ) is equal to the image of d K R (y) in m q+1 K R p−1 /m q+2 K R p−1 = L p−1,p+q . We identify Tor Q g (R g ,k k k) with the homology of the complex L.
Fix an integer p with ℓ+1 ≤ p and let z ∈ Z ∩ m^p K^R_i. In particular, d^{K^R}(z) = 0. We consider z* to be the image of z in m^p K^R_i/m^{p+1}K^R_i = L_{i,p+i} and note that d^L(z*) = 0 because d^{K^R}(z) = 0. The hypothesis that Tor^{Q^g}_{i,j}(R^g, k) vanishes in the relevant internal degrees now forces z* to be a boundary in L, and the claim follows. … Lemma 5.5. The maps (5.5.1) and (5.5.2) are each the zero map for all (i, ℓ) with 1 ≤ i and 1 ≤ ℓ ≤ v(R) − 1. The map of (5.5.1) is induced by the natural quotient map R/m^{ℓ+1} → R/m^ℓ, and the map of (5.5.2) is induced by the natural quotient map R → R/m^ℓ.
Proof. It is clear that Tor^{Q^g}_{i,j}(Q^g, k) = 0 for all (i, j) ≠ (0, 0). The parameter ℓ is non-negative; so Lemma 5.4 yields that … is the zero map for all non-negative i. The long exact sequences of Tor which correspond to the commutative diagram … show that … is the zero map for all positive i. The ideal I is contained in n^{ℓ+1} and n^ℓ; so Q/n^{ℓ+1} = R/m^{ℓ+1} and Q/n^ℓ = R/m^ℓ. Thus, (5.5.1) is the zero map for all positive i. The map of (5.5.2) factors through (5.5.1).
Observation 5.6 takes care of the "easy case" in the proof of the main theorem, which is Theorem 7.1.
Observation 5.6. Let (R, m, k) be a local Artinian ring with top socle degree s. If s ≤ 2v(R) − 3, then R is a Golod ring.
Proof. Let t denote v(R). The ring R is complete and local; so the Cohen structure theorem guarantees that there is a regular local ring (Q, n, k) with R = Q/I and I ⊆ n^2. We apply Lemma 5.2, with a = t − 1, to show that the canonical quotient map Q → Q/I = R is a Golod homomorphism. It follows that R is a Golod ring. It suffices to show that (i) the map Tor^Q_i(R, k) → Tor^Q_i(R/m^{t−1}, k), induced by the quotient map R → R/m^{t−1}, is zero for all positive i, and (ii) the map Tor^Q_i(m^{2t−2}, k) → Tor^Q_i(m^{t−1}, k), induced by the inclusion m^{2t−2} → m^{t−1}, is zero for all non-negative i.
Condition (i) is established in Lemma 5.5 and (ii) obviously holds. Indeed, by hypothesis, the top socle degree s of R satisfies s ≤ 2t − 3. It follows that m^{2t−2} = 0.
The following two results are proven in [37]; but in each case the statement given in [37] is slightly different than the statement given here.
Set up 5.7. Let (Q, n, k) and (P, p, k) be local rings with P = Q/(h) for some element h in n^t with h not a zerodivisor on Q and 2 ≤ t. Let N ⊆ M be finitely generated P-modules, incl : N → M represent the inclusion map, and ϕ : Q → P represent the natural quotient map. For any P-module X, let ϕ^X_i : Tor^Q_i(X, k) → Tor^P_i(X, k) be the map on Tor induced by the change of rings ϕ : Q → P. For either ring A = P or A = Q, let incl^A_i : Tor^A_i(N, k) → Tor^A_i(M, k) be the map on Tor induced by the A-module homomorphism incl : N → M.
Remark. To prove these results, in each case start with the short exact sequence … and follow the argument given in [37]. Keep in mind that the hypothesis that … [40]; this construction planted a seed that evolved into the Eisenbud operators.
We conclude this section with a result which exhibits the common denominator for all Poincaré series P^R_M(z) when there is a Golod homomorphism from a local hypersurface ring onto R and M roams over all finitely generated R-modules.
Theorem 5.10. Let (Q, n, k) be a regular local ring of embedding dimension e, (P, p, k) be a local ring with P = Q/(h) for some h ∈ n^2, (R, m, k) be a local ring, κ : P → R be a surjective Golod homomorphism, ϕ^R_• : Tor^Q_•(R, k) → Tor^P_•(R, k) be the map induced by the natural quotient map Q → P, and d_R(z) be the polynomial … . Then, for every finitely generated R-module M, there exists a polynomial p_M(z) in Z[z] with P^R_M(z)d_R(z) = p_M(z). In particular, p_k(z) = (1 + z)^e.
Proof. Results of Levin, see for example [7, Prop. 5.18], give all of the conclusions except for the formula for d_R(z). The denominator d_R(z) is calculated in [37], although the exact form given above is not explicitly identified there. Most of the steps are well known. One starts with the equation … ; so it suffices to calculate P^R_k(z). The homomorphism κ is Golod; hence the equation … holds. The key new step is taken in [37, 2.2.1], where it is shown that … for all finitely generated P-modules X. (The calculation (5.10.1) is valid whenever the hypotheses of 5.7 are satisfied.) In the present calculation, one takes X to be R. The ring P is a hypersurface; consequently, the Poincaré series P^P_k(z) is well known. (Indeed, the resolution of k by free P-modules is known.) Combine everything to obtain the formula for d_R(z).
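The well-known identities behind this computation can be sketched as follows; both displays are standard facts about Golod homomorphisms and hypersurfaces (Levin, Avramov), recorded here for orientation rather than quoted from the paper:

```latex
% Standard identities used to assemble d_R(z); a sketch, not a quotation.
\[
  P^{R}_{k}(z) \;=\; \frac{P^{P}_{k}(z)}{\,1 - z\bigl(P^{P}_{R}(z)-1\bigr)\,},
  \qquad
  P^{P}_{k}(z) \;=\; \frac{(1+z)^{e}}{1-z^{2}}
  \quad\text{for the hypersurface } P = Q/(h),
\]
\[
  \text{so one may take}\quad
  d_{R}(z) \;=\; (1-z^{2})\bigl(1 - z\,(P^{P}_{R}(z)-1)\bigr),
  \quad\text{which clears } P^{R}_{k}(z) \text{ to } (1+z)^{e}.
\]
```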
COMPRESSED.
We deduce three homological consequences of the hypothesis that the local Artinian ring R is compressed. These lemmas (6.1, 6.3, and 6.4) play a major role in the proof of the main result, Theorem 7.1. Lemma 6.1. Let (Q, n, k) be a regular local ring and (R, m, k) be the local ring R = Q/I for some ideal I of Q. Assume that R is a compressed local Artinian ring of embedding dimension e. If v(R) ≤ ℓ, then the map ν^Q_i(m^ℓ) of Definition 5.3 is zero for i < e.
Proof. Apply Corollary 4.6 to see that R^g is a standard-graded compressed Artinian k-algebra with the top socle degree of R^g equal to the top socle degree of R and v(R^g) equal to v(R); and therefore, [22, Prop. 16] guarantees the required vanishing of Tor^{Q^g}_{i,j}(R^g, k); now apply Lemma 5.4.

Let (R, m, k) be a local Artinian ring. This ring is complete and local; hence the Cohen structure theorem guarantees that R is the quotient of a regular local ring. We often use information from Data 6.2. This information all automatically exists as soon as the local Artinian ring (R, m, k) is chosen. Observe that the parameter t of Data 6.2 is equal to the invariant v(R) of 2.3.(f).

Data 6.2. Let (Q, n, k) be a regular local ring and (R, m, k) be the local Artinian ring R = Q/I, where I is an ideal of Q with I ⊆ n^2. Define t to be the largest integer with I ⊆ n^t. Let (P, p, k) be the local hypersurface ring P = Q/L, where L is the principal ideal of Q generated by a non-zero element of I which is not in n^{t+1}, (K, ∂) be the Koszul complex which is a minimal resolution of k by free Q-modules, and π : Q → R and κ : P → R be the natural quotient homomorphisms.

Lemma 6.3. Let (R, m, k) be a compressed local Artinian ring of embedding dimension e and top socle degree s. Adopt Data 6.2. Assume that the field k is infinite and that s = 2t − 1. Then there exists G ∈ n^{t−1}K_1 such that ∂(G) generates L and … , where g denotes the image of G in P ⊗_Q K and ḡ is the image of G in R ⊗_Q K.
Proof. The field k is infinite; therefore we may apply Remark 2.11 and decompose n into subideals (X_1) + n′ with X_1 a minimal generator of n, µ(n′) = e − 1, and h − X_1^t in the ideal n′n^{t−1} of Q, for some generator h of L. The decomposition n = X_1Q + n′ induces a decomposition m = x_1R + m′ with x_1 equal to the image of X_1 and m′ equal to the image of n′. Let q be the ideal ann_R(m′) ∩ m^t of R. We proved in Lemma 4.7 that (6.3.1) x_1^{t−1}q = m^s. Let X_2, …, X_e be a minimal generating set for n′ and T_1, …, T_e be a basis for K_1 with ∂(T_i) = X_i. Recall that h has the property that h − X_1^t ∈ (X_2, …, X_e)n^{t−1}. It follows that there is an element G in K_1 of the form … ; then … by (6.3.2) = q ḡ T_2 ⋯ T_e = ḡ q T_2 ⋯ T_e ⊆ ḡ Z_{e−1}(q ⊗_Q K), since ∂(T_i)q = 0 for 2 ≤ i ≤ e.

Proof. Without loss of generality, we may assume that k is infinite. If … [y], and m′ = mR′, then the extensions Q → Q′, P → P′, and R → R′ are faithfully flat, and therefore ν^P_i = 0 if and only if ν^{P′}_i = 0, and … .

(a) Let ν^Q_i : Tor^Q_i(m^j, k) → Tor^Q_i(m^t, k) denote the map induced by the inclusion m^j ⊆ m^t. Apply Lemma 5.9 to the inclusion m^j ⊆ m^t. Observe that n^{t−1} annihilates m^j and m^t/m^j. Observe also that the map incl^A_i of 5.9 is now denoted ν^A_i for A = P or A = Q. We conclude that assertion (a) is equivalent to the assertion (6.4.1): … . The hypothesis that socle(R) ∩ m^j = m^s yields socle(m^j) ⊗_Q K_e = m^s ⊗_Q K_e. Thus, im ν^Q_e is equal to the submodule m^s ⊗_Q K_e of (socle(R) ∩ m^t) ⊗_Q K_e. We compute ϕ^{m^t}_e(m^s ⊗_Q K_e). Let G be as in Lemma 6.3. The image of G in P ⊗_Q K, denoted by g, is a cycle, and the minimal resolution of k by free P-modules is the Tate complex T = (P ⊗_Q K)⟨Y⟩, with … . The homomorphism ϕ^{m^t}_e is induced by the natural map … , where ḡ is the image of g in R ⊗_Q K. The defining property of Y, given in (6.4.5), together with the graded product rule yields (6.4.6) … , which establishes that the image of z under the map ϕ^{m^t}_e is represented by a boundary in (m^t ⊗_Q K)⟨Y⟩, and therefore is zero in H_e((m^t ⊗_Q K)⟨Y⟩) = Tor^P_e(m^t, k). This finishes the proof of (6.4.1) and hence the proof of (a).
(b) Apply Theorem 6.5 with b = t, τ = t − 1, K^R = R ⊗_Q K, and z_1 = ḡ. Recall that g ∈ Z_1(m^{t−1} ⊗_Q K). It is clear that the one-cycle ḡ squares to zero. We verify that hypothesis (6.5.1) is satisfied. On the one hand, Lemma 6.3 yields that m^s ⊗_Q K_e ⊆ ḡ Z_{e−1}(m^t ⊗_Q K) and, on the other hand, Lemma 6.1 yields that … . The following theorem is a special case of [17, Thm. 3.1]. This result was used in the proof of Lemma 6.4.

In this section we prove Theorem 7.1, which is the main result of the paper. The short version of the statement is: "If R is a compressed local Artinian ring with top socle degree s, with s odd, 5 ≤ s, and socle(R) ∩ m^{s−1} = m^s, then the Poincaré series of all finitely generated modules over R are rational, sharing a common denominator, and there is a Golod homomorphism from a complete intersection onto R." Recall that the data of 6.2 is constructed from R. Adopt Data 6.2. Then s ≤ 2t − 1 and the following statements hold: … , where c_s = dim_k(m^s); then, for every finitely generated R-module M, there exists a polynomial p_M(z) in Z[z] with P^R_M(z)d_R(z) = p_M(z). In particular, p_k(z) = (1 + z)^e.
Proof. It is shown in Theorem 4.4.(c) that s ≤ 2t − 1. If s < 2t − 1, then it is shown in the proof and statement of Observation 5.6 that π is a Golod homomorphism and R is a Golod ring. The statement about the common denominator d_R(z) is due to Lescot [30]; see also [4, Thm. 5.3.2].
Henceforth, we assume s = 2t − 1. The following two conditions hold: 7.1.1. the map … , induced by the canonical quotient map R → R/m^{t−1}, is zero for all positive i, and 7.1.2. the map ν^P_i : Tor^P_i(m^{2t−2}, k) → Tor^P_i(m^t, k), induced by the inclusion m^{2t−2} ⊆ m^t, is zero for all non-negative integers i. Now that 7.1.2 holds, the map Tor^P_i(m^{2t−2}, k) → Tor^P_i(m^{t−1}, k) is also zero, and Lemma 5.2 can be applied with a = t − 1 to conclude that κ is Golod.
Apply Theorem 5.10 to finish the proof. It remains to prove that the Hilbert series of the kernel of ϕ^R_• : Tor^Q_•(R, k) → Tor^P_•(R, k) is HS_{ker(ϕ^R_•)}(z) = z + c_s z^e. It suffices to prove that … . Observe that ϕ^R_0 : Tor^Q_0(R, k) → Tor^P_0(R, k) is the isomorphism k → k. It follows that dim_k ker(ϕ^R_0) = 0. Observe that ϕ^R_1 : Tor^Q_1(R, k) → Tor^P_1(R, k) is the natural map ker π/(n ker π) → ker π/(n ker π + L).
The kernel of this map has dimension 1 because one of the minimal generators of ker π has been sent to zero. It is shown in Lemma 7.2 that ker(ϕ^R_e) ≅ m^s. We complete the proof of (7.1.3), hence the proof of the Theorem, by showing that … . Lemma 6.1 yields that (7.1.6) is the zero map; hence, (7.1.5) is also the zero map. Apply Lemma 5.8, together with the fact that (7.1.5) is the zero map, to the inclusion m^{2t−2} ⊆ m^{t−1}. Observe that n^{t−1} annihilates m^{t−1}/m^{2t−2}. Conclude that … . One can now employ the commutative diagram in the proof of Claim 2 in the proof of [37, Lem. 3.4] to complete the proof of (7.1.4).
The following calculation is used in the proof of Theorem 7.1. Lemma 7.2. Adopt the notation and hypotheses of Theorem 7.1 with s = 2t − 1. Let ϕ^R_e : Tor^Q_e(R, k) → Tor^P_e(R, k) be the map induced by the natural quotient map Q → P and let K be the Koszul complex which is a minimal resolution of k by free Q-modules. Then ker(ϕ^R_e) = m^s ⊗_Q K_e.
Proof. As described at the beginning of the proof of Lemma 6.4, it does no harm to assume that k is infinite. The following consequence of Lemma 5.5 is used repeatedly in this proof.
(7.2.1) The homomorphism Tor^Q_i(R/m^t, k) → Tor^Q_i(R/m^{t−1}, k), which is induced by the natural quotient map R/m^t → R/m^{t−1}, is zero for 1 ≤ i.
We continue the identification of the functors H_•(− ⊗_Q K) and Tor^Q_•(−, k) which was begun in (6.4.2). In other words, we take Tor^Q_e(R, k) to be socle(R) ⊗_Q K_e and Tor^P_e(R, k) to be H_e((R ⊗_Q K)⟨Y⟩); furthermore, ϕ^R_e carries the cycle z in socle(R) ⊗_Q K_e to the homology class of z in (R ⊗_Q K)⟨Y⟩. The argument (6.4.6) shows that if z ∈ m^s ⊗_Q K_e, then the image of z in (m^t ⊗_Q K)⟨Y⟩ is a boundary; hence the image of z in (R ⊗_Q K)⟨Y⟩ is a boundary. Thus, m^s ⊗_Q K_e ⊆ ker(ϕ^R_e). We prove the other direction. Let w be an element of socle(R) ⊗_Q K_e which is an element of ker(ϕ^R_e). It follows that w is a boundary in (R ⊗_Q K)⟨Y⟩; therefore, w = ∂(a_0 + Y a_1 + Y^{(2)} a_2 + ⋯ + Y^{(m)} a_m) = ∂(a_0) + ḡa_1 + Y(∂(a_1) + ḡa_2) + ⋯ + Y^{(m−1)}(∂(a_{m−1}) + ḡa_m) + Y^{(m)}∂(a_m), for some a_i ∈ R ⊗_Q K_{e+1−2i}, with 1 ≤ i ≤ ⌊(e+1)/2⌋. The module K_{e+1} is zero; consequently, a_0 = 0. The (R ⊗_Q K)-module (R ⊗_Q K)⟨Y⟩ is free, with basis {Y^{(i)}}, and therefore (7.2.2) w = ḡa_1, ∂(a_1) + ḡa_2 = 0, …, ∂(a_{m−1}) + ḡa_m = 0, and ∂(a_m) = 0. It is possible that m = (e+1)/2 and a_m ∈ R ⊗_Q K_0 = R. Observe that, in this case, a_m ∈ m. Indeed, if a_m were a unit, then the equation ∂(a_{m−1}) + ḡa_m = 0 of (7.2.2) would yield that ḡ is a boundary in R ⊗_Q K, and it would follow from Lemma 6.3 that m^s ⊗_Q K_e ⊆ ∂(R ⊗_Q K_{e+1}) = 0. The most recent statement is impossible because R has top socle degree s.
We claim that for each i, there exist b_i ∈ R ⊗_Q K_{e+2−2i}, c_i ∈ m^{t−1} ⊗_Q K_{e+1−2i}, and d_i ∈ R ⊗_Q K_{e−2i} such that (7.2.3) holds: … . We prove (7.2.3) by descending induction.
If m < (e + 1)/2, then a_m is an (e + 1 − 2m)-cycle in R ⊗_Q K. (Of course, a_m is also a cycle in R/m^t ⊗_Q K.) Apply (7.2.1) to find b_m ∈ R ⊗_Q K_{e+2−2m} and c_m in m^{t−1} ⊗_Q K_{e+1−2m} with a_m = ∂(b_m) + c_m. If m = (e + 1)/2, then a_m ∈ m and a_m = ∂(b_m) for some b_m ∈ R ⊗_Q K_1.
We have assumed that s = 2t − 1 and that socle(R) ∩ m^{s−1} = m^s. It follows that w ∈ m^s ⊗_Q K_e, and the proof is complete.
FACTORING OUT THE HIGHEST POWER OF THE MAXIMAL IDEAL.
The hypotheses s = 2t − 1, 5 ≤ s, and socle(R) ∩ m^{s−1} = m^s are all in effect in the interesting case of the main theorem, Theorem 7.1. If we only assume s = 2t − 1, then we are not able to make any claim about the Poincaré series P^R_k; nonetheless, in Corollary 8.1, we prove that the homomorphism R → R/m^s is Golod. As a consequence, when all of the hypotheses of the interesting case of Theorem 7.1 are reimposed, we prove, in Corollary 8.3, that R/m^s is a Golod ring. … The maps (8.1.1) Tor^R_i(R/m^s, k) → Tor^R_i(R/m^t, k), induced by the natural quotient homomorphism R/m^s → R/m^t, are zero for all positive i. Apply Lemma 5.2 with P = R, R replaced by R/m^s, and a = t. Condition (a) of Lemma 5.2 is satisfied by (8.1.1). Condition (b) of Lemma 5.2 holds because m^{2t} = 0. Conclude that ρ is a Golod homomorphism.
The next result describes how to use a mapping cone to obtain a minimal resolution of the Q-module R/m^s if one already knows the minimal resolution of R.
Lemma 8.2. Let $(R, \mathfrak m, \boldsymbol k)$ be a compressed local Artinian ring of embedding dimension $e$ and top socle degree $s$, let $(Q, \mathfrak n, \boldsymbol k)$ be a regular local ring of embedding dimension $e$ with $R = Q/I$ for some ideal $I$ of $Q$, and let $c_s = \dim_{\boldsymbol k} \mathfrak m^s$. If $v(R) + 1 \le s$, then
$P^Q_{R/\mathfrak m^s}(z) = P^Q_R(z) + c_s z(1+z)^e - c_s z^e(1+z).$
Proof. Observe that the inclusion $\mathfrak m^s \subseteq R$ induces the following statements:
(8.2.1) $\operatorname{Tor}^Q_i(\mathfrak m^s, \boldsymbol k) \to \operatorname{Tor}^Q_i(R, \boldsymbol k)$ is zero for $0 \le i \le e-1$, and $\operatorname{Tor}^Q_e(\mathfrak m^s, \boldsymbol k) \to \operatorname{Tor}^Q_e(R, \boldsymbol k)$ is an injection.
Before establishing (8.2.1), we draw consequences from these statements. One combines (8.2.1) and the short exact sequence
(8.2.2) $0 \to \mathfrak m^s \to R \to R/\mathfrak m^s \to 0$.
It follows that $P^Q_{R/\mathfrak m^s}(z) = P^Q_R(z) + c_s z P^Q_{\boldsymbol k}(z) - c_s z^e - c_s z^{e+1}$, as claimed. Now we prove (8.2.1). The long exact sequence of homology that is associated to (8.2.2) ends with $0 \to \operatorname{Tor}^Q_e(\mathfrak m^s, \boldsymbol k) \to \operatorname{Tor}^Q_e(R, \boldsymbol k)$; hence, the lower line in (8.2.1) holds. On the other hand, if $0 \le i \le e-1$, then Lemma 6.1 guarantees that the inclusion $\mathfrak m^{\ell+1} \subseteq \mathfrak m^\ell$ induces the zero map $\operatorname{Tor}^Q_i(\mathfrak m^{\ell+1}, \boldsymbol k) \to \operatorname{Tor}^Q_i(\mathfrak m^\ell, \boldsymbol k)$ for all $\ell$ with $v(R) \le \ell$. The hypothesis ensures that $v(R) \le s-1$; hence, $\operatorname{Tor}^Q_i(\mathfrak m^s, \boldsymbol k) \to \operatorname{Tor}^Q_i(\mathfrak m^{s-1}, \boldsymbol k)$ is the zero map for $i < e$. The top line of (8.2.1) holds because the inclusion $\mathfrak m^s \subseteq R$ factors through the inclusion $\mathfrak m^s \subseteq \mathfrak m^{s-1}$.
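For the reader's convenience, the Betti-number bookkeeping hidden in "It follows that" above can be displayed. The following is a sketch using only (8.2.1), (8.2.2), the isomorphism $\mathfrak m^s \cong \boldsymbol k^{c_s}$, and the standard fact that $P^Q_{\boldsymbol k}(z) = (1+z)^e$ (the Koszul resolution over the regular ring $Q$):

```latex
% Write b_i(-) = \dim_k \operatorname{Tor}^Q_i(-,k).  The long exact sequence of
% (8.2.2) together with (8.2.1) breaks into short exact sequences and yields:
\begin{aligned}
b_i(R/\mathfrak{m}^s) &= b_i(R) + b_{i-1}(\mathfrak{m}^s), && 1 \le i \le e-1,\\
b_e(R/\mathfrak{m}^s) &= b_e(R) + b_{e-1}(\mathfrak{m}^s) - c_s,
  && \text{(injectivity in degree } e\text{)},\\
b_{e+1}(R/\mathfrak{m}^s) &= 0,
  && \text{(since } \operatorname{Tor}^Q_{e+1}(R,\boldsymbol{k}) = 0\text{)}.
\end{aligned}
```

Summing $b_i(R/\mathfrak m^s) z^i$ with $P^Q_{\mathfrak m^s}(z) = c_s(1+z)^e$ reproduces $P^Q_{R/\mathfrak m^s}(z) = P^Q_R(z) + c_s z(1+z)^e - c_s z^e - c_s z^{e+1}$, which is the formula of Lemma 8.2 since $c_s z^e + c_s z^{e+1} = c_s z^e(1+z)$.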
Corollary 8.3. In the interesting case of the main theorem, the ring $R/\mathfrak m^s$ is Golod.
Proof. We calculate both sides of (8.3.1), verify the equality, and thereby prove the result. Observe first that
(8.3.2) $P^R_{R/\mathfrak m^s}(z) = 1 + c_s z P^R_{\boldsymbol k}(z)$.
Indeed, the exact sequence $0 \to \mathfrak m^s \to R \to R/\mathfrak m^s \to 0$ is the beginning of the minimal resolution of $R/\mathfrak m^s$ by free $R$-modules, and $\mathfrak m^s$ is isomorphic to $c_s$ copies of $\boldsymbol k$.
The hypotheses of Theorem 7.1 are in effect; therefore, $P^R_{\boldsymbol k}(z)$ is given by the formula of Theorem 7.1. Apply (8.3.1) to conclude that $R/\mathfrak m^s$ is a Golod ring.
"year": 2018,
"sha1": "8d00fe4c6239506e83779b58ec48be7999eefb30",
"oa_license": null,
"oa_url": "https://www.sciencedirect.com/science/article/am/pii/S0021869318301911",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8d00fe4c6239506e83779b58ec48be7999eefb30",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The discovery of a massive supercluster at z=0.9 in the UKIDSS DXS
We analyse the first publicly released deep field of the UKIDSS Deep eXtragalactic Survey (DXS) to identify candidate galaxy over-densities at z ∼ 1 across ∼1 square degree in the ELAIS-N1 field. Using I−K, J−K and K−3.6µm colours we identify and spectroscopically follow up five candidate structures with Gemini/GMOS and confirm they are all true over-densities with between five and nineteen members each. Surprisingly, all five structures lie in a narrow redshift range at z = 0.89 ± 0.01, although they are spread across 30 Mpc on the sky. We also find a more distant over-density at z = 1.09 in one of the spectroscopic survey regions. These five over-dense regions lying in a narrow redshift range indicate the presence of a supercluster in this field, and by comparing with mock cluster catalogues from N-body simulations we discuss the likely properties of this structure. Overall, we show that the properties of this supercluster are similar to those of the well-studied Shapley and Hercules superclusters at lower redshift.
Figure 1. The J−K, I−K and K−3.6µm colour-magnitude plots for each cluster. The colour-magnitude diagrams represent a 1.5 arcminute region around each cluster. The dashed lines show the limits used to select candidate cluster members via the red sequence. We identify the spectroscopically confirmed z = 0.89 cluster members (filled squares), as well as the fore-/background galaxies (filled circles). In the DXS4 field we separately identify the members of the higher-redshift structure in this field (at z = 1.09). Note also that some spectra were taken for objects outside the colour-magnitude limits (marked as dashed boxes) to fully populate the GMOS masks.
Although the number density of clusters is the property most sensitive to the assumed cosmology, the full power of the comparisons to theoretical simulations has not yet been exploited.
The paucity of clusters known at z ≳ 1 stems from the limitations of current survey methods. For instance, optical colour-selection of clusters (Couch et al. 1991; Gladders & Yee 2000), which relies on isolating the 4000Å break in the spectral energy distributions of passive, red early-type galaxies (the dominant population in local clusters), becomes much less effective at z ≳ 0.7, where this feature falls in or beyond the i-band, in a region of declining sensitivity of silicon-based detectors. Recent progress has been made in identifying clusters using X-ray selection with Chandra and XMM-Newton (Romer et al. 2001; Rosati et al. 2002; Mullis et al. 2005), and these studies have identified galaxy clusters out to z ∼ 1.5 (Stanford et al. 2006; Bremer et al. 2006). However, the X-ray gas in these clusters appears more compact than for comparable systems at lower redshifts, and hence there are concerns that the accurate comparison of cluster properties with redshift required to constrain cosmological parameters could be subject to potential systematic effects related to the thermal history of the intra-cluster medium. Thus a complementary technique for cluster selection is required.
One solution to this problem is to extend the efficient optical colour-selection method beyond z ≳ 0.7 using near-infrared detectors (Hirst et al. 2006). This approach has been impressively demonstrated by Stanford et al. (2006), who find a z = 1.45 cluster in the NOAO-DW survey selected from optical-near-infrared colours. The commissioning of the new wide-field WFCAM camera on UKIRT provides the opportunity to significantly expand deep panoramic surveys in the near-infrared. The Deep eXtragalactic Survey (DXS) is a component of the UK Infrared Deep Sky Survey (UKIDSS; Warren et al. 2007) with the aim of imaging an area of 35 square degrees at high Galactic latitudes in the J- and K-band filters to depths of J_AB = 23.2 and K_AB = 22.7 respectively. The principal goals of the DXS include measuring the abundance of galaxy clusters at z ∼ 1-1.5, measuring galaxy clustering at z ∼ 1 and measuring the evolution of bias. This paper presents the first results from a spectroscopic follow-up of five high-redshift galaxy cluster candidates identified in the DXS Early Data Release (EDR; Dye et al. 2006).
The structure of this paper is as follows. In §2 we describe the data on which our analysis is based: a combination of optical, near- and mid-infrared imaging with Subaru, UKIRT and Spitzer, which is used to select cluster candidates, and the follow-up Gemini/GMOS spectroscopy. In §3 we present an analysis of the cluster properties. We discuss these and give our conclusions in §4. Unless otherwise stated, we assume a cosmology with Ω_m = 0.27, Ω_Λ = 0.73 and H0 = 70 km s−1 Mpc−1. All magnitudes are given in the AB system.
OBSERVATIONS AND REDUCTION
We utilise the UKIDSS-DXS EDR for the ELAIS-N1 region, which covers a contiguous area of 0.86° × 0.86° centred on α = 16:11:14.40, δ = +54:38:31.2 (J2000). The survey data products for this region reach 5-σ point source limits of J_AB = 22.8-23.0 and K_AB = 22.9-23.1. To complement these observations we exploit deep I-band imaging obtained with Suprime-Cam on the Subaru Telescope. These observations cover the entire ELAIS-N1/DXS field, are described in Sato et al. 2007 (in preparation), and reach a 5-σ point-source limit of I_AB = 26.2. As part of the Spitzer Wide-area InfraRed Extragalactic (SWIRE) survey (Lonsdale et al. 2003), the ELAIS-N1 region was also imaged in the IRAC (3.6, 4.5, 5.8 and 8.0µm) bands as well as at 24µm with MIPS. These catalogs are described in Surace et al. (2004).
Optical-Infrared matching
In order to efficiently select cluster candidates, accurate colours are required for galaxies in the coincident regions of the DXS, Subaru and SWIRE data. The DXS catalogue was constructed using SExtractor (Bertin & Arnouts 1996) with a detection threshold of 2σ in at least five pixels. Objects which lay in the halo or CCD bleed of a bright star were also removed before the final catalog was constructed. This catalogue was then matched to the optical catalogue, with the closest match within 1″ being used. During the first-pass cross-correlation the average offsets between the optical and near-infrared catalogues were ∆α = 0.33 ± 0.05″, ∆δ = −0.20 ± 0.04″ (i.e. the optical sources were offset to the south-east of the near-infrared sources, which are tied to FK5 through 2MASS stars; Dye et al. 2006). This systematic offset was removed from the optical catalog and the cross-correlation recalculated, resulting in an rms offset of ∼0.1″. Since accurate colours were required in order to select cluster candidates, we extracted 2″ aperture magnitudes from the optical and near-infrared catalogs. In both cases, the magnitude zero-points were calculated using 2″ photometry of (unsaturated) stars in the field.
The near-infrared and mid-infrared catalogs were cross-correlated in exactly the same way as above, with a systematic offset between the mid-infrared and near-infrared sources of ∆α = −0.30 ± 0.05″, ∆δ = 0.33 ± 0.04″, which again was removed before a second-pass cross-correlation was performed.
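The two-pass matching described above can be sketched as follows. This is an illustrative reconstruction assuming astropy, not the pipeline actually used; the input SkyCoord arrays are hypothetical.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def two_pass_match(nir, opt, radius=1.0 * u.arcsec):
    """Match each near-IR source to its closest optical counterpart within
    `radius`, remove the mean systematic offset, then rematch.
    `nir` and `opt` are SkyCoord arrays (hypothetical inputs)."""
    # First pass: closest counterpart within the search radius.
    idx, sep, _ = nir.match_to_catalog_sky(opt)
    good = sep < radius

    # Mean on-sky offset of the matched pairs (optical relative to near-IR).
    dra, ddec = nir[good].spherical_offsets_to(opt[idx[good]])
    dra_mean, ddec_mean = dra.mean(), ddec.mean()

    # Second pass: shift the optical astrometry onto the near-IR frame.
    opt_shifted = opt.spherical_offsets_by(-dra_mean, -ddec_mean)
    idx, sep, _ = nir.match_to_catalog_sky(opt_shifted)
    return idx, sep < radius
```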
Cluster Selection
As this was a pilot study, we chose to identify candidate high-redshift galaxy clusters in three ways. First, we searched for the sequence of passive red galaxies in high-redshift clusters by selecting galaxies from the photometric catalogue in the (J−K)-K and (I−K)-K colour-magnitude space. We first identified candidates using slices of ∆(J−K)_AB = 0.4 stepped between (J−K)_AB = 0-2.5. This selection includes a small correction for the tilt of the colour-magnitude relation for early-type galaxies of d(J−K)_AB/dK_AB = −0.025. Consecutive slices overlapped by 0.2 magnitudes to ensure that no sequence was omitted. Each position in the resulting spatial surface density plot for a colour slice was then tested for an over-density using consecutively larger apertures from 0.01 to 0.05 degrees (corresponding to approximately 250 kpc to 1 Mpc at z = 1), as sketched in the example below. If the over-density in the central aperture was ≥ 3σ above the background and the density decreased with increasing aperture radius, then the region was marked as a candidate cluster. A similar procedure was carried out using the (I−K)-K colour-magnitude space to refine the selection of cluster candidates. Independently, we identified cluster candidates by identifying peaks in the surface density in K_AB−3.6µm colour space (using an approximate colour cut of K_AB − 3.6µm > 0.3, which should be efficient at selecting ellipticals at z ∼ 1). Having defined these cluster candidates, we checked that each of them met the selection criterion recently used by van Breukelen et al. (2006) (which is based on a projected friends-of-friends algorithm). Using these three selection criteria we identified fifteen candidates, of which eight were identified using all three criteria. Five of the most promising eight cluster candidates (which showed the tightest colour-magnitude sequences and a clear over-density of red objects) were then targeted for spectroscopic follow-up. In Fig. 1 we show the colour-magnitude diagrams for a 1.5 arcminute region around each of the over-density peaks which were spectroscopically targeted (the dashed boxes in Fig. 1 show the colours used to select the cluster candidates).
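As an illustration of the slice-and-aperture search, the following sketch tests a single grid position for a ≥3σ central over-density with a density profile that declines with aperture radius. The array names, the Poisson noise model and the background estimate are our assumptions, not the authors' code.

```python
import numpy as np

APERTURES = np.array([0.01, 0.02, 0.03, 0.04, 0.05])  # deg (~250 kpc - 1 Mpc at z = 1)

def is_overdense(ra0, dec0, ra, dec, bg_density):
    """Test one position against galaxies (ra, dec) from one colour slice.
    `bg_density` is the mean surface density of the slice in gal/deg^2."""
    r = np.hypot((ra - ra0) * np.cos(np.radians(dec0)), dec - dec0)
    counts = np.array([np.count_nonzero(r < ap) for ap in APERTURES])
    densities = counts / (np.pi * APERTURES**2)

    expected = bg_density * np.pi * APERTURES[0]**2        # background counts
    excess = (counts[0] - expected) / np.sqrt(max(expected, 1.0))  # Poisson sigma
    return excess >= 3.0 and np.all(np.diff(densities) <= 0)
```

In the full search this test would be repeated over a grid of positions and over colour slices of width 0.4 mag stepped by 0.2 mag, with the tilt correction d(J−K)/dK = −0.025 applied before slicing.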
GMOS Spectroscopy
Spectroscopic follow-up observations of five candidate over-dense regions were taken with the Gemini Multi-Object Spectrograph (GMOS) on Gemini-North between 2006 May 23 and June 18 U.T. in queue mode. As our target clusters were expected to be at z ∼ 1, we placed a strong emphasis on good sky subtraction, to identify weak features in the presence of strong and structured sky emission.

Figure 3. The total redshift distribution for the five regions targeted in our spectroscopic follow-up. This clearly shows the strong overdensity in these regions at z = 0.9. Middle: The redshift histograms for each individual cluster with the mean and 1-σ scatter overplotted (using the method described in §3.2). The Bottom panel shows the redshift distribution for the whole sample on the same velocity scale. For reference, the bin size in the lower panels is 1000 km s−1 (in the rest frame of the cluster).
For this reason we employed the Nod & Shuffle mode of GMOS. In Nod & Shuffle, the object and background regions are observed alternately through the same regions of the CCD by nodding the telescope. In between each observation the charge is shuffled on the CCD by a number of rows corresponding to the centre-to-centre spacing of the sections into which each slit is divided. Each alternate block is masked off so that it receives no light from the sky but acts simply as an image store. The sequence of object and background exposures can be repeated as often as desired, and at the end of the sequence the CCD is read, incurring a read-noise penalty only once (see Glazebrook & Bland-Hawthorn 2001 for further details of this general approach). For each spectrum, the two spectral blocks are identified and subtracted to achieve Poisson-limited sky subtraction.
For our observations we micro-stepped the targets in the 3.2″-long slits by 1.5″ every 30 seconds. We used the OG515 filter in conjunction with the R400 grating and a central wavelength of 840 nm, which results in a wavelength coverage of ∼580-1100 nm. The spectral resolution in this configuration is λ/∆λ ∼ 1700 and the slit width was 1.0″. To counter the effects of bad pixels and the GMOS chip gaps, the observations were taken with two wavelength configurations, each comprising two 2.8-ks exposures at central wavelengths of 840 nm and 850 nm respectively. Each of the five masks was observed for a total of 3.2 hours in ≲0.7″ seeing and photometric conditions. In total 134 galaxies are included on these five masks, and we list the positions and photometric properties (IJK and the IRAC/MIPS bands) of these in Tables 2 & 3.
To reduce the data, we first identified charge traps from a series of dark exposures taken during the run and used these to mask bad pixels. We extracted the nod and shuffle regions from the data frames and then mosaiced the three GMOS CCDs. The frames were then flat-fielded, rectified, cleaned and wavelength calibrated using a sequence of Python routines (Kelson, priv. comm.). The final two-dimensional mosaic was generated by aligning and median-combining the reduced two-dimensional spectra, using a median with a 3-σ clip to remove any remaining cosmic rays or defects. For flux calibration, observations of BD+28d4211 were taken; however, no telluric standards were taken and so we have not attempted to correct for the A-band absorption at 7600Å, although this is unimportant for deriving redshifts in any of our spectra. While flux calibration and response correction are not necessary for redshift determination via cross-correlation, we perform these steps in order to present the spectra in Fig. 2.
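The final combination step, a median stack with a 3σ clip, can be sketched with numpy. This is a simplified stand-in for the actual routines referenced above, and it assumes the exposures have already been aligned.

```python
import numpy as np

def clipped_median_combine(frames, nsigma=3.0):
    """Median-combine a (n_exp, ny, nx) stack of aligned 2-D spectra,
    rejecting >3-sigma outliers (cosmic rays, remaining defects) per pixel."""
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)
    sigma = 1.4826 * mad + 1e-12          # robust per-pixel sigma from the MAD
    mask = np.abs(stack - med) > nsigma * sigma
    return np.nanmedian(np.where(mask, np.nan, stack), axis=0)
```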
Redshift determination and velocity dispersions
For redshift determination we first attempt to identify strong emission or absorption features in the spectra, including [Oii]λλ3726.2,3728.9 emission, the 4000Å break, Ca H&K absorption at λλ3933.44,3969.17, or the G-band at λ4304.4. From the sample of 134 galaxies with spectroscopic observations, 111 yielded secure redshifts, with only 23 unidentifiable, giving an 83% success rate. As expected for absorption-line spectroscopy, the non-detections can in large part be attributed to optical faintness: the median I_AB magnitude of the 111 galaxies with secure redshifts is 22.15 ± 0.2, whereas for the galaxies without redshifts the median magnitude was I_AB = 22.65 ± 0.4. The measured redshifts for all sources are listed in Tables 2 & 3.
Having identified an approximate redshift for a galaxy, we compute a robust velocity by cross-correlating each spectrum with an elliptical galaxy template spectrum. For the template we use solar metallicity, 1 Gyr burst models with ages of 3, 5 or 7 Gyr from Bruzual & Charlot (2003). The errors on the redshifts are determined from the shape of the cross-correlation peak and the noise associated with the spectrum and are typically in the range 30-150 km s−1. We present typical example spectra from each of the masks in Fig. 2 and report the errors on the redshifts of the individual candidate cluster members in Table 2. Figure 3 shows the redshift distribution for all galaxies in our sample. We see that the vast majority of the spectroscopic sources lie in a narrow redshift range at z ∼ 0.9. This indicates that most of the galaxies we have selected lie within the overdense regions we targeted and that these structures themselves appear to form a coherent structure across the whole survey region.

Table 1. Names, central positions and properties of the spectroscopic sample. n_slits and n_cl denote the number of spectroscopic slits on the mask and the number of confirmed cluster members respectively. σ is the velocity dispersion of the spectroscopic sample and σ′ is the velocity dispersion after removal of substructure. The values in [ ] denote the errors in the last decimal place.
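The core of the cross-correlation measurement above can be sketched as a grid search: on a uniform log10 λ grid, a redshift is a rigid shift of the template, so the normalized correlation coefficient peaks at the best redshift. The real measurement additionally derives errors from the peak shape and the spectrum noise; the array names here are hypothetical.

```python
import numpy as np

def xcorr_redshift(loglam, flux, tmpl_flux, z_grid):
    """Return the redshift maximizing the normalized cross-correlation of an
    observed spectrum with a rest-frame template on the same log10-lambda grid."""
    cc = np.empty(len(z_grid))
    for i, z in enumerate(z_grid):
        # Template evaluated at the rest wavelengths of the observed pixels.
        shifted = np.interp(loglam - np.log10(1.0 + z), loglam, tmpl_flux,
                            left=np.nan, right=np.nan)
        m = np.isfinite(shifted)
        a = flux[m] - flux[m].mean()
        b = shifted[m] - shifted[m].mean()
        cc[i] = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
    return z_grid[np.argmax(cc)], cc
```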
To calculate the membership of the structure in each field, we have followed the iterative method used by Lubin et al. (2002). Initially we estimated the central redshift for the overdensity, and selected all other galaxies within ∆z < ±0.06 in redshift space. We then calculated the biweight mean and scale of the velocity distribution (Beers et al. 1990), which correspond to the central velocity location, vc, and dispersion, σv, of the cluster. We used this to calculate the relative radial velocities in the rest frame: ∆v = c(z − zc)/(1 + zc). The original distribution was then revised: any galaxy that lies > 3σv away from vc, or has |∆v| > 3500 km s−1, was rejected from the sample and the statistics were re-calculated. The final solution is achieved when no more galaxies are removed by the iterative rejection. The results are presented in Table 1, with 1-σ errors on the cluster redshift and dispersion derived from 10³ bootstrap re-samples. We plot the redshift histograms for the structures in each field in Figure 3, along with a Gaussian curve showing the measured mean redshift and velocity dispersion.
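The iterative membership estimate maps directly onto astropy's biweight estimators. The sketch below follows the steps just described (±0.06 preselection, rejection beyond 3σv or 3500 km s−1, iterate to convergence); the bootstrap errors quoted in Table 1 would come from resampling the returned members.

```python
import numpy as np
from astropy.stats import biweight_location, biweight_scale

C_KMS = 299792.458

def cluster_membership(z, z_guess, dz=0.06, vcut=3500.0):
    """Iteratively clipped cluster membership following Lubin et al. (2002)."""
    member = np.abs(z - z_guess) < dz
    while True:
        zc = biweight_location(z[member])                  # central redshift
        sigma_v = C_KMS * biweight_scale(z[member]) / (1.0 + zc)
        dv = C_KMS * (z - zc) / (1.0 + zc)                 # rest-frame offsets
        keep = member & (np.abs(dv) <= np.minimum(3.0 * sigma_v, vcut))
        if keep.sum() == member.sum():                     # converged
            return keep, zc, sigma_v
        member = keep
```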
The most striking result from these histograms is the discovery that all five structures lie within 3000 km s−1 of each other, even though they are spread across nearly a degree on the sky (approximately 30 Mpc in projection). This strongly suggests that this field intercepts a "supercluster"-like structure at z = 0.9; we discuss the posterior likelihood of this in §4 and next discuss the properties of the individual structures.
Searching for Substructures
Figure 4. Colour-selected surface density map of the ELAIS-N1 region based on the I−K, J−K and K−3.6µm colours used to select galaxy clusters (the cluster candidate selection is described in §2). The image is smoothed with a Gaussian kernel with a FWHM of 60″ (420 kpc at z = 0.9). The large open circles represent the cluster candidate colour selection using J−K (large circles) and K−3.6µm (slightly smaller circles) colours. The dashed square box in the bottom right-hand corner shows the size of the GMOS field of view. For the five regions targeted in our GMOS observations we mark the individual galaxies which are known to be members, and in addition we plot those galaxies from the literature which lie between z = 0.870 and z = 0.915. The cluster candidates marked A-D have colour-magnitude sequences consistent with the z = 0.89 supercluster. It is clear that the supercluster potentially spans the whole field (the most prominent region is at 16:10:09, 54:25:00) and beyond. Panels DXS1-5: The spatial distribution of the galaxies within each of the five over-dense regions selected for spectroscopic study. We plot the positions of all of the galaxies which meet our colour selection (see §2.2) and identify those which are spectroscopically confirmed as cluster members or non-members. For the members, the sizes and colours of the symbols denote the rest-frame velocity offset with respect to the cluster redshift given in Table 1 (on a velocity scale from −2000 to +2000 km s−1). In addition, in the DXS4 field we identify the members of the background z = 1.09 structure.

We list the centres of the structures identified from our dynamical analysis in Table 1, along with the number of members, the mean redshift and the estimate of the velocity dispersion for each structure. Since the central positions and redshifts of the clusters are not well constrained, we define the central velocity as the median redshift in the cluster and determine the centre of the cluster from the peak in the cluster surface density plot (Fig. 4). The uncertainties on the velocity dispersions are derived from bootstrap resampling the observed sample of members. Measuring the velocity dispersions from clusters with ∼10 members is particularly difficult, and in the rest frame the cluster velocity dispersions are unusually high (∼1000 km s−1). We derive more secure velocity dispersions by first investigating how relaxed each of the structures is, and construct position-velocity diagrams (similar to Dressler-Shectman plots; Dressler & Shectman 1988). In Figure 4 we mark the positions of all of the galaxies for which a radial velocity measurement was obtained (we note that the flat distribution of galaxies in this plot reflects the spatial sampling by GMOS). Together with Fig. 3, this shows that the only structure with a discernible non-Gaussian velocity distribution is DXS5, where four galaxies form a higher-velocity substructure. As noted above, the small numbers of members in each structure unfortunately compromise the conclusions we can draw from this analysis. However, we can attempt to derive average velocity dispersions from the whole sample. We de-redshift and stack the five clusters (according to their central redshifts) and measure a velocity dispersion of 900 ± 200 km s−1, which may remain artificially high due to substructure. In order to better define the cluster membership via a simple method we use both the velocity and spatial information. This technique was first used in the CNOC surveys (Carlberg et al. 1996), is described in detail in Carlberg et al. (1997), and is briefly described here. Firstly, the mean redshift of the cluster is normalised to the observed velocity dispersion (σz). This is plotted against the projected radius away from the centre of the cluster in units of r200 (Fig. 5). The mass model of Carlberg et al. (1997) can then be used to mark the 3σ and 6σ limits which are used to differentiate between cluster members and near-field galaxies (or galaxies which reside in filaments/structures surrounding the clusters). In this analysis r200 is calculated under the assumption that a cluster is a singular isothermal sphere and is defined to be the clustocentric radius at which the mean interior density is 200 times the critical density at the redshift of the cluster. We calculate r200 as $r_{200} = \sqrt{3}\,\sigma_z / (10\,H(z))$, where $H^2(z) = H_0^2 (1+z)^2 (1+\Omega_0 z)$ (see Carlberg et al. 1997 for a detailed discussion). Restricting our analysis to the galaxies which lie within the 3σ limits, we recalculate the velocity dispersion for the clusters as an ensemble and derive a velocity dispersion of 540 ± 100 km s−1.
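For reference, the r200 scaling just quoted is a one-liner. The sketch below takes Ω0 as the matter density adopted in §1 (an assumption on our part) and reproduces, for example, r200 ≈ 0.6 Mpc for the ensemble dispersion of 540 km s−1 at z = 0.89.

```python
import numpy as np

H0, OMEGA0 = 70.0, 0.27   # km/s/Mpc and matter density, as adopted in Section 1

def r200_mpc(sigma_kms, z):
    """r200 = sqrt(3) * sigma / (10 H(z)), with
    H(z)^2 = H0^2 (1+z)^2 (1 + Omega0 z)  (Carlberg et al. 1997)."""
    hz = H0 * np.sqrt((1.0 + z) ** 2 * (1.0 + OMEGA0 * z))
    return np.sqrt(3.0) * sigma_kms / (10.0 * hz)

print(r200_mpc(540.0, 0.89))   # ~0.6 Mpc for the ensemble dispersion above
```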
Given the limited number of cluster members, it is not practical to establish accurate limits on the fraction of velocity substructures. However, using the analogy with the Shapley supercluster which has several multi-component velocity clusters (e.g. A 1736 and A 3528), we can state that the observed fraction of 20% in the five DXS clusters is consistent with other superclusters at low redshift (although this clearly suffers from small number statistics).
Spectral Classification
To investigate the spectral properties of the cluster galaxies, we spectroscopically classify the galaxies in our sample according to the classification of Dressler et al. (1999). We find an average spectral mix of: k, 29 ± 7% (17 ± 4); k+a, 24 ± 8% (14 ± 5); e(a), 17 ± 3% (10 ± 2); e(c), 28 ± 7% (16 ± 4); e(b), 2 ± 2% (1 ± 1). The structures in DXS3 (and to a lesser extent DXS5) show excess numbers of active galaxies (e(c) and e(a)) compared to the other fields, but these excesses are only marginally significant. These spectral mixes are similar to previous spectroscopic studies of similarly high-redshift galaxy cluster members (e.g. Jørgensen et al. 2005), as well as to those seen in local (z ∼ 0.1) galaxy clusters (e.g. Pimbblet et al. 2006), which have found that the mix of k and k+a galaxies makes up 50-70% of the population whilst star-forming galaxies contribute ∼20%, with the remainder having properties consistent with e(a) galaxies. The most significant difference is that the DXS clusters have up to 20% of galaxies with e(a) signatures, which is slightly higher than local clusters or other high-redshift clusters.
We note that there is also a large dispersion in the fraction of [Oii] detections among the cluster members of each of the six clusters. In total there are 56 cluster members, of which 27 have significant [Oii] emission (48 ± 9% with an equivalent width above 3Å), which is comparable to that found in similar z > 0.6 clusters (Finn et al. 2005; Poggianti et al. 2006). However, this global fraction hides a wide range from cluster to cluster (between 20 and 75% for DXS4 and DXS3 respectively). While the median I−K colours of members with or without [Oii] are indistinguishable (both I−K = 2.41 ± 0.28), this may be because the I-band data cover rest-frame emission redward of the 4000Å break.
In terms of the 24µm detections, we note that of the six galaxies which have 24µm counterparts in the clusters, two have spectral properties consistent with passive galaxies (k+a), three are strongly star-forming (with strong [Oii], Hβ and [Oiii] emission lines) and one galaxy (DXS4-11) shows high-excitation lines (such as [Nev]λλ3346,3426 and [Neiii]λλ3343,3868.7) which unambiguously identify this source as a highly obscured AGN, albeit one with a low infrared luminosity (≲5 × 10^11 L⊙ from the lack of a 70µm detection).
DISCUSSION
The most striking result from our survey is the discovery of five clusters at z = 0.89 spread across 30 Mpc in projection. The velocity dispersions and physical sizes of each of these individual clusters bear a number of similarities to well-studied local superclusters, and therefore lead us to interpret the results in the context of a "supercluster" at z = 0.89.
How much of the supercluster have we identified?
To investigate how much of the supercluster remains unidentified (since we spectroscopically targeted only the first five cluster candidates), we colour-cut the ELAIS-N1 catalog and construct a surface density plot to look for other potential supercluster members. Using the colour cuts K = 18.8-21.3; (J − K) = 0.7-1.35; (I − K) = 1.95-2.95 and K − 3.6µm > 0.3 (see Fig. 1), we construct a colour-selected density map of the ELAIS-N1 region and present the results in Figure 4. We also overlay the spectroscopically identified cluster members from DXS1-5 and objects from previous studies that have spectroscopically identified z ∼ 0.90 galaxies in this field (Scott et al. 2000; Chapman et al. 2002; Manners et al. 2003). This surface density map identifies all the candidate clusters we selected. Of the seven cluster candidates we did not observe, four (marked A-D in Fig. 4) have colour sequences consistent with a cluster at z = 0.90 (the other candidates have colour-magnitude sequences which are likely lower redshift). We also note that since all five supercluster members are close to the edge of the WFCAM field, we may have only partially sampled the full supercluster. Although we currently do not have the near-infrared imaging outside the 0.8 × 0.8 degree field to efficiently select other supercluster candidates, we estimate that we may have missed up to 50% of the full structure. Therefore, there are potentially between 7 and 18 rich clusters in the supercluster on a scale of 50-60 Mpc. This is consistent with local superclusters such as Shapley and Hercules (which have seven and nine Abell class two or above clusters in a redshift range equivalent to 4000 km s−1 across ∼60 Mpc; e.g. Barmby & Huchra 1998). Thus, the observations presented here highlight the need to study fields on scales of several degrees to best characterise such large structures even at z ∼ 1.

Figure 5. The mass model of Carlberg et al. (1997) applied to our cluster sample. The solid curve denotes the 3σ contour of the mass model; the dashed curve is the 6σ contour. The filled symbols denote galaxies which lie within the 3σ contours whilst the open symbols denote galaxies which lie outside them. This test shows that significant substructure is evident in DXS5, with outliers also evident in DXS1, 2 & 3. Restricting our analysis to the galaxies within the central 3σ limits, we derive a velocity dispersion for the whole sample of 540 ± 100 km s−1.
How rare are superclusters?
The discovery of a massive supercluster in the first of nearly fifty DXS survey fields is surprising given the low space density of superclusters below z = 0.1. Cluster surveys at higher redshift have also identified superclusters similar to the structure presented here (e.g. Cl 1604+4321 at z = 0.90; Gal & Lubin 2004), and quasar surveys have revealed massive overdensities at still higher redshift (Graham & Dey 1996). Therefore, this system is not unique, but it is important to estimate how likely it is that we should have identified one in the first UKIDSS-DXS field.
Using the statistics from Tully (1986, 1988) for the local space density of superclusters, we estimate that there are five superclusters over the high Galactic latitude sky within z = 0.1. This corresponds to one supercluster per 0.04 Gpc³. The total volume sampled in the complete DXS survey between redshifts 0.7 and 1.4 (the farthest we can efficiently select galaxy clusters from the DXS and reliably recover redshifts for with optical spectroscopy) will be 0.27 Gpc³ (comoving). Scaling from the local space density of superclusters, we expect a total of seven superclusters in the complete DXS survey. To find one such system in the first field from the DXS is fortunate (∼15% probability), but not so unlikely as to make us question the validity of our interpretation of this system as a rich supercluster.
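The volume scaling in this estimate is easy to reproduce with astropy's cosmology utilities; a sketch is given below, where the 0.04 Gpc³ per supercluster figure is the Tully-based density quoted above.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)
sky_fraction = 35.0 / 41253.0                      # DXS area / whole sky (deg^2)

# Comoving volume of the 0.7 < z < 1.4 shell covered by the full DXS.
shell = cosmo.comoving_volume(1.4) - cosmo.comoving_volume(0.7)
vol = (sky_fraction * shell).to(u.Gpc**3)          # roughly 0.2-0.3 Gpc^3

n_expected = vol / (0.04 * u.Gpc**3)               # one supercluster per 0.04 Gpc^3
print(vol, n_expected.decompose())                 # of order seven superclusters
```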
To estimate the potential masses of this structure and the clusters within it, we compare the observed space density of clusters in the ELAIS-N1 region to the expected space density of dark matter halos from the N-body simulations of Reed et al. (2005). Within the survey volume covered by this DXS field, the halo mass functions from Reed et al. (2005) suggest that at z = 0.9 there should be ∼60, 25, 2 and 0.05 halos of mass log(M/M⊙) = 13, 13.5, 14.0 and 15.0 respectively within our survey volume of 3×10⁶ Mpc³. Thus, the number density of massive halos found in the region we have surveyed is consistent with halos of mass ∼10^{13.5-14.0} M⊙.
However, these estimates disregard the clustering of clusters, and so we also exploit the Hubble Volume cluster catalog (Evrard et al. 2002), which uses giga-particle N-body simulations to study galaxy cluster populations in CDM simulations out to z = 1.4. We exploit the NO sky survey catalog of the ΛCDM cluster simulation, which covers a redshift range from z = 0 to z = 1.5 in a solid angle of π/2 steradians. We randomly sample this catalog in volumes comparable to the DXS ELAIS-N1 survey, and find that the probability of finding five clusters with velocity dispersions ≳450 km s−1 between z = 0.7 and z = 1.4 is ∼75%. However, the chance of all five clusters lying within a 2000 km s−1 slice is only ∼20%. When this criterion is met, we note that the median mass of each cluster is ∼10^{13.5-14.0} M⊙ (crudely suggesting a total mass for the five clusters of > 10^{14.7} M⊙).
CONCLUSIONS
We present the results of the first spectroscopic follow-up of candidate high-redshift clusters selected from the UKIDSS DXS. This pilot programme was designed to test the feasibility of identifying high-redshift (z = 0.8-1.4) galaxy clusters in the first DXS survey field through an extension of the red-sequence method which efficiently selects galaxy clusters at z ≲ 0.7 (Gladders & Yee 2000). The main results are summarised as follows: (i) Using (J−K), (I−K) and (K−3.6µm) colours we extend the efficient red-sequence cluster detection method developed by Gladders & Yee (2000) and identify fifteen cluster candidates in the 0.8 square degree DXS ELAIS-N1 field. Five cluster candidates were targeted with GMOS spectroscopy, all of which yielded significant overdensities between z = 0.88 and z = 1.1 with between five and nineteen members. The 100% success rate of this cluster search confirms that the colour selection is efficient at selecting the highest redshift galaxy clusters.
(ii) The most striking result from our observations is that five of the six galaxy clusters lie within 3000 km s−1 of each other across 30 Mpc in projection. This overdensity is most naturally explained by the presence of a supercluster at z = 0.9, at least part of which our observations intersect.
(iv) We find that the mix of k and k+a galaxies makes up 50-70% of the population, whilst star-forming galaxies comprise 20%, with the remaining galaxies having properties consistent with e(a) signatures. The spectroscopic mix of galaxies is similar to previous studies of both low- (z ∼ 0.1) and high- (z ∼ 0.7) redshift clusters. We also derive redshifts for six cluster 24µm sources, two of which are passive (k+a), three strongly star-forming (strong [Oii], [Oiii] and Hβ) and one with high-excitation lines indicating an AGN.
(v) By comparing the number of clusters in our survey volume with the number density of massive halos from N-body simulations, we suggest that each of these clusters will have a mass of order 10^{13.5-14.0} M⊙. Moreover, we also compare the cluster abundance with predictions from giga-particle N-body simulations to estimate the probability of finding such a structure as ∼25% in the current cosmological paradigm.
Whilst simulations and mock cluster catalogs provide extremely useful constraints on the likelihood of finding such a structure and crude estimates of the mass, the ultimate goal of the complete DXS survey area (35 square degrees) is to measure cluster abundances between z ∼ 0.8-1.4. In order to constrain the cluster abundance, a combination of follow-up spectroscopy and sophisticated mock catalogs will be required in order to accurately constrain the masses of the clusters from galaxy velocity dispersions (e.g. Eke et al. 2006). If reliable halo masses can be derived, then for a fixed set of cosmological parameters (Ωm, ΩΛ), the resulting cluster abundance will constrain σ8 (the rms mass fluctuation amplitude in spheres of 8 h−1 Mpc, which measures the normalisation of the mass power spectrum). For clusters with masses ≳10^{14.5} M⊙, the cluster abundance is expected to rise by factors of 6 and 20 between σ8 = 0.7 and 0.8, and between σ8 = 0.7 and 0.9, respectively. Thus, once complete, the DXS has the opportunity to use galaxy cluster abundances as a precision tool for cosmology and we look forward to undertaking this task in the future.
The discovery of a supercluster in the first DXS field highlights the importance of the combination of depth and area in surveys of the z = 0.5-2 Universe. Surveys of one WFCAM field or less are unlikely to contain a structure as rare as a supercluster or, even if they do, will only cover part of it. As such, surveys of this size are still affected by cosmic variance, and any attempt to measure cosmological parameters is severely compromised (e.g. Retzlaff et al. 1998). Only contiguous surveys of several degrees (such as the UKIDSS/DXS, VISTA/VIDEO and VISTA/VIKING) will have sufficient area and depth coverage to identify large structures in statistically significant numbers whilst reliably accounting for the effects of cosmic variance (see Borgani 2006 for a review).
Indeed, the implications of how cosmic variance affects the analysis of cluster surveys of relatively small volumes and/or many non-contiguous areas are subtle but in an era of "precision cosmology" must be considered (e.g. Schuecker et al. 2001).
AMS and CJS acknowledge PPARC Fellowships; ACE and IRS acknowledge support from the Royal Society. We gratefully acknowledge the UKIDSS DXS and the United Kingdom Infrared Telescope, which is operated by the JAC on behalf of PPARC. The GMOS observations were taken as part of programme GN-2006A-Q-18 and are based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the NSF (United States), PPARC (United Kingdom), the NRC (Canada), CONICYT (Chile), the ARC (Australia), CNPq (Brazil) and CONICET (Argentina). This paper is also partially based on data collected at the Subaru Telescope, which is operated by the NAO of Japan, as well as on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
"year": 2007,
"sha1": "8ce34b428f773e139ab77a0a835531df6ee3ed1b",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/379/4/1343/17317669/mnras0379-1343.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8ce34b428f773e139ab77a0a835531df6ee3ed1b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A test of the social withdrawal syndrome hypothesis of bulimia nervosa
BACKGROUND The study examined the social withdrawal syndrome (SWS) hypothesis of bulimia nervosa (BN). According to the hypothesis, eating disorders such as BN are associated with a coherent set of social withdrawal cognitions, affect, and behavior. PARTICIPANTS AND PROCEDURE Eighty-eight young female adults completed a standardized measure of bulimic symptoms and measures of social withdrawal (affective withdrawal, trust beliefs in close others, and disclosure). Participants were engaged in a laboratory-based peer interaction which yielded the SWS measure of perceived lack of social connectiveness. RESULTS Bulimic symptoms were associated with each measure of social withdrawal. Structural equation modeling analysis confirmed that those measures contributed to a coherent latent factor which was associated with bulimic symptoms. CONCLUSIONS The findings supported the social withdrawal syndrome hypothesis of BN and have implications for the detection and treatment of eating disorders.
Background
Bulimia nervosa (BN) is a type of eating psychopathology that profoundly affects physical and mental health, primarily for women (Mitchell et al., 2014; Keel & Forney, 2013). The lifetime prevalence of BN in women is 1.8% (Galmiche et al., 2019), with full-blown eating disorders manifested between 18 and 21 years of age (Hudson et al., 2007). The prevalence of BN and other eating disorders among college/university women is a major source of concern for clinical psychologists and educators (NEDA, 2013). The current research therefore examined BN in women, specifically women attending college/university. The purpose of the current research was to test the social withdrawal syndrome (SWS) hypothesis of BN, which posits that a coherent set of social withdrawal variables contribute to and maintain BN (Rotenberg et al., 2013). SWS promotes BN because it undermines women's dietary restraint, their receipt of social support from close relationships, and their receipt of clinical treatment (Rotenberg et al., 2013; Rotenberg & Sangha, 2015). Although supportive of the SWS hypothesis, the existing research has limitations and omissions, which were addressed by the current research. The research examined bulimic symptoms as a marker of BN.
Support for the SWS hypothesis of BN
The SWS includes components of low trust beliefs in others, high loneliness, and low disclosure to others. In support of the SWS hypothesis, research has shown that bulimic symptoms are associated with loneliness, low trust beliefs in others and low disclosure to others (Rotenberg et al., 2013, 2017). The research further shows that trust beliefs in others negatively predict changes in bulimic symptoms during adolescence and thus are a probable cause of those symptoms. Also, the longitudinal relationship was mediated by loneliness, demonstrating that loneliness is responsible for the relationship between low trust beliefs and bulimic symptoms (Rotenberg & Sangha, 2015).
The current research was designed to redress the following three limitations or omissions in the research literature. First, research has not specifically examined whether bulimic symptoms are associated with affective withdrawal, as predicted by the SWS hypothesis. The research shows that loneliness is associated with bulimic symptoms, but loneliness has been assessed as a cognitive construct (i.e., dissatisfaction with the quality or quantity of relationships) rather than as affect per se. Researchers have found that bulimic symptoms are associated with an array of negative emotions and corresponding emotional dysregulation (Lavender et al., 2015), but the research has not examined affective withdrawal separately.
According to the SWS hypothesis, affective withdrawal should be associated with bulimic symptoms: affective withdrawal promotes BN because it undermines women's dietary restraint, which causes a vicious cycle of food consumption and dieting behaviors symptomatic of this type of eating disorder.
Second, according to the SWS hypothesis, women with BN and elevated bulimic symptoms should hold a dysfunctional relationship schema (see Baldwin, 1992) in which they perceive that they lack social connectiveness with others. Research confirms that women with BN and elevated bulimic symptoms show a range of relationship problems. They demonstrate heightened social conflict, social criticism, self-criticism, avoidant attachment, fear of intimacy, low self-disclosure, and social incompetence (Evans & Wertheim, 2002; Grisset & Norvell, 1992; Pruitt et al., 1992; Reiss & Johnson-Sabine, 1995; Steiger et al., 1999; Tasca & Balfour, 2014). However, the findings do not show whether women with BN or elevated bulimic symptoms hold a dysfunctional relationship schema comprising a perceived lack of social connectiveness. This hypothesis was tested in the current research by engaging women in a lab-based peer interaction situation. This method permitted control over a range of extraneous factors (e.g., social reactions by others) so that women's perceived lack of connectiveness with others could be accurately assessed. According to the SWS hypothesis, this dysfunctional schema undermines satisfactory peer friendships. This detracts from the support women could receive from peers for coping with the eating disorder and other social problems.
Third, a test of the SWS hypothesis of BN requires that the complete set of measures of social withdrawal function as a coherent (latent) factor which is associated with bulimic symptoms. To date, research has shown only that there are associations between measures of social withdrawal and that those are associated with bulimic symptoms (Rotenberg et al., 2013). This was redressed by the structural equation modeling statistical strategy, which examined the latent measure of social withdrawal underlying all measures and tested the relationship between that latent measure and bulimic symptoms.
Overview of the current study and hypotheses
Women completed standardized measures of trust beliefs in close others, disclosure, and affective withdrawal. They were engaged in a lab-based interaction with a same-gender peer. Their perceived lack of social connectiveness was assessed by their perceptions of that interaction.
It was hypothesized that:
1. There would be associations between the three measures of social withdrawal (hypothesis 1).
2. Bulimic symptoms would be associated with affective withdrawal and lack of social connectiveness, and negatively associated with trust beliefs in close others and disclosure (hypothesis 2).
3. As a structural equation modeling test of the SWS hypothesis model, there would be a coherent latent factor with paths to the four social withdrawal measures, and a path between that latent factor and bulimic symptoms (hypothesis 3).
Participants and procedure

Participants
The participants were 88 female undergraduates (M_age = 21 years 6 months, SD = 5 years 3 months, range 18 to 39 years) enrolled in a modest-sized university in the UK. They were solicited by advertisements on campus for an investigation of the factors affecting students getting acquainted. They were offered the potential to win a modest lottery prize for participating. Research of this type is ongoing on the university campus and typically solicits a representative sample of the university population. The university attended by the participants was composed of 75% White students and 25% students from other racial backgrounds, comprising 16% Asian, 5% Black and 4% Mixed Race.
The study was approved by the appropriate institutional ethics committee, and it adhered to American Psychological Association Ethics Guidelines.
An insight into the BN quality of the current sample is provided by a comparison of their bulimic symptoms to those of clinical and other nonclinical samples. Williams et al. (1994) reported that women who were clinically diagnosed with BN had Ms of 36.75 (SD = 7.70) and 34.54 (SD = 9.80) on the BDC and BDB subscales, respectively. They also tested a group of women (the control group) without BN or other eating disorders, who had Ms of 5.53 (SD = 8.80) and 3.79 (SD = 6.10) on the BDC and BDB subscales, respectively. The current sample had Ms of 10.48 (SD = 9.34) and 10.24 (SD = 7.58) on the BDC and BDB subscales, respectively. The women in the current study thus had bulimic symptom scores approximately twice those of the control group, approaching the scores of women with BN. The current sample may be regarded as occupying the upper range of bulimic symptoms, with some participants disposed to BN.
Trust beliefs in close others. The Generalized Trust Beliefs Scale-Late Adolescence (GTBS-LA; Randall et al., 2010) assesses late adolescents' trust beliefs in four close others (mother, father, romantic partner, and peer). The GTBS-LA has shown acceptable internal consistency, α > .80 (Randall et al., 2010), and the expected factor structure (Rotenberg et al., 2013). In the current study, the GTBS-LA showed acceptable internal consistency, α = .82. The items were summed (and averaged) to construct a scale. Higher scores denoted greater trust beliefs in close others.
Affective withdrawal. The 16-item UWIST Mood Adjective Checklist assessed emotions on 5-point Likert ratings (Matthews et al., 1990). The UWIST items in this study were subjected to a principal components analysis that yielded 5 factors accounting for 64% of the variance. The second, "affective withdrawal", component had an eigenvalue of 2.50 and accounted for 15.6% of the variance. This affective withdrawal factor had high loadings on the social withdrawal emotions of loneliness (.61), shyness (.60), sadness (.73), upset (.66), and nervousness (.74). The items were summed (and averaged) to construct a scale in which higher scores denoted greater affective withdrawal.
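The component extraction described here amounts to an eigendecomposition of the inter-item correlation matrix. A minimal sketch, with a hypothetical 16-column item matrix `X`, is:

```python
import numpy as np

def pca_loadings(X, n_components=5):
    """Principal components of standardized items: returns item loadings and
    the proportion of variance per component (cf. 15.6% for component 2)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]                 # largest components first
    eigval, eigvec = eigval[order], eigvec[:, order]
    loadings = eigvec * np.sqrt(eigval)              # item-component correlations
    return loadings[:, :n_components], (eigval / eigval.sum())[:n_components]
```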
Disclosure. The Opener Scale (Miller et al., 1983) uses a 5-point Likert scale to assess the willingness to disclose 7 intimate topics to a same-sex friend (e.g., my deepest feelings). In the current study, the scale demonstrated acceptable internal consistency, α = .82. The items were summed (and averaged) to yield a disclosure scale. Higher scores denoted greater disclosure.
Social connectiveness. The 8-item self-report scale involves 7-point Likert ratings of the quality of a relationship (Bernieri et al., 1996; Rotenberg et al., 2010). The items were: co-operative (reverse scored), unsatisfying, cold, awkward, engrossing (reverse scored), unfocused, unfriendly, and dull. The items were summed and averaged to construct the lack of social connectiveness (LSC) scale. The LSC scale showed acceptable internal consistency, α = .78, with higher scores denoting greater perceived lack of social connectiveness. The scale distribution was skewed, and it was subjected to a log10 transformation to normalize its distribution.
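The skew check and transformation for the LSC scale are a two-liner; a sketch, with a hypothetical item array `lsc_items`:

```python
import numpy as np
from scipy.stats import skew

lsc = lsc_items.mean(axis=1)        # hypothetical (n_participants, 8) item array
print(skew(lsc))                    # positive skew motivates the transformation
lsc_log10 = np.log10(lsc)           # normalized scores used in later analyses
```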
Procedure

The research was carried out pre-COVID, in 2016. Each participant was individually administered by an experimenter the standardized scales of bulimic symptoms, trust beliefs in close others, and disclosure. She was then engaged in a conversation with a female student (a confederate) for the expressed purpose of "getting acquainted with her". The participant selected a topic from the disclosure scale, providing a disclosure on that topic. The partner did the same and chose to disclose the likes/dislikes topic (a middle-ranked intimacy topic). The experimenter then stated that the study had to end because of shortness of time. The participant was asked to complete ratings of her experiences of the interaction, which included the LSC scale. Preliminary analyses showed that the pattern of findings was the same for each of the two conversation partners.
Results

Correlations between the measures
The correlations (with Ms and SDs) are shown in Table 1. The hypothesized associations between the measures of social withdrawal were found (hypothesis 1). Affective withdrawal was correlated with lack of social connectedness. Trust beliefs in close others were correlated with disclosure. Trust beliefs in close others and disclosure were negatively correlated with affective withdrawal and lack of social connectiveness. As hypothesized (hypothesis 2), bulimic symptoms were: (a) correlated with affective withdrawal and lack of social connectiveness, and (b) negatively correlated with trust beliefs in close others and disclosure.
Structural equation modeling analysis
A structural equation modeling (SEM) analysis tested the adequacy of the SWS model (shown in Figure 1). It yielded χ²(4) = 4.08, p = .400, normed fit index (NFI) = .94, comparative fit index (CFI) = 1.00, and a root mean square error of approximation (RMSEA) = .015. There was one covariance between two error terms (designated as e's in Figure 1). There are disturbances as estimates of error for both bulimic symptoms and the social withdrawal syndrome (designated as d's in Figure 1). All the paths attained significance at p < .05. The model was a good fit to the data, yielding a non-significant χ², NFI and CFI > .90, and RMSEA < .060 (Hu & Bentler, 1999). Support was found for the hypothesized model (hypothesis 3). As expected, the latent social withdrawal syndrome factor had paths: (a) to the affective withdrawal and lack of social connectiveness measures, and (b) negative paths to trust beliefs in close others and disclosure. Also, there was a path from the social withdrawal syndrome factor to bulimic symptoms.
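For readers who want to reproduce this kind of model, the specification translates naturally into lavaan-style syntax. Below is a sketch using the Python semopy package; this is an assumption on our part (the published analysis may have used different software), and the column names are hypothetical.

```python
import pandas as pd
import semopy

MODEL = """
SWS =~ affective_withdrawal + lack_connect + trust_close + disclosure
bulimic_symptoms ~ SWS
"""

df = pd.read_csv("measures.csv")      # hypothetical file: one column per measure
model = semopy.Model(MODEL)
model.fit(df)
print(model.inspect())                # loadings and the SWS -> bulimia path
print(semopy.calc_stats(model))       # chi-square, CFI, NFI, RMSEA, etc.
```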
Discussion
The study yielded support for all three hypotheses. As expected, there were associations between the measures of social withdrawal. Furthermore, the measures of social withdrawal were associated individually with bulimic symptoms. This corroborated the findings that bulimic symptoms are negatively associated with trust beliefs in close others and disclosure (Rotenberg et al., 2013; Rotenberg & Sangha, 2015). The findings further showed that bulimic symptoms were associated with affective withdrawal and lack of social connectiveness. The structural equation analysis yielded support for the social withdrawal syndrome model. There was a coherent latent social withdrawal factor with paths to all four social withdrawal measures and a path between it and bulimic symptoms.

Figure 1. Structural equation modeling analysis of the social withdrawal syndrome model.
The observed association between bulimic symptoms and affective withdrawal is consistent with research showing that an array of negative emotions and corresponding emotional dysregulation are associated with BN and elevated bulimic symptoms (e.g., Lavender et al., 2015). The current findings support the conclusion that a specific form of negative affect, affective withdrawal, plays a separate role in BN. Specifically, this type of affect undermines dietary restraint, which contributes to the vicious cyclic pattern of food consumption and dieting behaviors symptomatic of this type of eating disorder.
The current findings provide further insights into the psychosocial problems of women who have elevated bulimic symptoms and are at risk of BN. Their perception that they lack social connectiveness would undermine the establishment of satisfying peer relationships. As a consequence, the women would be unlikely to solicit social support from peer friends for assistance with their eating disorder and other psychosocial problems. The unwillingness to disclose personal information to close others, including eating behavior (accompanied by other aspects of social withdrawal), would undermine the detection of their eating disorder and the likelihood that they would receive clinical treatment (see Rotenberg et al., 2013, 2017). Specifically, they would be unlikely to reveal their BN cognitions and behavior to others and thus receive clinical treatment for them.
Based on the current findings, it would be worthwhile to include measures of social withdrawal in programs for detecting eating disorders in women (e.g., Smink et al., 2012), particularly those in college/university (NEDA, 2013). The combination of elevated bulimic symptoms and elevated social withdrawal measures would identify women who are at highest risk. They would be prone to BN but would be unlikely to: (a) establish close relationships which would provide social support for coping with the eating disorder and social problems; (b) disclose their eating problems to others; and (c) because of the latter, receive clinical treatment. These women would comprise those with "hidden" eating disorders who require screening in order to be identified with the disorder and receive clinical treatment for it.
The current research is limited because it is cross-sectional in design. In future, longitudinal research should be carried out to examine whether a latent social withdrawal syndrome factor is a probable cause of bulimic symptoms. Furthermore, researchers should test the SWS hypothesis for men, because they also experience BN, although less frequently than women. Also, the research should examine whether women who have been clinically diagnosed with BN show the pattern indicative of the SWS, relative to women who do not have any evidence of BN or other eating disorders. Finally, future researchers could examine the adequacy of the SWS hypothesis to account for other eating disorders, such as anorexia nervosa.
Table 1. Correlations between the measures (with Ms and SDs).
"year": 2023,
"sha1": "35a20a92e2e2f6fb059ea4beaeb12953bd94fa6b",
"oa_license": "CCBYNCSA",
"oa_url": "https://hpr.termedia.pl/pdf-161657-88680?filename=A%20test%20of%20the%20social.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "257a99b71f9f6f43ca8138ed18b587aa167b84b1",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Optimized gold nanoshell ensembles for biomedical applications
We theoretically study the properties of the optimal size distribution in an ensemble of hollow gold nanoshells (HGNs) that exhibits the best performance in in vivo biomedical applications. For the first time, to the best of our knowledge, we analyze the dependence of the optimal geometric means of the nanoshells' thicknesses and core radii on the excitation wavelength and the type of human tissue, while assuming a lognormal fit to the size distribution of a real HGN ensemble. Regardless of the tissue type, short-wavelength near-infrared lasers are found to be the most effective in both absorption- and scattering-based applications. We derive approximate analytical expressions enabling one to readily estimate the parameters of the optimal distribution for which an HGN ensemble exhibits the maximum efficiency of absorption or scattering inside a human tissue irradiated by a near-infrared laser.
Background
The biocompatibility of gold nanoparticles, along with their tunable plasmon resonances and the ability to accumulate at targeted cancer sites, has proven them to be very effective agents for absorption-based photothermal therapy and scattering-based imaging applications [1][2][3][4][5][6][7][8]. Amongst the commonly used gold nanoparticles, silica-core gold nanoshells exhibit larger photothermal efficiency as compared to gold nanorods of equal number densities [1], whereas hollow gold nanoshells (HGNs) absorb light more strongly than the silica-core gold nanoshells do [9,10]. Furthermore, HGNs are comparatively less harmful to healthy tissues neighboring a cancer site [9], which makes them promising for both photothermal and imaging applications. Although different tissue types and excitation wavelengths were analyzed before to determine the optimal dimensions of a nanoshell [10,11], no optimization has ever been performed for a nanoshell ensemble with a real size distribution. In this Letter, we fill this gap by conducting the first theoretical study of the distribution parameters of the lognormally dispersed HGNs exhibiting peak absorption or scattering efficiency. In particular, we comprehensively analyze the dependence of these parameters on the excitation wavelength and optical properties of the tissue, giving clear design guidelines.
Methods
Despite significant progress in nanofabrication technology over the past decade, we are still unable to synthesize large ensembles of almost identical nanoparticles. The nanoparticle ensembles that are currently used for biomedical applications exhibit broad size distributions, which are typically lognormal in shape [12][13][14][15]. In an ensemble of single-core nanoshells, both the core radius R and the shell thickness H are distributed lognormally [15], with their occurrence probabilities given by the function [16] f(x; μ_X, σ_X) = exp[−(ln x − μ_X)²/(2σ_X²)] / (x σ_X √(2π)), where x = r or h is the radius or thickness of the nanoshell, μ_X = ln(Med[X]) and σ_X are the mean and standard deviation of ln X, respectively, and Med[X] is the geometric mean of the random variable X = R or H. The efficiencies of absorption and scattering by a nanoparticle ensemble are the key characteristics determining its performance in biomedical applications. In estimating these characteristics, it is common to use a number of simplifying assumptions. First of all, owing to a relatively large interparticle distance inside human tissue (typically constituting several micrometers [17]), one may safely neglect the nanoparticle interaction and the effects of multiple scattering at them [18,19]. Since plasmonic nanoparticles can be excited resonantly with low-intensity optical sources, it is also reasonable to ignore the nonlinear effects and dipole-dipole interaction between biomolecules [20]. The absorption of the excitation light inside human tissue occurs on a typical length scale of several centimeters, within the near-infrared transparency window of 650 to 1000 nm [21]. However, the attenuation of light does not affect the efficiencies of scattering and absorption by the ensemble, and is therefore neglected in the following analysis. These simplifications allow us to relate the average absorption and scattering efficiencies (S_abs and S_sca) of the nanoshell ensemble embedded in a tissue to the corresponding efficiencies (Q_abs and Q_sca) of individual plasmonic nanoshells, by averaging Q_α(r, h) (α = abs or sca) over the joint distribution f(r; μ_R, σ_R) f(h; μ_H, σ_H), where Q_α(r, h) is expressed through Mie coefficients for a coated sphere [9,22,23], which are the functions of the excitation wavelength, refractive index of the tissue, and permittivities of the nanoshell constituents. It is seen that the average absorption and scattering efficiencies of a nanoshell ensemble, excited at a fixed wavelength, are functions of the four parameters: Med[R], Med[H], σ_R, and σ_H. This poses the problem of finding, and studying the properties of, the optimal distribution parameters for which the nanoshell ensemble exhibits the maximum absorption or scattering efficiency.
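To make the averaging concrete, the following minimal Python sketch (an illustration, not the authors' code) averages a placeholder efficiency over the joint lognormal size distribution. The functional form of q_abs and all parameter values are hypothetical stand-ins for the Mie-theory efficiencies, and a simple number-weighted average is assumed.

import numpy as np

def lognormal_pdf(x, med, sigma):
    # f(x; mu_X, sigma_X) with mu_X = ln(Med[X]), as in the text
    mu = np.log(med)
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

def average_efficiency(q, med_r, med_h, sigma_r, sigma_h, n=400):
    # Number-weighted ensemble average <Q> over the joint lognormal
    # distribution of core radius r and shell thickness h (trapezoidal rule)
    r = np.linspace(1e-2, 10 * med_r, n)  # nm
    h = np.linspace(1e-2, 10 * med_h, n)  # nm
    fr = lognormal_pdf(r, med_r, sigma_r)
    fh = lognormal_pdf(h, med_h, sigma_h)
    qq = q(r[:, None], h[None, :])        # Q evaluated on the (r, h) grid
    inner = np.trapz(qq * fh[None, :], h, axis=1)
    return np.trapz(inner * fr, r)

# Hypothetical stand-in for the Mie-theory absorption efficiency: a broad
# resonance peaked near r = 30 nm, h = 5 nm.
q_abs = lambda r, h: 4.0 * np.exp(-((r - 30.0) / 15.0) ** 2 - ((h - 5.0) / 3.0) ** 2)

print(average_efficiency(q_abs, med_r=30.0, med_h=5.0, sigma_r=0.3, sigma_h=0.3))

Replacing q_abs with an evaluation of the Mie coefficients for a coated sphere would yield the quantities S_abs and S_sca optimized in the text.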
Results and discussions
We focus on HGNs with gold permittivity described by the size-dependent model from Ref. [9], and begin by evaluating their average absorption and scattering efficiencies inside a tissue of refractive index n = 1.55. The optimal geometric means of HGNs' dimensions crucially depend on the shape of the size distribution determined by the parameter σ. Figure 2 shows how the optimal distributions of R and H are transformed when σ is increased from 0.1 to 1. As expected, larger σ results in broader distributions that maximize the absorption and scattering efficiencies of the nanoshell ensemble. It also leads to the right skewness of the distributions, thus increasing the fabrication tolerance. At the same time, the increase in σ from 0.1 to 1 reduces the peak values of S_abs and S_sca by about a factor of 3.5 each. This indicates the need for a compromise between the performance of an HGN ensemble and the fabrication tolerance. Regardless of σ, the ensemble exhibiting the maximum absorption efficiency comprises HGNs with core radii smaller than those required for maximizing the scattering efficiency. A similar trend exists for the optimal distribution f(h; μ_H, σ), with absorbing nanoshells being much thinner than the scattering ones.
The parameters of the optimal lognormal distribution also vary with the type of human tissue. Figures 3(d)-3(f) show such variation for the entire span of refractive indices of human cancerous tissue [9,19], λ = 850 nm, and three typical shapes of the distribution. It is seen that the peak efficiencies of absorption and scattering by an HGN ensemble grow with n regardless of the shape parameter σ. The corresponding geometric mean of the core radii reduces with n and may be approximated as
Conclusions
In summary, we have studied the optimal distributions of lognormally dispersed hollow gold nanoshells for different excitation wavelengths and human tissues. Shorter-wavelength, near-infrared sources were found to be most effective for in vivo biomedical applications. The analytical expressions obtained may be used to estimate the optimal distribution of the nanoshells providing the maximum efficiency of their absorption or scattering of near-infrared radiation inside human tissue. | 2016-05-12T22:15:10.714Z | 2013-03-28T00:00:00.000 | {
"year": 2013,
"sha1": "3b440689c38119757ae38b8208a5335781f2bb24",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-8-142",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3cc57419767a426cba184de4ec21a9431b2731a7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
263047366 | pes2o/s2orc | v3-fos-license | Graphlet eigencentralities capture novel central roles of genes in pathways
Motivation Graphlet adjacency extends regular node adjacency in a network by considering a pair of nodes being adjacent if they participate in a given graphlet (small, connected, induced subgraph). Graphlet adjacencies captured by different graphlets were shown to contain complementary biological functions and cancer mechanisms. To further investigate the relationships between the topological features of genes participating in molecular networks, as captured by graphlet adjacencies, and their biological functions, we build more descriptive pathway-based approaches. Contribution We introduce a new graphlet-based definition of eigencentrality of genes in a pathway, graphlet eigencentrality, to identify pathways and cancer mechanisms described by a given graphlet adjacency. We compute the centrality of genes in a pathway either from the local perspective of the pathway or from the global perspective of the entire network. Results We show that in molecular networks of human and yeast, different local graphlet adjacencies describe different pathways (i.e., all the genes that are functionally important in a pathway are also considered topologically important by their local graphlet eigencentrality). Pathways described by the same graphlet adjacency are functionally similar, suggesting that each graphlet adjacency captures different pathway topology and function relationships. Additionally, we show that different graphlet eigencentralities describe different cancer driver genes that play central roles in pathways, or in the crosstalk between them (i.e. we can predict cancer driver genes participating in a pathway by their local or global graphlet eigencentrality). This result suggests that by considering different graphlet eigencentralities, we can capture different functional roles of genes in and between pathways.
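As a minimal illustration of the underlying idea (not the authors' implementation), the Python sketch below computes an eigencentrality by power iteration, first on an ordinary adjacency matrix and then on a triangle-based "graphlet adjacency" in which entry (u, v) counts the triangles containing the edge u-v. The toy network and the choice of the triangle as the graphlet are assumptions made purely for demonstration.

import numpy as np

def eigencentrality(adj, iters=200, tol=1e-10):
    # Leading-eigenvector (Perron) centrality by power iteration
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        y = adj @ x
        y = y / np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return x

# Toy undirected network (5 nodes); node 4 hangs off the triangle-rich core.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Triangle-based graphlet adjacency: entry (u, v) counts the triangles
# containing the edge u-v, a simplified stand-in for a graphlet adjacency.
A_tri = (A @ A) * A

print("edge-based eigencentrality:  ", np.round(eigencentrality(A), 3))
print("triangle graphlet centrality:", np.round(eigencentrality(A_tri), 3))

In this toy example the pendant node receives zero triangle-based centrality even though it has nonzero edge-based centrality, illustrating how different graphlet adjacencies can assign different central roles to the same nodes.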
Response:
As suggested by the reviewer, we add [1] as a related eigencentrality-based measure. In particular, we add the following paragraph (lines 140-143 of page 4 in the revised manuscript): "Many variations of eigencentrality exist. For instance, the Katz centrality generalises the eigencentrality to directed networks [5]. The contribution centrality extends the eigencentrality by amplifying a node's centrality if it serves as a hub node connecting densely connected parts of the network [1].".
As suggested by the reviewer, we add [3] as a recommended review paper on centrality measures and [4] as a reference for eigencentrality. We did not add [2] as it is written in Chinese.
Reviewer #2:
Major Comments: Comment 1: Supplement (page 11): "different graphlet eigencentralities are positively correlated with each other and most existing centrality measures". I was wondering what results the simple degree metric (instead of the sophisticated graphlet eigencentralities) would yield, regarding the cancer central gene case studies. Have the authors tried to compare these? For example, what is the degree of HMGA2 in the underlying network regarding the FSAHF case study? In a nutshell: how do graphlet eigencentralities outperform simple topological metrics in the topic of uncovering key implicated players in perturbed pathways?
Response: In our FSAHF case study, we show how graphlet adjacency for graphlet G6 captures the central roles of cancer drivers TP53 and RB1. These would not have been uncovered based on either their simple degree centrality or their graphlet degree centrality for graphlet G6 (i.e. the number of times a node touches graphlet G6). We updated the text to highlight this (lines 497-502 of page 13 in the revised manuscript): "Lastly, it should be noted that within this pathway, nodes UBN1, ASF1A, TP53 touch graphlet G0 the most (i.e. have the highest degree) and nodes EP400, RB1 and H1-0 touch graphlet G6 the most (i.e. have the highest graphlet degree for graphlet G6). This means that the central roles of TP53 and RB1 through hub node HMGA2 could not have been captured either by using the simple degree centrality or by using their graphlet degree centrality for graphlet G6.".
Comment 2: Could the detection of key genes through graphlet eigencentralities work on other diseases? Or is there something special regarding cancer pathways (such as the crosstalk among them) that allows the global pathway centralities to predict these key genes only in this scenario? The authors could include a third case study with a non-cancer disease pathway to figure this out.
Response:
We agree with the reviewer that our work opens up questions with respect to diseases outside cancer. However, given the length of the current manuscript and the supplement, we add the reviewer's suggestion as proposed future work (lines 544-546 of page 14 in the revised manuscript): "Finally, our graphlet eigencentralities can be applied to study diseases outside cancer. For instance, it has been shown that rare-disease genes are characterised by a high degree and a high betweenness centrality in the PPI network [30]." | 2019-08-20T04:46:27.071Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "e572139a8c0452805f86bbdf26749b735c576997",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9e554225b1c36a5e6187ca91fffd1a507bee1b5d",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Geology"
],
"extfieldsofstudy": []
} |
248158002 | pes2o/s2orc | v3-fos-license | Case Report: Ileo-Ileal Intussusception Secondary to Inflammatory Fibroid Polyp: A Rare Cause of Intestinal Obstruction
Introduction Intussusception is a telescoping of a bowel segment into another and it can be a surgical emergency. Most adult intussusceptions arise from a lead point which can be benign or malignant. For this reason, intussusception in adults should be managed surgically. Here we describe a case of ileal inflammatory fibroid polyp (IFP), presenting with ileo-ileal intussusception and obstruction. Case report A 54-year-old Caucasian woman presented for acute abdominal pain. A radiography and a CT of the abdomen were performed, which showed signs of occlusion due to an ileo-ileal intussusception. An urgent laparoscopy was performed, the intussusception was extracorporeally reduced, the ileal segment involved was resected, and an ileo-ileal anastomosis was performed. The intussusception seemed to be caused by a 3-cm intra-mural lesion. Discussion Intussusception is a surgical concern. While most cases are idiopathic in children, 90% of adult intussusceptions are caused by underlying diseases. Therefore, intussusception in adults requires surgery. Radiology is necessary for the diagnosis: the CT scan helps localize the lesion and shows pathognomonic signs. This case report analyzes an intussusception caused by an inflammatory fibroid polyp. Accurate diagnosis of IFP is only possible with histopathological examination, helped by immunohistochemistry. The differential diagnosis is important because some lesions are malignant. Conclusion We reported a case of intussusception caused by an IFP. The diagnosis was made with a CT scan together with intraoperative findings and histopathological examination, which excluded potential differential diagnoses. The patient underwent an explorative laparoscopy, with an ileal resection and anastomosis. Due to the risk of malignancy, surgery is mandatory.
INTRODUCTION
Intussusception (or invagination) is a telescoping of a bowel loop into the lumen of an adjacent bowel segment. It occurs when a more proximal portion of the bowel (intussusceptum) invaginates into the more distal bowel (intussuscipiens). The pathogenesis seems to be related to an altered bowel peristalsis at the intraluminal lesion, which becomes a lead point for the intussusception (1). This situation prompts venous congestion and tissue edema, compromising peristalsis and bowel transit. If untreated, intussusception can lead to ischemia, necrosis, and perforation. Intussusception in children is more common than in adults (2) and it is typically due to a benign condition. Conversely, invagination in adults is an extremely rare condition (5% of all intussusceptions). Only 1% of all cases of adulthood bowel obstruction is due to intussusception (3). Moreover, intussusception in children tends to be transient and can be treated conservatively, whilst in adults it frequently causes an acute abdomen and requires surgery, because of possible underlying neoplasms, benign or malignant. Actually, in contrast to invaginations in children, 90% of adult intussusceptions are caused by a definite lead point (4). Most adult invaginations arise from the small bowel and this lead point is usually a benign condition, such as strictures, adhesions, foreign bodies, vascular anomalies, lymphoid hyperplasia, trauma, celiac disease, cytomegalovirus colitis, lymphoid hyperplasia secondary to lupus, Henoch-Schönlein purpura, Wiskott-Aldrich syndrome, appendiceal stump, Meckel's diverticulum, and benign tumors such as inflammatory fibroid polyps (IFP) (1). In some cases, the lead point can be a malignant lesion, such as adenocarcinomas, lymphomas (5), metastases, carcinoids, leiomyosarcomas, histiocytomas, and gastrointestinal stromal tumor (GIST).
Here we present a rare case of ileal inflammatory fibroid polyp, presenting with ileo-ileal intussusception, and bowel obstruction. Inflammatory fibroid polyp (IFP) is an uncommon gastrointestinal, benign, pseudotumorous lesion, mostly found in the gastric antrum. The small bowel is the second most common site of origin, where IFPs usually present as intussusception or obstruction (1).
CASE REPORT
A 54-year-old Caucasian woman presented to our emergency unit for acute pain localized in the left abdominal quadrants, since the day before, associated with nausea. The patient reported no fever, vomit, neither change in bowel habits. The patient had a non-contributory past medical history, apart from a suspected irritable bowel syndrome. She reported no allergies nor drugs taken at home regularly.
Physical examination revealed mild dehydration with minimal abdominal distension and tenderness in the left bottom quadrant. The vital signs including temperature, pulse, blood pressure, and respiratory rate were within normal limits. The blood tests revealed mild hypokalemia, which was intravenously corrected, and analgesics were administered, with mild relief.
Plain radiography of the abdomen revealed an isolated air-fluid level in the mesogastrium, with diffuse impacted stools. Consequently, an abdominal CT was performed, which showed a wall thickening and hyperemia of a proximal ileal segment, with a typical "target sign" consistent with ileo-ileal intussusception (Figures 1A,B). The coronal reconstruction showed a "sausage-like" mass; no obvious lead point was identified, nor was significant proximal bowel distention detected. Moreover, some mesenteric lymphadenopathies were observed.
Then, after 24 h, the abdominal CT was repeated because of persistence and worsening of symptoms: an increase in the ileum segment involved in the intussusception was documented and free pelvic fluid was observed. As a consequence, an urgent laparoscopy was performed, and a twenty-centimeter-long ileo-ileal invagination was detected (Figures 2, 3). No further lesions were documented by abdominal inspection. An umbilical minilaparotomy, ∼5 cm in length including the umbilical port site, was performed. The affected ileum was easily retracted and therefore the intussusception was reduced. Intraoperatively, the intussusception seemed to be caused by a tough intra-mural lesion with a diameter of about 3 cm. The affected ileum was resected together with a mesenteric lymph node, and a hand-sewn side-to-side isoperistaltic anastomosis was performed.
No major complications occurred during the hospital stay. The patient was discharged on the fifth postoperative day.
This case report has been described in accordance with SCARE criteria and PROCESS guidelines (6, 7).
HISTOLOGICAL FINDINGS
The seven-centimeter-long small bowel tract was examined. On opening the specimen, a 2.9-cm whitish submucosal polypoid lesion was identified, acting as a lead point of the intussusception. The lesion was coated with ulcerated mucosa and composed of myofibroblastic-like spindle cells admixed with inflammatory cells, including many eosinophils (Figures 4, 5). The immunohistochemical profile of the proliferation was diffusely positive for vimentin, focally positive for smooth muscle actin, and negative for CD34 and for CD117.
DISCUSSION
Intussusception is a surgical concern. It was first described in 1674 by Paul Barbette (8,9). This condition is described as the telescoping of one bowel segment with its mesenteric fold into an adjoining bowel tract, causing venous congestion and blood supply reduction. Intussusception can occur anywhere along the small and large bowel. Adult intussusception is rarely mentioned, in comparison with that in children. Moreover, while the majority of cases are idiopathic in children, 90% of adult intussusceptions are caused by an underlying disease (4). In children, conservative therapy can be adopted in most cases. It typically consists of non-operative reduction through hydrostatic and/or pneumatic enemas. In case of complications, such as bowel necrosis, perforation, and peritonitis, surgical treatment is indicated, even in children. Adult intussusceptions are rare, with an incidence of 1/1,000,000 cases per year worldwide (10). Almost 90% of adults with intussusception have an underlying lesion, most of them arise from the small bowel, and half of them are malignant (11). Benign conditions include adhesions, strictures, Meckel's diverticulum, inflammatory bowel disease, and benign tumors (lipomas, leiomyomas, and fibroid polyps). Malignant lesions include metastatic lesions, lymphomas, and adenocarcinomas. Therefore, unlike in children, reduction alone is not a therapeutic choice, because of the risk of underlying malignant lesions. Intussusception in adults usually presents with abdominal pain and signs of bowel obstruction, but fever, bowel perforation, bleeding, and a palpable abdominal mass can also be frequent.
With regard to diagnosis, blood tests can reveal a nonspecific increase in the inflammation indexes. Radiology helps in the differential diagnosis. Ultrasonography is cheap and useful, especially if a palpable mass is found, but in most cases, intussusception is better diagnosed with computed tomography (CT). CT scan shows a peculiar sign, described either as a "bullseye," "target," or "sausage-shaped" lesion. This pathognomonic concentric double-ring sign can be identified on coronal and axial views. CT also gives important information about the lesion's location, its nature, its relationship to surrounding organs, and the lymph-node involvement.
This case report analyzes a rare circumstance of intussusception caused by a fibroid polyp. Fewer than 100 cases of intussusception secondary to ileal polyp are described in the literature (12). An inflammatory fibroid polyp (IFP), also called Vanek's tumor, is a benign submucosal tumor frequently localized in the stomach, especially in the antrum, but it can occur throughout the gastrointestinal tract (13). An IFP was described for the first time in 1949 by Vanek and can also be called eosinophilic granuloma (14). Cases of IFP are reported between 2 and 90 years of age, although they usually present during the sixth or the seventh decade. Occasionally an IFP can become a lead point for intussusception. Based on a brief literature review, the median age of the patients suffering from intussusceptions caused by an IFP is about 55. This is probably because intussusception comes with serious signs and symptoms; otherwise, a benign tumor would remain undiagnosed. An IFP is commonly associated with a mutation of the 12th exon of the PDGFRA gene (15).
IFPs are usually asymptomatic and can be identified during endoscopic procedures and laparoscopies or laparotomies. When symptomatic, the clinical manifestation depends on the location and size of the tumor. Abdominal pain is the most common symptom in patients with lesions in the stomach. Other symptoms (diarrhea, vomiting, tenesmus, alteration in bowel habit) are less frequent (16). The preoperative diagnosis of intussusception is controversial. Abdominal X-ray examination is usually the first diagnostic tool used, because of the obstructive symptoms. Once diagnosis of intestinal obstruction is made, the primary imaging modality of choice is ultrasound imaging, with a sensitivity of 98% and specificity of 88%. However, because of bowel gas interposition and the risk of malignancy, CT scan with contrast is often used (16). Moreover, because of the lack of distinctive radiological features of fibroid polyps, accurate diagnosis of IFP on CT scan is difficult and is only possible with histopathological examination of the anatomical specimen. Grossly, IFP can be polypoidal (17) or sessile, varying in size from 0.2 to 12 centimeters, with an average reported size of 4 cm. We reported a 2.9-centimeter-wide lesion acting as a lead point of the intussusception. They arise from the submucosa and project into the bowel lumen. The mucosal surface is usually ulcerated and pale.
Histologically, IFP is composed of an admixture of blood vessels, edematous connective tissue, and a marked cellular infiltrate which may contain fibroblasts and eosinophils. Some authors reported a sparsely cellular proliferation of spindle cells with fibromyxoid background and copious eosinophils (17), other authors reported the presence of plasma cells and some lymphocytes, in addition to eosinophils (18). The histopathologic sample described in this case report was characterized by an ulcerated mucosa, overlying a submucosal tissue rich in myofibroblastic-like spindle cells.
The IFP immunohistochemical profile can vary. Vimentin and CD34 can be positive, while SMA, ALK1, CD117 (c-kit), S100, beta-catenin, and desmin are usually negative (18). Negativity for CD117, CD34, and smooth muscle actin was also reported by Gara et al. (17). On the other hand, CD34 was positive in a case report by Feldis et al. (19) and Forasté-Enrìquez et al. (20), while in our case CD34 was negative, as well as CD117. Additionally, our patient presented positivity for vimentin and focal positivity for smooth muscle actin.
The macroscopic differential diagnosis can be suggested by some peculiarities and epidemiological information (19). For example, adenomatous polyps are more common, but usually smaller, while lipomas can be distinguished on a radiological basis, because of the presence of fat. Lymphomas are more common and usually appear as voluminous endoluminal tumors. A gastrointestinal stromal tumor (GIST) has a similar appearance to IFP, but it usually shows irregular margins, heterogeneous features, and partial extra-luminal development.
Inflammatory myofibroblastic tumor (IMT) and GIST are the main microscopic differential diagnoses (20). IFP shows more eosinophils, fibrosis, and fewer lymphocytes than IMT; in IFP, these cells arise from the submucosal layer, without invasion of the serosa and the muscular layer, which are usually invaded by the IMT cells. The immunohistochemistry as well can help differentiate: ALK1, smooth muscle actin, and occasionally CD117 are usually expressed by the IMT, but not by IFP (which occasionally expresses smooth muscle actin and CD34). It is important to distinguish between IFP and IMT because IFP does not recur, while IMT tends to recur (21). Immunohistochemistry can differentiate between IFP and GIST: both can occasionally express CD34, while only GIST is positive for CD117 (19).
The appropriate management of adult intussusception is controversial, with the debate focusing mostly on the issue of primary resection vs. reduction followed by more limited resection. Reduction by surgery may theoretically allow more limited resections; however, the risk of seeding or venous dissemination during manipulation of a malignant lesion should be considered (1). In any case, intussusception in adults requires surgery, because of the low accuracy of imaging and the risk of malignancy.
CONCLUSION
Adult intussusception is a rare entity, and it is usually caused by a lead point. We reported a case of intussusception caused by an inflammatory fibroid polyp in an adult woman. The diagnosis was made with a preoperative CT scan together with intraoperative findings and histopathological examination, which excluded potential differential diagnoses. The patient underwent an explorative laparoscopy, with an extracorporeal resection and ileo-ileal anastomosis. Due to the risk of malignancy, surgery is the best therapeutic option for intussusception in adults.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
CG: conception and design of publication, collection of case report data, and writing the manuscript. FC: linguistic check and review the manuscript for important intellectual content. GG: bibliography check and review the manuscript for important intellectual content. PZ: providing pathology images and writing the manuscript. BP: providing and commenting radiological imaging. PD: supervising the case report. All authors contributed to the article and approved the submitted version. | 2022-04-15T13:18:48.080Z | 2022-04-15T00:00:00.000 | {
"year": 2022,
"sha1": "a78ccacfd65967c3d3fc8a2bdbeaed77591cfd1e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a78ccacfd65967c3d3fc8a2bdbeaed77591cfd1e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212717707 | pes2o/s2orc | v3-fos-license | Clogging in bidirectional suspension flow
The sudden arrest of motion due to confinement is commonly observed via the clogging transition in the flow of particles through a constriction. We present results of a simple experiment to elucidate a similar transition in bidirectional flow, in which two species of macroscopic particles with different densities are confined in a tube and suspended in a fluid of intermediate density. Counterflowing grains serve as mobile obstacles and clogging occurs without arch formation due to confinement. We measure the clogging or jamming probability $J$ as a function of the number of particles of each species $N$ in a fixed channel length for channel widths $D = 3$-$7d$, where $d$ is the particle diameter. $J(N)$ exhibits a sigmoidal dependence and collapses on a single curve $J(N/D^3)$, indicating the transition occurs at a critical density. Data are well fit by a probabilistic model motivated by prior constriction flows which assumes grains enter the clogging region with a fixed probability to produce a clogging state. A quasi-two-dimensional experiment provides insight into the interface shape, and we identify a Rayleigh-Taylor instability at large channel widths.
I. ARTICLE
When particle flow is confined by local geometry, a transition to clogging can be observed at sufficiently high particle density. In the prototypical example of gravitydriven grains falling through an opening in a hopper, an arch might spontaneously form across the orifice, supporting the weight of the grains above and leading to a blockage of flow. This transition is generic across a wide range of systems, including granular materials, colloids, and pedestrian traffic, and can be characterized by a clogging phase diagram in which increased particle density, increased compatible loading (such as confining pressure stabilizing an arch of grains), or decreased incompatible load (such as fluctuations induced by ambient vibration) promote clogging [1]. Such systems exhibit common statistical features including power-law distributions of time between consecutive particles and exponential distributions for the size of particle bursts between clogging events [1][2][3][4], which suggests a constant probability of clogging during flow [2,5].
As particles are driven through an orifice much larger than the size of the particle, they typically flow at a constant rate [6]. As the opening size D decreases, the probability to clog increases in a sigmoidal curve rising from 0 to 1 over an opening of size D ≈ 2−5 d, where d is the particle diameter [2,7,8]. As D decreases, the lifetime of the flowing state before particle arrest decreases. When a finite number of particles N is used, the jamming probability increases with N for fixed D [2,8] and the transition becomes sharper [8]. This clogging probability has historically been called the jamming probability J, though the system-spanning arrest in jamming is an increasingly well characterized transition [9,10] distinct from the arrest due to local geometry in clogging [11]. Similar behavior is observed in suspensions, in which a wider range of particle concentrations and velocities are accessible and hydrodynamic effects may be relevant [12,13].
In seminal work, To et al. describe two-dimensional hopper flow of monodisperse disks as a probabilistic process in which grains sample different configurations until, by chance, an arrangement of particles corresponding to a stable arch spans the orifice [7]. They find J(D) agrees with a model based on a restricted random walker. Subsequent work by a variety of authors similarly model clogging consistent with the assumption that configurations of particles are sampled statistically independently until a stable configuration is reached [2,4,8,[12][13][14]. An exponential distribution of flow durations implies clogging is a Poisson process where there is some large probability to remain unclogged at each time step. In hopper flows, it's assumed a new configuration occurs after a grain falls approximately a distance equal to its diameter [14].
These probabilistic models typically assume that an individual grain will fall through the orifice with a large probability p_1 and the probability for a clog to form at the n-th configuration is (1 − p_1)p_1^n [8]. An initial transient of n_T grains may occur before a steady-state concentration of uncorrelated states is reached [4,12,13]. However, the jamming probability per particle reaches a constant value and this transient can often be ignored [12,14] or estimated using data [13]. For openings larger than 3d, the probability that an individual grain will pass the orifice without leading to a clog is close to one [3], related to the mean avalanche size [8], and only weakly dependent on velocity [12] and driving force [5]. The fraction of possible flowing grain configurations that precede a clog can be determined based on the average mass discharged before clogging [14]. If there are a fixed number of grains N in the experiment, the probability to clog during the run is then the cumulative probability for n < N.
Dependence on dimension is less clear. A simple, probabilistic model based on arch formation predicts J_N(D) = 1 − exp[−N A e^(−B(η_0 D)^2)] for 2d systems [7,8], where η_0 indicates how the number of grains in an arch scales with D. Janda et al. find that using D^3 in the exponential for 3d, as one might guess, is not satisfactory [8], though Thomas and Durian take the number of grains in the clogging region to be (D/d)^α with α ≈ 3, suggesting the volume of grains is the relevant factor [14]. Avalanche size has been found to depend on D^2 in 2d simulations, which is the scale of the number of particles in the vicinity of the constriction [5], and the opening area rather than the volume in 3d suspension clogging [13].
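For intuition, the 2d arch-formation expression can be evaluated directly. In this short Python sketch the parameters A, B, and η_0 are hypothetical values chosen only to reproduce the qualitative sigmoidal rise of J from 0 to 1 as D shrinks from roughly 5d to 2d.

import numpy as np

def J_N(D, N, A=1.0, B=0.6, eta0=1.0):
    # 2d arch-formation form J_N(D) = 1 - exp[-N A exp(-B (eta0 D)^2)];
    # A, B, and eta0 here are illustrative values, not fitted parameters
    return 1.0 - np.exp(-N * A * np.exp(-B * (eta0 * D) ** 2))

for D in (2.0, 3.0, 4.0, 5.0):  # opening size in particle diameters d
    print(f"D = {D}d: J = {J_N(D, N=500):.3f}")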
While much has been learned about clogging through orifices, relatively little is known about such behavior in bidirectional flow, in which two species attempt to pass each other as they are driven in opposite directions through a channel. Despite its simplicity, the dynamics are intriguing due to nonlinear feedback as each particle species serves as mobile obstacles for the other. In the absence of a constriction, clogging may still occur due to confinement, but stable configurations for bidirectional flow can not generically be arches, the basis of hopper clogs.
A substantial portion of the work on bidirectional flow is via simulations, for instance models of pedestrian traffic [15][16][17], typically as cellular automata or biased random walkers which may include a variety of social interactions between walkers, such as following or avoidance behavior. This work has characterized the phase diagram [17] and the so-called fundamental diagram characterizing flow rate versus density [18,19]. Brownian dynamics simulations have also been performed to model damped colloidal particles [20,21] and catanionic lipid layers [22], in which two oppositely charged species are driven in an electric field, as well as bidirectional flows of deformable short chains [23]. There has been limited experimental work on counterflow in both pedestrian [18,[24][25][26] and colloidal [27,28] systems.
A jamming transition is observed in bidirectional flow simulations, occurring at a critical density independent of system size, for instance for channels of different widths [16]. The transition density decreases with increasing drift speed [15] and increases with the inclusion of social forces [17] but is relatively insensitive to drift speed when only avoidance is included [16]. The jamming probability increases with increasing density and monotonically increases with the channel length-to-width ratio [17]. If the interface is not flat, the lateral imbalance of particles can lead to particles pushing through to break the clog [29]. This is reminiscent of a Rayleigh-Taylor instability, observed in systems with a density inversion, with denser particles above a lower-density layer [30].
In this article, we present experiments to quantify the clogging probability in bidirectional flow, in which each particle species serves as a mobile obstacle for the other. In contrast to hopper flow, (i) clogging occurs in the absence of a constriction, (ii) the obstacles are themselves transient and continuously evolve until arrest, and (iii) the stable geometry of a clog, by necessity, can not be an arch from the perspective of both species.
The apparatus consists of macroscopic nylon and high-density polyethylene (HDPE) spherical beads of diameter d = 6.4 mm (0.25") in a circular tube of diameter D = 3−7 d and length 1 m (≈ 160 d) filled with a water/glycerol mixture. The nylon and HDPE spheres are monodisperse to within 0.4% and 0.8%, respectively, and the nylon spheres are dyed to visually distinguish the two types of particles. We use a glycerol concentration of 14% by volume to produce a fluid density of ρ = 1.04 g/cm^3, intermediate between the densities of HDPE and nylon of ρ = 0.94 and 1.14 g/cm^3, respectively. The fluid is Newtonian with a viscosity approximately 1.8 times that of water.
A particular number N of each species is enclosed in the tube which is then mounted to a rotating armature. With the light/heavy particles initially at the top/bottom of the tube, the armature is quickly flipped to the opposite orientation. The tube is held vertical to within 0.5 degrees, as misalignment leads to the two species preferentially segregating laterally, reducing the clogging probability significantly.
During an experimental run, grains on each end disperse into a cloud of particles. Individual grains rapidly reach terminal velocity v ≈ 8 cm/s. Grains interact through effectively inelastic collisions before sliding or rolling past each other. Upon reaching the interaction region, collisions can lead to substantial slowing of opposing particles. A clog forms if particles reach a mechanically stable arrangement. Visually, it is unclear until the last moment whether a clog will form or whether particles will cascade through each other. Fluctuations due to fluid effects are evident as spheres travel through the tube; despite the existence of a nonlinear and long-range interaction though, what follows is consistent, to lowest order, with a picture based on geometrical confinement and hard-sphere interactions. This may be due to the substantial slowing that occurs at high density when clogging becomes likely. Friction is small, but not negligible, as slight asymmetries in the number of each particle may be stabilized by friction with the side wall.
We measure the clogging or jamming probability J(N) as the fraction of runs leading to a static clog, typically for 50-100 attempts per data point, as a function of the number of beads N of each type for tube diameter D, as shown by the data points in Fig. 2. We observe a sigmoidal probability distribution J(N) with probability varying from 0 to 100% over a relatively narrow range of particle number, reminiscent of jamming probability versus opening size for hopper flow [2,7,8]. The probability to clog increases rapidly with the number of particles as larger numbers of grains encounter larger numbers of obstacles impeding their flow. For larger tube diameters, the curve shifts to the right towards larger particle number and the clogging transition broadens.
To collapse data onto a single curve, we rescale the horizontal axis by D^3 in Fig. 3, indicating that to lowest order the transition depends on a particle density given by N/D^3 and that the clogging transition occurs when a critical density is reached. Though the present experiment does not allow measurement of local packing fraction, grains spread out to extend a length of around 60 d in the tube as they pass through each other, corresponding to a packing fraction of order φ ≈ 0.1. In Fig. 3 (inset), we plot the scaling factor S required to achieve the best data collapse for J(N/S); the measured scaling exponent of 2.8 ≈ 3 is consistent with the cubic dependence D^3 we use in Fig. 3.
As the number of particles increases, the probability to form a clog at multiple locations within the tube also increases. It is known that the passing time, the time for all participants to pass a specific location, increases linearly with group size [26], leading to the possibility of an extended interaction region. The probability to form two clogs increases in a similar sigmoidal curve, beginning to rise when the single clog probability is approximately 50%. We similarly observe the onset of three distinct clogs within the tube as the probability for two clogs becomes appreciable.
The fact that clogging displays a sigmoidal probability as in orifice flow suggests that a probabilistic explanation might similarly be employed. We propose a simple model to ascertain whether a probabilistic approach as used previously for hopper flow may also be appropriate for bidirectional flow. We define the approximate number of grains that fit in the cylindrical clog region, given a packing fraction φ_0 of approaching grains, as N_c = φ_0 π(D/2)^2 D / v_d, where v_d = πd^3/6 is the volume of a single grain. Empirically, a value φ_0 ≈ 0.1 produces the best fit, consistent with the estimate above based on experimental images. There is a new configuration after some characteristic time τ, which we number with integer n. At each configuration, we assume a probability p_0 that the configuration of incoming grains will not lead to a clog. The probability that the grains might contribute to a clog at configuration n after the prior n−1 non-clogging configurations is then p_0^(n−1)(1 − p_0). We estimate the number of grains entering between configurations separated by this characteristic τ to be some fraction of the number of grains passing through the cross-sectional area of the tube, N_0 = a_0(D/d)^2, such that N = N_0 n. We again find that a_0 ≈ 0.1 leads to the best fit for this system, comparable to the packing fraction φ_0. The total or cumulative probability that the grains will reach a clogged state by configuration n is p_t(n) = Σ_(k=1..n) p_0^(k−1)(1 − p_0) = 1 − p_0^n; fits of this form are shown as solid lines in Figure 2 with p_0 = 0.96, consistent with similar models of hopper flows. We note there are a number of simplifications assumed in this model, including the assumption of a constant clogging probability independent of tube diameter and neglecting the dependence on local particle density and fluid dynamics. However, there is a single set of parameters for all fits in Fig. 2 which captures the general behavior, suggesting that a probabilistic, geometric model may also be appropriate for bidirectional flow.
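A minimal numerical sketch of one plausible reading of this model is given below in Python; p_0 = 0.96 and a_0 = 0.1 are taken from the text, while the grid of N and D values is illustrative and the transient filling of the clog region (N_c) is neglected.

import numpy as np

def clog_probability(N, D, d=1.0, p0=0.96, a0=0.1):
    # N0 = a0*(D/d)**2 grains enter the clog region per configuration; each
    # configuration independently avoids clogging with probability p0, so the
    # cumulative clogging probability after n = N/N0 configurations is
    # 1 - p0**n. The initial transient (N_c) is neglected in this sketch.
    n = np.asarray(N, dtype=float) / (a0 * (D / d) ** 2)
    return 1.0 - p0 ** n

for D in (3, 5, 7):  # tube diameter in particle diameters d
    N = np.array([20, 50, 100, 200])
    print(f"D = {D}d:", np.round(clog_probability(N, D), 2))

With these parameters, J rises from near 0 to near 1 over a narrow range of N, and larger D shifts the curve toward larger N, as in Fig. 2.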
We gain further insight by performing a second set of experiments in a quasi-two-dimensional channel in which 0.125"-thick nylon and HDPE disks of diameter 5/16" (7.9 mm) are contained between two plexiglass sheets separated by a distance slightly larger than 0.125" to form a channel of length 70 d. The opening width D can easily be increased beyond what is feasible in the 3d experiment to explore wider channels, and we track all particles to study dynamics and characterize the jamming interface. Detailed results will be the focus of a future study, but we gain insight into the shape of the interface and key differences compared to hopper flow.
In Fig. 4, we show particle positions extracted from experimental clogs in the 2d geometry. We observe that clogs can form in much wider channels than observed in hopper flow (here around 16d). We note that the interface of a clog is frequently fairly flat, as may be expected by the symmetry of the experiment and, as expected, is not comprised of arch-like structures. However, roughness at the grain scale is apparent and sometimes striking, including inclusions and plumes, seen in Figures 1 and 4. Fig. 4(d) hints at a failure mechanism we frequently observe in wider channels, in which a large plume of falling particles on the left and rising particles on the right may push through in a Rayleigh-Taylor-like instability as the interface rotates and breaks. This likely leads to deviations from a simple probabilistic model and may represent the development of lane formation [20,22], in which at sufficient driving force opposing traffic forms lanes to minimize collisions, previously observed in both colloidal and pedestrian experiments [18,[26][27][28].
The obstruction typically contains comparable numbers of each species, as might be expected by the symmetry of the experimental flow. Fig. 5 shows a plot of the number of grains of each species in a clog for 50 runs. A typical clog in an experiment with N = 160 grains and D ≈ 16d contains 88 ± 25 grains on each side. Imbalances can be stabilized by wall friction such that the distribution of the difference in nylon versus HDPE particles is comparably broad (σ ≈ 30).
In summary, bidirectional flow is a remarkably simple geometry that exhibits clogging behavior due to confinement and in the absence of a constriction. Yet simple geometrical origins well-studied in hopper flow, namely arch formation, are not possible and the two species exhibit strongly nonlinear interactions as mobile obstacles for each other, mediated by fluid flows. This raises the question whether the clogging statistics and mechanism might be similar to those observed for particles flowing through a constriction. We measure a sigmoidal jamming probability J(N) as a function of the number of each type of grain N. Rescaling of data as J(N/D^3) indicates that, to lowest order, the transition depends on reaching a critical density within the channel, independent of channel width for the values studied. A simple probabilistic model captures the general behavior of the system, suggesting that the relevant mechanism is likely based on random sampling of configurations until a stable assembly is reached, as previously determined for hopper flow. Preliminary experiments in two-dimensional channels indicate, though, that for wider channels a Rayleigh-Taylor instability develops due to lateral variations in particle number. This may represent the development of lane formation and likely limits the applicability of a purely geometric, probabilistic model. Further studies are needed to understand the detailed relationship between particle dynamics during clogging and interface morphology. | 2020-03-16T01:00:45.516Z | 2020-03-13T00:00:00.000 | {
"year": 2020,
"sha1": "ec414b1891be4f2881a9c3b9c2da69ad9b1e5e50",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ec414b1891be4f2881a9c3b9c2da69ad9b1e5e50",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
4561338 | pes2o/s2orc | v3-fos-license | Myasthenia Gravis Induced by Ipilimumab in a Patient With Metastatic Melanoma
In daily clinical practice, there is a growing number of patients receiving new biological agents used in the treatment of malignancies. Ipilimumab is a fully humanized monoclonal antibody approved for patients with melanoma. It acts as an immune checkpoint inhibitor, binding and blocking cytotoxic T-lymphocyte antigen-4 in order to increase the antitumor immune response. There are several reports of autoimmune responses after its use. A 74-year-old man developed a mild rash and pruritus a few hours after the second infusion of ipilimumab and 24 h after the third dose of ipilimumab, he presented with shortness of breath, proximal limb muscle weakness, and diplopia. Repetitive nerve stimulation was consistent with a postsynaptic neuromuscular junction disorder. He began therapy with corticosteroids and pyridostigmine and ipilimumab was discontinued. Following ipilimumab suspension, the patient started to improve gradually. Here, we describe a rare case of myasthenia gravis presumably related with ipilimumab’s therapy. A better knowledge of these agents is necessary, in order to identify characteristics or biomarkers that may be associated with the development of potentially serious autoimmune responses.
INTRODUCTION
Melanoma is the most aggressive skin tumor. In recent years, the emergence of new immune-based and molecularly targeted treatments has dramatically improved the outcome of metastatic melanoma patients.
Ipilimumab is a fully humanized monoclonal antibody approved since 2011 for patients with unresectable or metastatic melanoma. It acts by direct blockade of cytotoxic T-lymphocyte antigen-4 (CTLA-4), which is an inhibitor of T-cell activation, enhancing tumor-specific cellular immunity. This mechanism of action may lead to mild to moderate immune-related adverse effects (irAEs), which can involve the gastrointestinal tract, skin, and the endocrine and nervous systems (1,2). Other immune checkpoint inhibitors, such as nivolumab, have also been associated with irAEs. Management guidelines have been developed and strongly advise initiation of corticosteroids in any patient in whom an irAE related to ipilimumab is suspected.
Autoimmune responses against the nervous system have been described, such as myopathy, neuropathy, aseptic meningitis, and posterior reversible encephalopathy, but only three cases of myasthenia gravis (MG) have been reported in the medical literature (3,4).
BACKGROUND AND CASE PRESENTATION
Herein, we present a rare case of MG presumably related with ipilimumab's therapy. A 74-year-old man, with history of hypertension and atrial fibrillation was diagnosed with metastatic melanoma in 2011. He started therapy with ipilimumab at a dose of 3 mg/kg every 3 weeks for a maximum of four doses. A few hours after the second infusion of ipilimumab, he developed a mild rash and pruritus. Physical examination at that time was unremarkable except for a mild macular rash.
Approximately 24 h after the third dose of ipilimumab, he presented with shortness of breath requiring oxygen supply, proximal limb muscle weakness, and binocular diplopia. Physical examination showed signs of respiratory distress, fatigable weakness, limitation of adduction of the right eye, and binocular diplopia.
Laboratory studies, including thyroid-stimulating hormone and free thyroxine levels, were normal. Brain CT and CSF analyses did not show alterations. A Tensilon test was performed, showing a significant improvement of dyspnea and diplopia, measured qualitatively by the symptoms reported by the patient. Repetitive nerve stimulation was consistent with a postsynaptic neuromuscular junction disorder, with a 15% decrement at baseline for the facial nerve (Figure 1). Acetylcholine receptor binding antibodies and MuSK antibodies were negative. Chest CT with contrast was reviewed and was negative for thymoma. Ipilimumab was discontinued permanently and he began therapy with high-dose corticosteroids and pyridostigmine. Following ipilimumab suspension there was a marked improvement in the patient's symptoms, and no further therapy with immunoglobulin or plasmapheresis was required.
The patient continued to improve gradually and after 1 month his only complaint was diplopia; he had no complaints of dyspnea and his muscular strength improved markedly, with the capacity to walk autonomously. He is receiving a current corticosteroid dose of prednisone 40 mg per day and pyridostigmine 60 mg four times a day.
DISCUSSION
Ipilimumab achieves an antitumor response through blockade of CTLA-4, which normally downregulates the immune response.
Taking into consideration ipilimumab's mechanism of action, it may induce a dysregulation of a preexisting immune response to self-antigens, which was held in check by CTLA-4. Nivolumab, another immune checkpoint inhibitor, has a similar mechanism of action, blocking programmed cell death protein 1. This autoimmunity profile against normal self-tissues is most likely responsible for the irAEs that have been reported after the use of this type of immunotherapy. On the basis of these findings and given the absence of any possible etiology other than ipilimumab, we conclude that our patient had MG secondary to ipilimumab. The lack of symptomatology prior to the use of the biological agent and the temporal relationship between the onset of myasthenic symptoms and drug administration support our diagnostic hypothesis. Although most patients experience mild to moderate irAEs, a minority of patients may also experience severe, prolonged, and even irreversible adverse effects. Therefore, the absence of complete recovery does not exclude our diagnostic hypothesis. Taking into account the cellular mechanisms of action of CTLA-4, it is also expected not to find antibodies in MG induced by ipilimumab.
CONCLUDING REMARKS
This clinical case highlights the importance of the recognition of adverse events, particularly neurological manifestations related to the activation of the immune system by ipilimumab. When recognized early and managed promptly, most of these immune events are reversible; otherwise they can lead to severe or even life-threatening situations. Fatigable weakness, dyspnea, and vision disturbances are symptoms that may result from an autoimmune process directed against the nervous system, and MG should be considered as a complication of therapy with CTLA-4 inhibitors. Clinicians should be aware of this toxicity profile, so as to promptly recognize, identify, and manage symptoms.
Ethics Statement
Written informed consent was obtained from the participant for the publication of this case report.
Author Contributions
VM: study concept and design, acquisition of data, analysis and interpretation of data, and drafting the manuscript. SS, RG, and CC: analysis and interpretation of data and critical revision of manuscript for intellectual content. FP: study supervision and critical revision of manuscript for intellectual content. The authors declare that they have each made substantial contributions to the conception, acquisition, analysis, and interpretation of the manuscript. All authors have critically revised the manuscript for intellectual content and have given their approval for the final version to be published.
"year": 2018,
"sha1": "9dfc824e1b535b1011a55b67ef135bed22ab8615",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2018.00150/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9dfc824e1b535b1011a55b67ef135bed22ab8615",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
From Working in the Fields to Taking Control. Towards a Typology of Women's Decision-Making in Wheat in India
Women in India perform a range of roles in wheat-based agricultural systems. However, data remain sparse. Cultural norms which construct men as farmers serve to conceal women's contributions from researchers and rural advisory services. We use data from communities in four Indian states, selected to exemplify high and low gender gaps, to provide insights into how women are challenging norms which privilege male decision-making in order to participate in innovation processes. We hypothesized the transitioning of women from labourers in wheat to innovators and managers of wheat is likely to be far from straightforward. We further hypothesized that women are actively managing the processes unleashed by various sources of change. We use the concept of doxa—ideas and actions in a society that are taken for granted and are beyond questioning—as an analytic lens to help us understand the ways in which women deploy their agency to secure their goals. Our analysis allows us to develop 'A typology of women's strategies to strengthen their managerial decision-making power in wheat'.
Introduction
India is the second largest producer and consumer of wheat globally, and wheat is its second most important staple food crop, after rice (USDA 2017). New, increasingly climate-smart, technologies in wheat are widely promoted. Among others, they include improved wheat varieties, zero tillage, residue management, seed drillers, laser land levelling, and combine harvesters. These technologies eliminate weeks and even months of arduous labour and hold the promise of increased productivity and production (Aryal et al. 2018; Jat et al. 2014; Chauhan et al. 2012).
Despite the importance of wheat to livelihoods and food security, literature searches and stakeholder discussions conducted across the Indo-Gangetic Plains (IGP), covering India, Bangladesh, Nepal and Pakistan, indicate that data on the roles and responsibilities of women in wheat are sparse (Badstue et al. 2017; Jafry 2016). Drucza and Peveris (2018) review 73 papers about women in wheat-based systems in Pakistan. They find that the literature strongly reasserts "cultural norms and gender roles [in agriculture], rather than question their persistence or attempt to examine them. The binary thinking which simplistically identifies men with technology and farming, and women with tradition and home, accompanies much gender-blind work." Jafry (2016, 2013), in literature reviews of gender and wheat across the IGP, comes to much the same conclusion. Farnworth and Colverson (2015) coined the term 'conceptual lock-in' to describe the phenomenon of research centres, policymakers, the rural advisory services (RAS) and even farming families themselves constructing farmers as male, regardless of the reality of female farmers on the ground. In India, the word kisan (farmer) is strongly identified with men farmers (Aryal et al. 2014). Rao (2012) provides a case study from Uttar Pradesh in which she reflects on the strange phenomenon of women busy in the fields from the early hours, self-evidently watering and weeding wheat, yet neither men nor women acknowledging this as happening. In discussions, men speak of their work in the wheat fields and women of their work at home. "What one saw seemed almost the opposite of what one heard" (ibid., p. 1044). Landesa and Oxfam (2013) argue that the imagery of the farmer as male helps legitimatize male rights over physical and financial capital, including control over and access to land, as well as less tangible capitals such as decision-making power, and the right to participate in information networks.
Despite deep-seated and often unquestioned social norms that privilege men as agricultural decision-makers and breadwinners, women are indeed strongly involved in different aspects of wheat cultivation and decision-making across the IGP (Farnworth et al. 2018; de Neve 2017; Jafry 2013; Sinha et al. 2012; Rao 2011; Agarwal 1997). Furthermore, the broader literature on agrarian change suggests that a feminization of agriculture is well underway, including in India, the focus of this article. Gartaula et al. (2012) usefully distinguish between labour feminization of agriculture and managerial feminization of agriculture, with the latter term indicating that women have a strong decision-making role and the capacity to be change makers. We add that rather than seeing these two states as a dichotomy, the process can best be conceptualized as a kind of messy continuum, with women exerting various forms of decision-making power at various points along the continuum (Farnworth et al. 2018).
With respect to labour feminization processes, there is strong evidence that the proportion of women, relative to men, in field labour has increased over the past few decades (Ghosh and Ghosh 2014; Guérin 2013; Chayal and Dhaka 2010; Garikipati 2008; Verma 1992; Shiva 1991). Pattnaik et al. (2017) analyse four sets of occupational data drawn from the Indian Census (1981, 1991, 2001 and 2011). Table 1 indicates that agriculture as a source of employment has shrunk by 15.8% over the past forty years, with men leaving at higher rates than women. In 2011, half of all male workers were working in agriculture, compared to around two-thirds of women (ibid.).
Various explanations have been put forward as to why women dominate the agricultural workforce. The literature broadly suggests that men, though their opportunities are strongly affected by caste, experience higher agency and enjoy higher mobility than women, allowing them to respond more effectively than women to the pull factors of economic opportunity and the push factors of rural stagnation. This contributes to higher rates of male outmigration, both short and long-term, and a greater ability to seize more local off-farm employment opportunities (De Neve 2017; Saha et al. 2018; Da Corta and Venkateshwarlu 1999).
Table 1 Absolute percentages of men and women working in agriculture in India (1981-2011). Source: adapted from Pattnaik et al. (2017), based on Census data.

Can we assume that feminization processes mean that women are taking more managerial decisions in agriculture? Pattnaik et al. (2017) argue that there is no necessary correlation between men leaving agriculture and women left behind on the farm experiencing higher decision-making capacity. Indeed, rather than a feminization of agriculture, the correct term for this process, they contend, is the feminization of agrarian distress (ibid.). Sinha et al. (2012) consider that male outmigration conserves traditional kinship structures and patriarchal/seniority values by pushing wives back into the extended family, since Indian society rarely tolerates married women living alone. This can have the effect of reducing the agency of left-behind wives, particularly if decision-making is passed from the husband to other male relatives. Even so, their research suggests that some left-behind wives are increasing their decision-making power, though they do not consider this effect to be widespread (ibid.). In Bihar, an examination of intra-household decision-making shows that technologies are adopted when they reduce male labour in the field or reduce the cost of hired labour. Although women make the case for technologies which save women's labour, their intra-household bargaining power relative to men's is too weak to affect the final choice of technologies households adopt (Gulati 2016). In Karnataka, Goudappa et al. (2012) find that women are increasingly participating in agricultural decision-making, though the 'final decision' rests with men. It is with these complex findings in mind, and with respect to the paucity of data on women in wheat, that this paper explores the extent to which social norms in selected locations in India's wheat belt are shifting to accommodate recognition of women as wheat farmers and innovators and as farm managers. We have two analytic starting points.
• First, we hypothesize that due to strong cultural norms which privilege male agency, the transitioning of women from labourers in wheat to innovators and managers of wheat is likely to be far from straightforward.
• Second, we hypothesize that women are actively managing the processes unleashed by various sources of change.
To test our hypotheses, we develop a Conceptual Framework. The starting point of the framework is Bourdieu's (1977) concept of 'doxa'. Doxa, discussed in more detail below, can be summarized as ideas and actions in a society that are taken for granted and are beyond questioning (ibid.). In our case, the primary doxa which women may be challenging-or be unable to challenge-is that men take decisions in wheat farming. This is part of the construction of kisan-men are farmers, not women. We use our Conceptual Framework to explore the findings from fieldwork conducted in 2015 in six wheat-growing communities in four states (Bihar, Uttar Pradesh, Haryana and Punjab). The aim of the fieldwork was to examine the ways in-and the degree to which-women and men farmers are able to innovate, and to assess how locally valid gender norms affect their capacity to do so.
Our Conceptual Framework allows us to analyse the research findings in a novel way. Rather than report on gender differences between women and men in innovation capacity, for instance, or discuss the relative importance of wheat compared to other agricultural innovations to livelihoods, we use the Conceptual Framework to elicit the strategies women in 'low' and 'high' gender gap communities (terms which we define further below) use to take managerial control over decision-making in wheat.
This analytic process allows us to develop a typology of women's strategies which we call 'A typology of women's strategies to strengthen their managerial decision-making power in wheat' (Fig. 2). Potential strategies range from a zero point (no strategy, the woman is fully immersed in the doxa that men take all decisions) through to the emergence of a new doxa, which is that women take all decisions.
Constructing a typology is valuable because it allows us to lift what women say about their lives and how they set goals out of the realm of the merely anecdotal and to start systematizing it in a logical way. Cornwall (2016) highlights the importance of understanding the 'hidden pathways' that women travel on their journeys towards empowerment. We see our approach as allowing women's strategizing on hidden pathways to become visible. Importantly, our framework emerges from the research findings and does not precede them. In other words, we have not pre-set potential types of strategy and then attempted to assign women's responses accordingly. We feel that we simply do not have the prior knowledge to do this. Furthermore, this approach has the advantage of allowing us to foreground the emic insider understandings the women hold about their own realities (Rapley 2003). We learn from the women themselves about their strategies and how they relate them to the cultural norms shaping the societies within which they live. In other words, our approach acknowledges the "shifting, relational nature of gender, and the active role of men and women in constructing their identities, in 'doing gender'" (Farnworth 2007). It is only during analysis that we, the outsiders, start to cluster these strategies and to relate them to how women work with local norms. It is our hope that researchers trying to understand empowerment dynamics and how to think about women's agency will find our typology useful and experiment further with it. Other development partners, on the basis of our framework, should be able to more readily perceive that women develop a range of strategies that they carefully nuance and which they own. Such strategies can be sensitively supported, but they must be left in the hands of the women themselves to manage.
The remainder of the article proceeds as follows. We set out our conceptual framework. We then describe our research methods and present the six study communities. These study communities are categorized as 'low gender gap' and 'high gender gap', three in each. The findings are presented within each broad category according to type of respondent (middle-income, low-income, young women) and we add findings on the degree to which locally important institutions-Rural Advisory Services (RAS) and village heads (and shopkeepers)-recognize and respond to women's strategizing. Our typology is then presented and discussed. Figure 1 sets out our analytic approach in this article.
Figure 1 Analytic approach: conceptual framework; data analysis (six case studies); typology of women's strategies to assume managerial control.

The Conceptual Framework: Doxa as an Analytical Lens

Cohen et al. (2016) define capacity to innovate as the ability to deliberatively transition or transform a system from its current state to a new state. The freedom of the actor to respond, or to take charge, is assumed. Our analytic starting point is that 'freedom to act' is partly contingent on social norms that facilitate or limit freedoms. Social norms are constituted of (usually) unwritten codes and informal understandings that define what we expect of other people and what they expect of us. They include a wide constellation of behaviours. They define, for example, how to dress, obligations to family members, expected responses to property rights, and concepts of right and wrong (Young 2015). Whilst social norms are clearly contingent on location and historical time, some are 'so embedded in our ways of thinking and acting that we […] follow them unconsciously and without deliberation, hence we are sometimes unaware of how crucial they are to navigating social and economic relationships' (ibid., p. 3). This is close to the formulation of doxa proposed by Bourdieu (1977). Doxa denotes what is taken for granted in a particular society, the experience by which 'the natural and social world appears as self-evident'. A practice 'goes without saying because it comes without saying' (Bourdieu 1977, p. 167). Doxa can be understood as a kind of unquestioned truth which people live by.
Although doxa may appear self-explanatory and natural, they are far from neutral in their application and effects. Doxa may favour, serve and maintain the conditions for interests (economic, social, other) important to a specific group which may result in harms of one kind or another to another group. For example, doxa may favour the interests of a certain caste above other castes, one ethnicity above another, one sexual orientation above another, one gender above another, and so on. Some doxa draw their legitimacy from religious or other beliefs (Agarwal 1997) but many doxa have no easily identifiable ethical basis or other justification (Stewart 2013).
A widely acknowledged starting point for empowerment processes is an awareness by disadvantaged people of their disadvantaged state. For example, Kabeer (1999) argues that one way of thinking about power is in terms of the ability to define one's goals and-critically-act upon them. In this formulation, awareness is taken for granted, as if people have an objective understanding of their (disempowered) state and the extent of their agency. This is the starting point for Kandiyoti (1988) who suggests that women strategize within a set of specific constraints. Different forms of patriarchy, specific to time and place, present women with distinct 'rules of the game' and call for different strategies to maximize security and optimize life options with varying potential for active or passive resistance in the face of oppression. Bourdieu (1977) claims, however, that an objective understanding of one's state is not fully possible. No one can experience complete autonomy from doxa because our choices are shaped by who we are at a specific historical juncture and in a specific place. We cannot stand entirely outside our historical selves. Thus, our personhood is intrinsically interwoven with doxa. Bourdieu terms this habitus. In relation to women's empowerment, habitus plays an important role in shaping a woman's character, beliefs, preferences and choices, and in framing her conceptualization of what the process towards-and the achieved state of-empowerment might actually look like. The idea of habitus suggests that there are boundaries or limits beyond which a person cannot act or think (Risseeuw 2005).
Nevertheless, wider changes in society can force hitherto unseen and unremarked doxa to become visible. Formerly unquestioned norms become open to being talked about, discussed and challenged. However, since doxa, as noted above, usually serve particular interests, groups which have benefited from doxa may stage a backlash. One reaction by dominant groups is to codify doxa, once it has become visible, in orthodoxy. Orthodoxy imposes an opposition between right and wrong interpretations of doxa and thus aims to limit discourse and challenge. In extreme cases, this process may lead to religious dissenters, for example, being branded heretics (Qadir 2015).
However, the doxic boundary can be patrolled less dramatically through the deployment of descriptive and injunctive norms (Ball Cooper and Fletcher 2012). Descriptive norms refer to beliefs about what constitutes normal practice in a given group (ibid.). For instance, women and men may seek to enact culturally appropriate norms of a 'good wife' and a 'good husband' and to rear their children to take on these roles in adulthood. Co-performance of stereotypical gender roles, in some cases, contributes to upholding a particular social order which both women and men feel it is necessary to maintain, though not necessarily for the same reasons (Rao 2012). Injunctive norms refer to beliefs about what people in a given group should do (Ball Cooper and Fletcher 2012). Compliers with these beliefs may be positively sanctioned, for instance by being praised or accepted by the group whereas noncompliers risk being negatively sanctioned, for example through gossip, isolation and even death (Cislaghi and Heise 2017).
The role of descriptive and injunctive norms and sanctions appears to be to maintain doxa in an unchanged state. However, doxa are in fact not static. Risseeuw (2005) highlights the historical nature of doxa, showing how they are continually reinvented in response to changes in the wider environment. In some cases, doxa may mutate so subtly and incrementally that these processes of change are scarcely perceived, and in time even awareness that doxa have changed is lost. She argues, using a case study of Sri Lanka under colonial rule, that gender relations with regard to property were imperceptibly transformed over time such that concepts of access, control and ownership which would have appeared to one generation as unthinkable came to seem normal or obvious to later generations. However, these changes actually resulted in a considerable undermining of women's entitlements over the space of a century or so. Risseeuw notes that no one actively sought to diminish women's previously strong rights to property; rather, it seemed necessary, in the struggle of elite Sinhalese men against colonial rule, for their wives and daughters to accept the merging of their individual rights to property with those of their spouses. Behaving otherwise would have been seen as weakening the family's interests as a whole. Women, Risseeuw considers, were aware that they were losing their specific rights but they silently acquiesced. They did so because the changes seemingly took place for other reasons, in particular the struggle against colonialism. At the time, these changes were not perceived as creating a transformation of power relations between the sexes which would lead to women becoming disadvantaged in relation to men over time (ibid.).
We take these insights from the literature on norms and doxa into our analysis of our fieldwork data. We show how middle-and low-income women are reconsidering and attempting to reshape their participation in decision-making processes. We discuss how men react, and we examine whether local institutional actors are recognizing how women are changing, and whether this recognition affects women's ability to innovate in wheat.
Method
The GENNOVATE (Enabling Gender Equality in Agricultural and Environmental Innovation) research project examined gender norms and dynamics in wheat innovation processes in twelve communities across India in 2015. The methodology builds on GENNOVATE research protocols (Petesch et al. 2018). Community selection was based on a purposive, maximum diversity sampling approach guided by two criteria considered significant for assessing gender differences in agricultural innovation: (i) gender gaps in resources and capacities and (ii) economic dynamism. Gender gaps are gauged with reference to indicators such as women's leadership, physical mobility status, education levels, access to and control over productive assets, and the ability to market and to benefit from sales of agricultural produce. Economic dynamism is estimated using indicators such as infrastructure development, the integration of local livelihood strategies with markets, labour market opportunities, and resources available to local communities for innovations in agriculture.
To reduce the number of variables under consideration, we examine the strategies of women engaged with wheat agriculture in low and high gender gap communities in the context of high economic dynamism. This provides us with six case studies in four states (Table 2). The names of the communities have been changed.
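As a purely illustrative sketch (the community classifications are those reported below under Research Locations; the filtering logic is our reconstruction for exposition, not part of the study protocol), the sampling frame can be thought of as a filter over the two axes:

```python
# Hypothetical encoding of the two sampling criteria for the six communities.
communities = {
    "Prem":   {"gender_gap": "low",  "economic_dynamism": "high"},
    "Deva":   {"gender_gap": "low",  "economic_dynamism": "high"},
    "Cheeda": {"gender_gap": "low",  "economic_dynamism": "high"},
    "Ganga":  {"gender_gap": "high", "economic_dynamism": "high"},
    "Bete":   {"gender_gap": "high", "economic_dynamism": "high"},
    "Thali":  {"gender_gap": "high", "economic_dynamism": "high"},
}

# Only high-dynamism sites are retained, leaving the gender gap
# (low vs high) as the contrast of interest.
selected = {name: c for name, c in communities.items()
            if c["economic_dynamism"] == "high"}
low_gap = [n for n, c in selected.items() if c["gender_gap"] == "low"]
high_gap = [n for n, c in selected.items() if c["gender_gap"] == "high"]
print(low_gap, high_gap)
# ['Prem', 'Deva', 'Cheeda'] ['Ganga', 'Bete', 'Thali']
```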
The research teams carried out 15 sex-disaggregated data collection activities in each community (Table 3). Three sets of sex-disaggregated focus group discussions (FGDs) were conducted: the first with low-income women and men, the second with middle-income women and men, and the third with young women and men (resulting in six FGDs per community). A further nine semi-structured interviews (SSIs) were conducted in each location: (i) a community profile with men and women key informants to obtain local demographic, social, economic, agricultural, and political information; (ii) innovation pathway interviews with two men and two women known for trying new things in agriculture; and (iii) life story interviews, likewise conducted with two men and two women. In total, 191 men and 212 women participated in the research in the six selected sites.
Table 3 Data collection activities
Well-being FGD (low-income adults aged 25-55): factors shaping socio-economic mobility, poverty trends and their gender dimensions. Includes a 'Ladder of Life' activity, which provides the basis for a discussion on the causal factors of women and men moving in and out of poverty and how they relate to women's and men's decision-making power and participation in innovations.
Gender norms and capacity to innovate FGD (middle-income adults aged 25-55): gender norms in relation to household and agricultural/marketing roles; intra-household bargaining over livelihood portfolios, food security and assets; gender-based violence; women's mobility; social capital.
Innovation pathway SSIs (recognized innovators): individual experiences with agricultural innovation.
Life history SSIs (key informants): life stories of men and women in the community who have moved out of poverty or remained trapped in poverty.
Aspirations of youth FGD (youth aged 16-24): agency of young people in determining their life choices and their participation in innovation processes.

Although all data collection was disaggregated by gender, respondents were not further disaggregated by caste. However, examining respondent names shows that Scheduled Caste (SC) members formed the bulk of respondents in the low-income FGDs. Members of Other Backward Castes (OBC) were distributed between low-income and middle-income FGDs. General Caste (GC) respondents joined middle-income FGDs in all cases. In a few cases, Muslims and Sikhs joined the FGDs, but in low numbers. The majority of respondents were Hindu. The data were gathered in standardized formats, cleaned, and systematically coded using NVivo social science software.
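Purely as an illustration of what systematic coding of this kind involves (the study itself used NVivo; the excerpts, code names, and trigger phrases below are hypothetical), a minimal sketch of keyword-based coding and tallying by gender-gap category:

```python
from collections import defaultdict

# Hypothetical excerpts: (community, gender-gap category, respondent group, text).
excerpts = [
    ("Cheeda", "low gap", "middle-income women",
     "If we hear of a new wheat seed variety we advise our husbands."),
    ("Thali", "high gap", "middle-income women",
     "Even if the women want to learn, their husbands do not allow them."),
]

# Hypothetical codebook mapping analytic codes to trigger phrases.
codebook = {
    "advice_to_spouse": ["advise", "we tell our husbands"],
    "male_gatekeeping": ["do not allow", "not allowed"],
}

def code_excerpt(text):
    """Return every code whose trigger phrase occurs in the excerpt."""
    lowered = text.lower()
    return [code for code, triggers in codebook.items()
            if any(trigger in lowered for trigger in triggers)]

# Tally coded excerpts by gender-gap category for cross-site comparison.
tally = defaultdict(lambda: defaultdict(int))
for community, gap, group, text in excerpts:
    for code in code_excerpt(text):
        tally[gap][code] += 1

for gap, counts in sorted(tally.items()):
    print(gap, dict(counts))
# high gap {'male_gatekeeping': 1}
# low gap {'advice_to_spouse': 1}
```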
Research Locations
Prem in Bihar, and Deva and Cheeda in Uttar Pradesh, were classified as having low gender gaps. Ganga in Bihar, Bete in Punjab, and Thali in Haryana were classified as having high gender gaps. All low gender gap and high economic dynamism sites experience strong male participation, regardless of caste, in off-farm employment, and short- to long-term male outmigration. Conversely, in high gender gap and high economic dynamism sites, men, particularly in the middle-income (GC, some OBC) brackets, are primarily farmers, and male outmigration, regardless of caste, is low. We now introduce each site in more detail.
Low Gender Gaps and High Economic Dynamism
Prem has a population of 2400. Hindus form 60% and Muslims 40%. The OBC Yadav caste, 25%, is the most prosperous. SC constitute 26% of the population. Tobacco, maize, and potato are grown on the uplands and wheat, paddy, and mustard on the lowland. There are pre- to middle schools but no high school. Uniquely in the data set, men set up home independently from their parents upon marriage, meaning that almost all households are nuclear. Around 80% of men are engaged in local and migrant off-farm work.

Deva has 1700 residents. It is dominated by upper caste Chandel Thakurs, Brahmins, and Bhumihars (66%). Yadavs (OBC) form 20%, and SC 13%. Every household derives part, or all, of their livelihood from farming, growing wheat, paddy, maize, sugarcane, and vegetables. The average landholding is one acre, though 10% hold more than 12 acres; 85% of holdings are irrigated. Many men work in local and migrant off-farm occupations.
Cheeda has 2500 residents. Dominant GC castes include Kurmi (30%) and Baniya (30%) and the remainder are SC (22%), OBC (19%), and Muslims (1%). Around 75% of households depend directly on agriculture. The remainder work as hired labour, run a business, or have salaried employment. Local markets are strong. Considerable development in infrastructure, irrigation, and educational facilities has taken place over the past decade. The average land holding is 0.75 acre, the smallest 0.25 and largest 10 acres. Large farmers represent 1% of the population. Male outmigration is around 50%.
High Gender Gaps and High Economic Dynamism
Ganga has a population of 2000. It is dominated by Yadavs (OBCs) and Bhumihars (GC) (relative percentages not captured). OBC are also found in the low-income category. No SC are present. Wheat, paddy, and vegetables are grown, and farming is rainfed. The government has invested heavily in road infrastructure and schooling up to lower secondary.
Bete is home to around 2800. The GC are Jat (41%) whilst 59% are SC. Bete has received considerable infrastructure development with most people benefiting from electrification and piped water as well as new roads, health, and educational facilities, a bus line and banks. More than half depend on agriculture with wheat and paddy being the most important crops.
Thali has a population of 3500. It is home to two religious groups, Hindus and Sikhs, and many castes, including Rohr and Brahmin Pandit (GC), and Kurmi, Badhi, and Kumhaar (OBC); Valmiki and other SCs comprise 29% of the population. Whilst a few GC farmers manage 10 acres, small and marginal farmers predominate, and 30% of residents, all of whom are SC, are landless. Almost every farmer has a tube well enabling irrigation. Roads are good and most farmers carry produce to market on tractors.
Findings
The findings are presented in two subsections: low and high gender gap communities. In each case, the findings are subdivided by income: middle-income and low-income. Data on young women, and with respect to local institutional actors, are also presented in each subsection.
Middle-Income Women
Across the three low gender gap communities, middle-income (OBC and GC) women are increasingly recognized as wheat farmers. This is very different to the scenario painted by Rao (2012) in the introduction where despite women's obvious participation in wheat, neither men nor women acknowledged this. Women respondents in low gender gap communities in this study frequently remarked that "there is not much difference in the way a man works and the way a woman works". Men commented, "women should be allowed to go out and work because this will increase our income and benefit the family".
Widespread male acceptance of women working in the fields is accompanied by recognition of women as decision-makers. In Cheeda, men explained that "50% of the women in our village do farming themselves or hire labour to do it. They save on both time and labour costs". Furthermore, some women are explicitly being recognized by their spouses as innovators. In Prem, middle-income men explained that "women take risks depending upon how much land and money they have", that "women innovators have the same characteristics as male innovators", and that "women are farmers here and use machines for cultivation". Women select improved wheat varieties and inputs such as inorganic fertilizers. Increasingly, though social norms still prescribe that men should 'not allow their women to use machines', in reality some women use them directly or employ drivers. In Deva, according to community key informants 80% of farming households use zero tillage. Women remarked, "Women can easily employ labour on their land and get the work done by zero till planters whilst their husbands are away". Since the owners of machinery have a vested interest in renting machinery to women, they teach women how to use them and thus women's technical knowledge is improving.
In Cheeda, a women wheat innovator, Meena, explained how she now takes joint decisions with her migrant husband. Immediately after he left, she had to do most of the work on the farm. This was time-consuming and "we were always under pressure. Our production volumes were low and despite putting our best efforts into the crop we did not manage to get enough for our own consumption. When the village head tried to convince us that by using the zero tiller we would increase our yield we were excited to try it out". Now, she says, "we should be machine-friendly since they are made for our benefit. They have enabled me to spend more time at home with my family and I also get time to put up my feet for rest".
Further evidence for an increase in middle-income women's managerial capacity is provided by data on changes in women's decision-making over the past ten years. In all three communities, participants claimed their decision-making capacity had doubled, or more, over the past decade, and that their sense of self-worth and dignity has increased. Whilst some of this is undoubtedly life-cycle related, women were anxious to discuss other factors. For instance, in Prem, many women stated that since men were never around to take decisions women had to take responsibility. "What decisions can men take when they are not home? They all live elsewhere earning money". Another woman added, "Earlier men made all decisions but today women go out for everything. We are called for meetings and to children's schools. We buy groceries and everything". In Deva, study participants' felt levels of decision-making capacity were lower though still fairly strong. One woman commented, "we don't have full rights to take all decisions but we can take a few".
In Cheeda, middle-income women argued their decision-making power has increased greatly. They asserted they do not take 'final' decisions, but that they 'confidently' advise their husbands. "If we hear of something new we advise our husbands", and "we tell them that we have heard of this new wheat seed variety and that they should buy it", and "if we hear of a new fertilizer we tell our husbands". Perhaps as a consequence of the subtlety of advice, some men remaining in the community felt that women "are not innovators" and that "women always follow men". When men are absent, however, women exert decision-making more openly. A recognized wheat innovator explained that "now my husband works away. For all practical purposes I take decisions for my family but I always consult him on phone first". Women explained that wives must take decisions in their husband's absence: "women enjoy decision-making rights which previously belonged only to their husbands".
Overall, middle-income women in low gender gap communities attributed their improved decision-making power directly to male outmigration and to education. Whereas women's decision-making in the absence of men was seen as a practical necessity (which also brought economic benefits to the family), education brought more intangible and synergetic gains. One older woman explained. "We started going to school, we gained more experience, our thinking widened and we became wiser".
Young Women
This increase in women's agency is inter-generational. In all three low gender gap communities, young women feel strongly empowered and experience good mobility (including young married women). "Today girls go to school and learn so many new things that it gives us the courage and confidence to speak up." Young women in Cheeda were very clear. "When our parents decided to send us to school, they decided to empower us". Education taught them to "speak up for ourselves and we do", and so "our parents listen to us". One young woman summed up the situation: "education has brought about a revolutionary change. We are wiser and more capable". Intriguing feedback loops between younger and older women have emerged.
Low-Income Women
There is no evidence of SC women in low or high gender gap communities retiring to the household should their economic situation improve. This is not part of their caste identity (Rao 2012). This allows SC women to innovate should a favourable constellation of circumstances facilitate expression of their agency. In Prem, for example, an illiterate and previously poor SC woman, Ananya, is now recognized in her own and surrounding communities as a successful and wealthy wheat innovator. She began when her husband out-migrated after ceding control over the land to her. Together with her children, she worked hard and began to visit the local demonstration farm. One low-income man, commenting on her success, said that it is now clear that women can do everything from growing their own crop to selling it at the marketplace. In Ananya's case, the dedicated support of wheat researchers at a nearby research station was important to her success.
Women in the very few low-income households in low gender gap communities which manage to hire agricultural machinery praised these technologies. They considered that men, and the household, benefit. "Earlier all male members had to work together to get our land ready for sowing. Not anymore. Now only one family member goes to supervise the work done by the rotovator driver". Poor men are freed to work on various off-farm activities, with brick kilns being an important source of employment in several locations. Poor women also appreciate being released from arduous agricultural work. In cases (such as the economically dynamic communities we studied) where reasonable income generation alternatives to hired labour in farming exist, poor families with land who can hire machinery seem able to benefit.
Locally Important Institutions
In low gender gap communities, middle-income women's engagement with the RAS and research organizations was weak, although there were some links, as noted above, for individual women. In Prem, a few women engage with extension agents, and a research organization provided individualized support to some women innovators. Women and men in Cheeda and Deva agree that women do not learn directly from extension agents and research organizations but depend on their families and friends for help and information because, according to men, "women cannot go hunting experts for help".
Village heads are decisive for determining the levels of middle-income (GC and OBC) women's inclusion or exclusion from village level meetings. In each of the three communities, the village head has mechanized his agricultural operations and typically rents out his machinery. He often owns the only zero till planter in the community. The village head in Cheeda has formalized women's inclusion by inviting them to weekly meetings called baithaks. These are conducted at his residence and allow farmers to share information and have their queries answered. In Prem, women feel they are given pride of place at community meetings. In particular being given chairs raises their self-esteem. The fact that renting machinery to women is financially beneficial to the village head or large farmer significantly contributes to the willingness of these men to include women.
Large farmers can be an important source of information for low-income farmers. "Poor farmers go to bigger farmers for help and information" according to women. In all low gender gap communities, where women's mobility is relatively high, shopkeepers selling inputs emerged as a critical source of information for low-and middle-income women (as well as men).
Middle-Income Women
In high gender gap communities, the situation for women is very different. Whereas the combination of economic dynamism and mechanization in low gender gap communities is increasingly allowing women to participate actively as decision-makers in wheat farming, the same combination in high gender gap communities is pushing middle-income OBC and GC women farmers who previously provided their labour for wheat production back into the household. The difference between the two types of community is that male outmigration in the high gender gap communities is very limited. As a consequence, men are present in the community and in the home and are strongly identified as 'the' farmers. Social norms which for generations have frowned upon OBC and GC women working in the field are now being implemented, resulting in women leaving fieldwork. The whole process appears to be reinforcing male control over wheat as well as women, particularly in the GC and OBC caste categories.
Across all three high gender gap communities, GC and OBC men agreed that "women cannot be innovators". One commented, "women don't have any brains and so they can't make suggestions". Men everywhere saw themselves as breadwinners with the capacity to learn about innovations and also the necessity of doing so. They did not see any complementarities between women's and men's roles in agriculture. Men asked, "Why should a woman bother about these things when her husband is there to decide on such matters?" They argued that "women's domain is their home", and "only when men are not around do their wives go to work in the fields". Men expressed pride in providing for their families and want their wives to remain at home cooking, cleaning, fetching firewood, and taking care of children and elderly family members. "We are there to take care of the needs of our homes and our women need not step into our shoes." In sharp contrast, women expressed frustration with their exclusion. In Bete, a woman highlighted the power of norms governing intra-household decision-making and gender roles. "A man would rather die than let his wife step into the male domain". In Thali, women explained. "Men know all about innovations. Women are not allowed to learn." Another said, "Even if the women want to learn and to work, their husbands do not allow them", and another woman added, "my husband would kill me if I said I want to work". There are some changes, though. Some middle-income women in Thali assert an increase in empowerment over the past decade because now they can contribute to discussions in general, though independent decision-making is mainly restricted to decisions about what to cook and clothes for children. "Now women are more aware and consider themselves more capable", whereas "ten years ago we did not even go in front of the elders in our family" and "I don't remember if I took any decisions. Now at least I speak my mind".
In Ganga, women explained they do not take any decision around agricultural innovation, but nevertheless, they quietly support husbands, sometimes by giving suggestions, or through selling their personal items such as jewellery to enable rental or purchase of machinery. More broadly, some women who no longer work in the fields and experience seclusion, felt that due to education, postponement of marriage, and their increasing participation in home-based businesses, they were becoming more equal to men.
Women from successful wheat innovator families outlined benefits of mechanization: their husbands now have enough time to take up other paid work opportunities, thus increasing household income, and as fathers they spend more time with the family. A few women have started working from home as tailors and beauticians, and others feel relaxed because "now we can complete our household chores at a more relaxed pace".
Young Women
A young woman summed up life rather bleakly. "We study, stay home, get married". In common with low gender gap communities, adult girls are supporting their mothers to speak more freely. Women in Bete, a high gender gap community, explained, "Our daughters have grown up and they tell us that we must speak up for ourselves and participate in all decision-making processes". Even so, the data from young women suggests they experience low agency over their lives.
Low-Income Women
Some low-income households in all three high gender gap communities participate in agricultural innovation processes. Motivated by observing the successful use of technology by richer farmers, some poor households are taking out leases on land, renting machinery, and purchasing improved wheat seeds. When they succeed, poor women feel "able to relax". In Bete, Saanvi, a widow, decided to innovate despite her lack of formal education. Saanvi initially drove the tractor, just as her husband had done, to prepare her field. She then shifted to the zero till planter. She explains that her innovative practice was fully supported by a local farmers' organization. Today, she only needs to supervise work on her farm and "is not hard pressed for time any more. I can pack in all my day's work easily now and still find time to sit and chat with my children". Kyra, a married woman with a migrant husband, likewise from Bete, explains, "I am a woman managing 5.5 acres of land on behalf of my father, and growing my own crop. I use machines. I am not afraid of machines". Kyra stressed that "there should be training sessions where women can be educated about the benefits of machinery". Finally, in Ganga, a woman innovator was strongly supported by the young, educated village head. Talking about the changes this has brought to her life, she indicated that her family were now eating and living well and, "I take all the important decisions in my family and command more respect now. The big change is that earlier I was a labourer whereas now I am a farmer".
Life has changed for these previously poor women who, either alone or together with husbands, have been able to invest in innovations. They now participate much more actively in intra-household decisions, particularly as they become wealthier and move above the locally assessed poverty line. It is important to highlight that these women were outliers in our study. However, their experiences and trajectories show the potential of poor women from marginalized castes to succeed. The key ingredients were the will of the women themselves to succeed, coupled with institutional support from wheat researchers, agricultural organizations, and village heads. This support is highly individualized and does not represent any form of structural transformation at present.
The findings are clear that the majority of poor households continue with hand tools, and poor women in high gender gap contexts generally face the same desperate situation as their peers in low gender gap communities. They have lost work as labourers to a greater extent than men, and find it harder to get work. Some alternative income generation opportunities for poor women exist in factories, brickmaking and through the Mahatma Gandhi National Rural Employment Guarantee Programme.
Locally Important Institutions
In high gender gap communities, women, regardless of caste status, are usually excluded by the RAS: "Extension agents never approach us". A woman said simply, "We cannot innovate or adopt an innovation if we are not aware of it". Women explained that even if they were to be invited to meetings, they would have to stay under veils and remain silent, which would not help them in getting their queries addressed. A middle-income man explained, "Women are the pride of our families and they cannot go hobnobbing with others".
Only in Bete, a high gender gap community, were SC generally becoming wealthier. This was also the only community where farmers were leasing land rather than sharecropping, which facilitated wealth accumulation. All farmers have a tube well and all grow wheat. SC alongside GC farmers were improving agricultural productivity, and many SC, including women, were obtaining higher wages by working off-farm in a local factory. The Bete Agricultural Cooperative (BAC) was pivotal in this transformative effect. No farmers, regardless of income level, purchased machinery because the BAC rented machinery to them at a fixed, reasonable rate. Poorer farmers come together to pay rental fees. BAC also provides extension services and credit. Women, who due to caste dictates rarely work in the field but have some decision-making capacity, claimed they could obtain credit on fair terms from BAC. BAC conducts regular meetings to which all farmers, regardless of caste or gender, are invited. The village head additionally helps everyone, regardless of caste or income status, with agricultural advice.
Similarly to low gender gap communities, low-income farmers find large farmers to be a useful source of advice.
Discussion
Two hypotheses guided the analysis of research findings. First, we hypothesized that due to strong cultural norms which construct men as farmers and which privilege male agency, the transitioning of women from labourers in wheat to innovators and managers of wheat is likely to be far from straightforward. Second, we hypothesized that women are actively managing the processes unleashed by various sources of change. We used the concept of doxa to help us focus our analytic lens upon the strategies women use to assume some degree of control.
We now present Fig. 2, 'A typology of women's strategies to strengthen their managerial decision-making power in wheat', to pull out and systematize the findings. We find that there is a graduated typology of six strategies. The right-hand side of the figure sets out the typology of women's strategies to participate in decision-making in relation to wheat. (It is important to note that our data do not allow us to categorically associate specific castes with a particular strategy. However, caste status or any other marker, such as ethnicity, age, sexuality, and so on, could be added to the typology through adding a new column to the right.) The left-hand side locates external agencies in relation to their support, or otherwise, of each strategy. Returning to the Conceptual Framework, we argued that institutional actors are among those which patrol the doxic boundary and mobilize social norms to either support or discourage challenge to doxa. Figure 2 places the six strategies we have identified within three broad domains (A, B, and C) which represent different relationships to doxa. As a reminder, Bourdieu (1977) argues that doxa represents unspoken tradition. Because it is unspoken, 'it is the most powerful rule of all' (ibid.). Domain A thus represents the doxa that men are sole decision-makers in relation to wheat. However, since our findings suggest that this doxa is being challenged, the lightening of the colour shows how it is weakening as women develop increasingly assertive strategies. Domain B indicates a growing awareness by women that this doxa exists. Doxa is becoming part of opinion but not yet open discourse or challenge. The top domain, Domain C, is the domain of open discourse and challenge. For analytic clarity, women's strategies are discussed separately here, though in reality they are messier.
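For readers who find it helpful, the graduated structure of the typology (detailed strategy by strategy below) can be summarized in a minimal sketch; the one-line glosses are simplified from Fig. 2 and the discussion that follows, and the encoding itself is ours rather than part of the original figure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Strategy:
    number: int           # 0 = zero point (no strategy) through 6
    name: str
    domain: str           # 'A' doxa intact; 'B' doxa entering opinion; 'C' open challenge
    relation_to_doxa: str

# Simplified encoding of the typology; glosses are ours.
TYPOLOGY = [
    Strategy(0, "Non-awareness", "A", "doxa taken as part of the natural order"),
    Strategy(1, "Acquiescence", "B", "doxa recognized but met with silence"),
    Strategy(2, "Murmuring", "B", "discontent voiced indirectly"),
    Strategy(3, "Quiet co-performance", "B", "male decision-making supported and quietly steered"),
    Strategy(4, "Active consultation", "C", "women recognized as farmers with consultation rights"),
    Strategy(5, "Men 'decide', women manage/control", "C", "formal male decision over options pre-set by women"),
    Strategy(6, "Women decide", "C", "women take all farming and innovation decisions"),
]

# The ordering is graduated: a higher number implies a weaker hold of the doxa
# that men are sole decision-makers in wheat.
def domain_of(number: int) -> str:
    return next(s.domain for s in TYPOLOGY if s.number == number)

assert domain_of(0) == "A" and domain_of(3) == "B" and domain_of(6) == "C"
```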
Strategy 0: Non-awareness
The doxa that women have no role to play in agricultural decision-making is held by the majority of upper caste men respondents in the high gender gap communities. There is no evidence that women are aware of this doxa; rather, they take it as part of the natural order. An upper caste woman in a high gender gap community crystallized the reality of this form of doxa by reflecting, 'I don't remember if I had an opinion at that time.' Sen (1985) describes the lack of correlation between objective measurements of deprivation and subjective awareness of it. In relation to women's lower access (compared to men) to sufficient food and nutrition, a woman 'may be resigned to her state, or have internalized cultural norms that allot her an impaired status. When she is judged by the metric of desire or happiness fulfilment, therefore, she may seem to be doing quite well although she is physically quite deprived'. Such a lack of self-awareness embodies how doxa harmful to women can operate below conscious awareness.
Strategy 1: Acquiescence
In high gender gap communities, middle-income upper caste women and men refer to norms which were once subliminal, but which are now emerging into opinion, although these norms are not yet openly discussed and contested. The weakest form of resistance to doxa is women's silence expressed as acquiescence. Risseeuw (2005) calls this the line of separation between the 'most radical forms of misrecognition' and 'the awakening of political consciousness'. Many women we met in the study recognized that they were not expected to take part in agricultural decision-making, but did not articulate any form of resistance.
Risseeuw (ibid.) argues that the potential to operate without formulated conflict must be incorporated into theory. The less powerful must be accorded a sense of awareness of their disadvantaged position. Even if this entails non-articulation or indirect forms of resistance, it should not be equated with ignorance. She adds that the relationship between the powerful and the less powerful is necessarily dynamic. It is not only women who change. Power relations have a dual effect, forcing those whose basis of power is increasing to transform themselves in relation to the 'other' whom they come to believe they must rule, oppress, protect or guide (ibid.). Middle-income men in the high gender gap communities express all of these beliefs. Injunctive norms imposed by men are clear: "we are there to take care of our homes" and "women need not step into our shoes". Such injunctive norms attempt to squeeze that which 'goes without saying and that which cannot be said' back into doxa. Men demonstrated that they understand the function of injunctive norms to be to quell women's discontent, and if it is too difficult to ignore, to diminish it: "women don't have brains".
Strategy 2: Murmuring
A stronger form of resistance can be termed 'murmuring', here defined as a rumble of discontent or grumbling. Murmuring is expressed in ironic remarks by middle-income women in high gender gap communities reflecting upon their inability to learn directly about innovations and to participate in decision-making. They are fully aware of the power of injunctive norms: "he will kill me". Ahluwahlia (1997) provides a similar example from Rajasthan, where the workload of women increased by up to three hours a day following the enclosure of the village commons at the instigation of a development agency. Women did not participate in the decision to enclose the commons but bore the costs in terms of more work obtaining water, fodder, and fuel. Although they complained of overwork and fatigue, none openly questioned decisions taken by men. Ahluwahlia concludes, "This is not surprising as the institutions which subordinate women, and repress their desires, speech, ideas and even emotions, remain very strong" (ibid., p. 33).
Strategy 3: Quiet Co-performance
A step change occurs with middle-income women, particularly OBC women in high gender gap communities, beginning to support men's ability to innovate, for example by financing the purchase of machinery through selling jewellery or obtaining credit (as well as through carefully nuanced 'suggestions' or 'advice'). The 'right' of men to be sole decision-makers in wheat is not questioned; rather, the evidence suggests that women are openly, but 'quietly', supporting men in ways which do not challenge social norms. Rather than interpret this as a form of subjugation, it appears that such women are actively deploying male agency to support their individual and household-level goals. Middle-income women, enabled through mechanization to retire from the field, see an opportunity to increase their own well-being, expressed as having time to spend on themselves and their children, as well as on developing their own businesses. Mechanization also frees men to engage in off-farm work, thus increasing overall household income. Rao (2012) suggests that women and men 'co-perform' to jointly construct women as 'housewives' (even if they actually work outside the home) and men as 'providers'. Given the impossibility of challenging the entire context within which they live, Rao argues, women deploy their agency by seeking reciprocity for their contributions from their spouses. Women, 'by quietly serving their men', improve their own individual position. She concludes, though, that when women's agency is expressed like this it is incremental and individualistic. It does not change the meaning of gender in long-lasting or transformative ways (ibid.).
Domain C: Towards Transformation of Doxa
The strategies described in Domains A and B are deployed primarily by middle-income GC and OBC women in high gender gap communities. OBC women in low gender gap communities dominate the strategies identified in Domain C. SC women use all three strategies below, but due to their overall low participation in wheat innovation processes, our data on, and thus understanding of, their strategies is weaker. Nevertheless, due to their caste identity, which permits mobility and work in the fields, SC women appear to adopt Strategy 6 more than OBC women.
Strategy 4: Active Consultation
Women are recognized as wheat farmers. Men overtly recognize, and state, that women are-alongside men-wheat farmers: "there is not much difference in the way a man works and the way a woman works". This is also an expression of male agency, because redefining 'who does what' frees men to engage in off-farm work. Men remain key decision-makers in wheat but women seek, and obtain, consultation rights in their role as farmers.
Strategy 5: Men 'Decide': Women Manage/Control
In all three low gender gap communities, women farmers in all income categories are increasingly 'consulting' men with regard to innovation practices whilst simultaneously managing the flow of information. It is here that women's skills in promoting their gender interests-whilst maintaining the doxa that men are ultimate decision-makers-are particularly evident. Most women stressed that they phone their husbands regarding decisions to be taken regarding innovations. He is expected to take the final decision. However, women equally made it clear that 'consultation' involves presenting absent men with specific options to consider, such as a new variety of wheat seed. Women are therefore pre-determining the range of decisions their spouses can make. In this sense, consultation appears formulaic. The doxa norm that men take decisions is openly respected, but sometimes appears to be almost entirely hollowed out. Women steer discussion to the outcomes they require in order to farm and innovate as they choose.
Strategy 6: Women Decide
Finally, in low gender gap communities, some women, particularly SC, take all decisions in relation to farming and innovation. Whilst few, such women are not punished through the imposition of negative injunctive norms. It is possible that such women have not yet reached a critical mass and so can be respected, even admired. These women may be offering migrant men the security of knowing their wives can manage to run the farm in their absence, thus reducing livelihood uncertainties. Furthermore, work in agriculture is part of the caste identity of SC women, and thus, overt decision-making may seem less transgressive.
Local Institutions Often Remain Harmfully Locked in Doxa
The evidence presented here shows that women, regardless of caste identity and socio-economic class, have developed a range of strategies to insert themselves into innovation processes and seek inclusion. However, the study finds that the RAS, as a category of institutional actor (including government, research, and private sector actors), make few efforts to include women, regardless of income band or caste status, in training events and information dissemination. All RAS in our study sites completely ignore SC with marginal lands.
This said, we recognize that individual extension officers sometimes devote considerable time to helping women and SC more broadly. Research organizations and village heads may mobilize such support from the RAS as well, in order to assist women innovators. However, there is no evidence of institutional support from the RAS despite women's self-evident work in the fields, interest in machinery and innovation processes more broadly, and despite the high rate of male outmigration in some locations, which makes supporting women in wheat a necessity. Through their activities, RAS enact a form of orthodoxy by continually patrolling and maintaining the boundaries of an outdated doxa that is increasingly recognized by farmers themselves as no longer fit for purpose, although not always openly articulated as such. By way of contrast, progressive village heads and farmer cooperatives like the BAC mentioned above can be decisive in facilitating the participation of middle-income women in wheat innovations. In all cases, though, institutional support like this is ad hoc. Furthermore, village heads almost never interact with SC and other low-income farmers in relation to wheat. This is despite evidence that poor women and men are fighting to participate, by raising money for investment in new technologies, observing innovative farmers, and so on. Inclusion is particularly important for poor women because it is considerably harder for them than for their husbands to out-migrate or exploit other livelihood options. These people really are 'clinging on'.
Conclusion
It is not clear where processes transforming doxa in low gender gap communities will end. It is possible that the strategy of women consulting men may represent a satisfactory solution, whereby a hollowed-out doxa of men as agricultural decision-makers is acknowledged, but women are enabled to act effectively as decision-makers in all but name. If this occurs, it will presumably be supported by the emergence of new descriptive norms acceptable across (large sections of) whole communities, which recognize women as wheat farmers. In such a case, injunctive norms would develop which reward women as innovators and managers (as they are beginning to do in outlier cases of recognized women innovators) and potentially even penalize men who insist upon retaining close control even in their absence. If this situation were to come about, women's capacity to act and participate in innovation processes would be formalized as normative.
Strategies 1-4 seem to conform to Kandiyoti's (1988) analysis that women's strategies play out in the context of patriarchal bargains. These are implicit scripts, which define, limit, and inflect women's options (ibid., p. 285). However, we have identified evidence that new strategies (5 and 6) are emerging whereby women are actively transforming the content of patriarchy. We consider that the concept of patriarchal bargains does not allow for the willingness of men, as much as of women, to consider partial or complete transformation of the patriarchal bargain in relation to wheat. In low gender gap communities experiencing high outmigration, men benefit from their freedom from doxa that expect them to be primary farmers, agricultural decision-makers, and breadwinners. They are coming to realize, and openly acknowledge, that asserting male primacy in agricultural decision-making does not make sense when they are not physically present.
The findings indicate that education is an important transformative force upon doxa, though male outmigration has further facilitated this transformation by creating spaces wherein women must act if farming is to survive as a viable livelihood option. Women in both high and low gender gap communities agree that education allows them to think differently about themselves and their capacities. Nussbaum (2001, pp. 62-63) considers women may feel 'conditioned satisfaction' with their (harsh) lives. However, she argues, such women may have only been able to pursue one kind of life. Had they been offered an education, say, a whole array of different options may have enabled them to live quite differently. The findings indeed show that women across all communities are perceiving different kinds of lives and taking steps to realize these options. However, the relative strength of doxa continues to shape the range of options a woman may pursue in her specific context. That is to say, there is a mismatch between what she may perceive as 'the widest sense of limits' (ibid.) of what she could do, and what she knows she will actually be able to achieve. This mismatch is particularly clear in the high gender gap communities.
It is clear that addressing the reality of women wheat farmers as managers, leaders, and innovators in wheat farming is a significant challenge for RAS and some village heads in low and particularly high gender gap communities. Some of these institutions, through strict observance of an outdated perception of doxa in their daily practice, constitute a powerful brake upon what women and men are able 'to imagine, to wonder and … to know' (Nussbaum and Sen 1993, pp. 1-2). Institutional marginalization of women, and of scheduled castes, poses a serious constraint on the attempts of women and men wheat farmers to realize their aspirations and livelihoods.
Exploring partnerships with private sector players is potentially one way out of this impasse. Our evidence is limited, but what we do have indicates that input sellers at least are already recognizing and profiting from women's increased decision-making power in low gender gap communities. Supporting village heads to redefine doxa at community level is important because this helps provide a seal of legitimacy. Working with women's organizations and networks is critical, because empowerment is a process of discovery by its participants and cannot be imposed. Agricultural research and development actors must engage seriously with the reality of women in wheat in high and low gender gap communities. Part of this will entail recognizing the ways in which they have hitherto unquestioningly supported the doxa of men as decision-makers in wheat. Moving forward entails developing, with women and men, norms which acknowledge women as workers and decision-makers in wheat.
We conclude by returning to our words in the Introduction. It is our hope that researchers trying to understand empowerment dynamics and how to think about women's agency will find our typology useful and experiment further with it. Other development partners, on the basis of our framework, should be able to more readily perceive that women develop a range of strategies that they carefully nuance, and which they own. Such strategies can be sensitively supported but they must be left in the hands of the women themselves to manage.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-06-11T09:04:37.410Z | 2020-06-05T00:00:00.000 | {
"year": 2020,
"sha1": "bf235e1494f562ccfae74527fe011b94c5e41e86",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1057/s41287-020-00281-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5b910cc1d94fe1bd73a21b6fa727ecd8f4762a9f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Sociology"
]
} |
134418248 | pes2o/s2orc | v3-fos-license | The Influence of Land Intensive Use and Urbanization to Air Pollution: Evidence from China
Urbanization has a significant impact on environmental pollution, but the relationship between the two is complex. This paper separately discusses the relationships of population urbanization and land urbanization to air pollution, and empirically analyzes the impact of the urbanization level and of land-intensive utilization on air pollution. Panel data for 35 provincial capitals and sub-provincial cities in China from 2003 to 2015 are selected, and index systems are constructed to evaluate the levels of urbanization, land urbanization, and land-intensive use. After testing the panel data for heteroscedasticity between panels, autocorrelation within groups, and contemporaneous correlation across panels, an econometric model is established and a feasible generalized least squares (FGLS) method is chosen to obtain consistent and efficient estimates of the effects of urbanization, land urbanization, and land-intensive utilization on air pollution. The paper closes with related questions and policy recommendations.
Literature review
The study of the relationship between environmental pollution and urbanization mainly includes two aspects. First, drawing on methods used to relate socio-economic development and environmental pollution, the environmental Kuznets curve (EKC) is tested using the level of urbanization and environmental pollution. D. F. Huang (2011) used panel data for 29 regions of China from 1999 to 2008 to verify an inverted N-type relationship between industrial wastewater and urbanization, an N-type relationship between industrial sulfur dioxide and urbanization, and a U-type relationship between industrial dust and urbanization [3]. Second, researchers have studied the relationship between environmental pollution and factors in the urbanization process such as industrial structural transformation, political incentives [4], and local government fiscal decentralization [5]. However, there are few empirical studies on the joint effect of urbanization and land-intensive utilization on environmental pollution. Current research methods are generally based on econometric models, using cross-sectional and especially panel data, to analyze the relationship between environmental pollution and urbanization. Studies now beginning to use panel data face new problems: Stern (1996) noted that econometric models used to validate the environmental Kuznets curve readily produce heteroscedasticity problems [6]. This paper uses city-level data to study the relationship between comprehensive urbanization and air pollution.
Theoretical analysis
The core feature of urbanization is that population gathers from the countryside into cities. On the one hand, as the urban population grows and lifestyles change, urban residents' requirements for living space and environmental quality increase. On the other hand, the gathering urban population enriches the city's human resources, and the division of labor can concentrate technical expertise on fighting air pollution, resulting in improved air quality. The expansion of urban construction land shows a trend of spatial expansion, characterized by intensive use within the urban built-up area and the continuous extension of urban construction areas. China's urbanization process and the vicious urbanization competition between cities have led to disorderly urban expansion. Construction dust, vehicle exhaust, and solid waste can result in deteriorating air quality. Finally, current research ignores that the transformation of urban spatial structure and the rational economic layout of cities are mutually reinforcing processes. To gain advantage in competition, local governments generally choose an attitude of positive competition, negative cooperation, and a "race to the bottom", which may lead to negative environmental impacts and unreasonable layouts.
Benchmark model
Referring to previous research results, the relationship between urban environmental pollution and urbanization is specified as the benchmark model:
En_it = α + β·urban_it + δ·land_it + θ′X_it + a_i + u_it  (1)
In the benchmark model, En_it represents the environmental pollution of city i in year t; α is a constant; X_it are control variables with coefficient vector θ; urban_it represents the urbanization level of city i in year t; land_it stands for the land-intensive use level of city i in year t; a_i captures the unobserved effects of each city; and u_it is the random error term. α, β, and δ are the parameters to be estimated for each variable.
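To make the benchmark specification concrete, the following is a minimal estimation sketch in Python using the linearmodels package; the file name and column labels are illustrative assumptions rather than the paper's actual data layout, and the FGLS procedure actually used for the reported results is discussed below.

```python
# Minimal sketch of the benchmark panel specification (Eq. 1), assuming a
# long-format DataFrame with one row per city-year observation. Column
# names here are illustrative placeholders, not the paper's variable labels.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("city_panel_2003_2015.csv")           # hypothetical input file
df = df.set_index(["city", "year"])                    # entity-time MultiIndex

dependent = df["so2_per_km2"]                          # En_it: air pollution
exog = df[["urban", "land", "ind2", "pgdp", "pgdp2"]]  # urban_it, land_it, X_it

# Entity effects absorb the unobserved city effect a_i.
model = PanelOLS(dependent, exog, entity_effects=True)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res)
```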
Data source
China's provincial capitals and sub-provincial cities are selected as the research objects. Because of missing data, Lhasa is excluded, and the results of this paper therefore do not cover Lhasa. Data are obtained from the "Chinese Statistical Yearbook", "China Population and Employment Statistics Yearbook", "China Urban Construction Statistics Yearbook", "China City Construction Statistical Yearbook", and the China Economics Information Network Database for 2003 to 2015.
The control variables include the proportion of value added by the secondary industry, per capita real GDP, and the square of per capita real GDP. Per capita real GDP removes the price-change factor from the "current price GDP" reported in the Chinese City Statistical Yearbook by converting to constant prices with 2000 as the base year.
The urban air pollution indices selected in this paper are total industrial sulfur dioxide emissions and sulfur dioxide emissions per square kilometer. The core independent variables are the urbanization level and land-intensive use. Since the main research object is urban environmental quality, urbanization is examined in two ways: population urbanization and land urbanization.
Index system and Index weights
The measurements of urbanization generally include the population proportion method, coefficient adjustment method, rural urbanization index method, urban land use index method, and modern urbanization index method [7]. In current studies, however, the proportion of urban population is generally used to measure the urbanization rate. It is necessary to establish an index system to calculate the urbanization level from multiple angles [8]. Therefore, this paper establishes index systems to comprehensively assess population and land urbanization.
From the perspective of population urbanization, taking the convenience of data acquisition into account and comparing with other studies [9-11], this paper reflects population urbanization through two aspects: urban population and urban employment. It therefore chooses the total urban population, the proportion of non-agricultural population, the proportion of urban population, the proportion of urban employed population, and the proportion of non-agricultural employment to establish an index system evaluating population urbanization, yielding a population urbanization score. From the perspective of land urbanization [9-11], considering the diversity of urban land use, the paper chooses four indicators, population density, road area per capita, the proportion of urban construction land, and the built-up area of the city, to establish a second index system, yielding a land urbanization score. The population density indicator represents the city's population-bearing capacity; road area per capita reflects traffic conditions within the city; and the construction land proportion and built-up area account for land expansion.
The level of intensive land use in the city is measured from land economic indicators and land investment [10,11]. This index system includes: the land economic density index, the city economic expansion coefficient, the city construction land expansion index, electricity consumption per square kilometer, urban public transport passenger volume, and the share of green area in the built-up area. Land economic density is expressed as the ratio of urban economic output to urban area. The city economic expansion coefficient is the ratio of the growth rate of urban construction land to the growth rate of secondary and tertiary industrial output; the construction land expansion index is the ratio of the growth rate of urban construction land to the growth rate of the urban population. The determination of index weights has a very important impact on the effectiveness of an index system; therefore, this paper uses the entropy method to determine the weight of each indicator in the index systems. Specific weights are shown in the corresponding table. The econometric model relating environmental pollution to population urbanization is equation (2); the model relating environmental pollution to land urbanization is equation (3); and the model relating environmental pollution to the level of land-intensive use is equation (4). In each, the dependent variable represents the environmental quality of city i at time t, with a constant term; the control variables are the proportion of secondary industry output in the city's GDP at time t and per capita GDP at time t; and the core regressors are population urbanization, land urbanization, and land-intensive use, respectively. a_i captures the unobserved effects of each city, and u_it is the random error term. α, β, and δ are the parameters to be estimated for each variable.
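As an illustration of the entropy weighting step described above, here is a minimal sketch; the min-max normalization, the positive orientation of all indicators, and the sample dimensions are assumptions of this sketch rather than details taken from the paper.

```python
# Minimal sketch of the entropy method for determining indicator weights.
# `X` holds one column per indicator and one row per city-year observation.
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    n, m = X.shape
    # Min-max normalize each indicator to [0, 1]; a tiny shift avoids log(0).
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12
    P = Z / Z.sum(axis=0)                    # each observation's share
    k = 1.0 / np.log(n)
    e = -k * (P * np.log(P)).sum(axis=0)     # entropy of each indicator
    d = 1.0 - e                              # degree of divergence
    return d / d.sum()                       # normalized weights

# Example: five population-urbanization indicators, 35 cities x 13 years.
rng = np.random.default_rng(0)
X = rng.random((35 * 13, 5))
w = entropy_weights(X)
Z = (X - X.min(0)) / (X.max(0) - X.min(0))
score = Z @ w                                # composite urbanization score
print(w, score.shape)
```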
Heteroscedasticity between panels Test
The variance of the random error of city i is σ_i² ≡ Var(u_it). If σ_i² ≠ σ_j² (i ≠ j), the random error term exhibits heteroscedasticity between panels. This paper uses the Wald test provided by Greene (2000) to test for heteroscedasticity. The result is shown in Table 3.
Tab. 3. Results of the heteroscedasticity test between panels.
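A minimal sketch of how such a modified Wald statistic (in the spirit of Greene 2000) can be computed from the fitted model's residuals follows; the argument names and data layout are assumptions of this sketch.

```python
# Modified Wald test for groupwise heteroscedasticity. `resid` holds the
# residuals from the fitted panel model; `entity` labels the city of each
# residual. Under H0 (equal variances across cities), W ~ chi2(N_cities).
import numpy as np
from scipy import stats

def modified_wald(resid: np.ndarray, entity: np.ndarray):
    groups = np.unique(entity)
    sigma2 = resid.var()                     # pooled residual variance
    W = 0.0
    for g in groups:
        e = resid[entity == g]
        T = e.size
        s2_i = (e ** 2).mean()               # panel-specific variance
        V_i = ((e ** 2 - s2_i) ** 2).sum() / (T * (T - 1))
        W += (s2_i - sigma2) ** 2 / V_i
    return W, stats.chi2.sf(W, df=groups.size)

# A small p-value rejects homoscedasticity, as the paper reports.
```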
Autocorrelation within panels Test
If Cov(u_it, u_is) ≠ 0 (t ≠ s, ∀i), then the random error term has autocorrelation within panels. This paper uses the Wald test of Wooldridge (2002) to test for autocorrelation. The results are shown in Table 4.
Tab. 4. Results of the autocorrelation test within panels.
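The Wooldridge-style test can likewise be scripted; the sketch below assumes a DataFrame of residuals from a first-difference regression of the model, with city, year, and e columns, which is our illustrative layout rather than the paper's.

```python
# Wooldridge (2002) test for within-panel serial correlation: under H0,
# the slope of Delta-residuals on their own one-period lag equals -0.5.
import pandas as pd
import statsmodels.api as sm

def wooldridge_ar1(d_resid: pd.DataFrame):
    d = d_resid.sort_values(["city", "year"]).copy()
    d["lag"] = d.groupby("city")["e"].shift(1)   # lag within each city
    d = d.dropna()
    ols = sm.OLS(d["e"], d["lag"]).fit(
        cov_type="cluster", cov_kwds={"groups": d["city"]}
    )
    # Wald-type statistic for H0: slope == -0.5.
    t = (ols.params.iloc[0] + 0.5) / ols.bse.iloc[0]
    return ols.params.iloc[0], t
```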
Contemporaneous Correlation between panels Test
Cov(u_it, u_jt) ≠ 0 (i ≠ j, ∀t) means that the random error term has contemporaneous correlation, which we test following Pesaran (2004) and Frees (1995, 2004). The result is shown in Table 5.
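The Pesaran CD statistic is simple enough to compute directly; the sketch below assumes a balanced T x N matrix of residuals (years in rows, cities in columns), an illustrative layout rather than the paper's.

```python
# Pesaran (2004) CD test for cross-sectional dependence. Under H0 of no
# contemporaneous correlation, CD is asymptotically standard normal.
import numpy as np
from scipy import stats

def pesaran_cd(E: np.ndarray):
    T, N = E.shape
    R = np.corrcoef(E, rowvar=False)        # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)            # count each city pair once
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * R[iu].sum()
    return cd, 2 * stats.norm.sf(abs(cd))   # two-sided p-value
```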
Empirical results
From the above tests, we know that the panel data exhibit heteroscedasticity between panels, autocorrelation within groups, and contemporaneous correlation across panels. If the residual correlation is caused by unobserved common factors, then common OLS, WLS, and fixed- and random-effects estimators may still obtain consistent parameter estimates, but the estimation is inefficient and inference may be misleading, which could lead to wrong conclusions. We therefore assume specific forms for the heteroscedasticity, contemporaneous correlation, and autocorrelation, and use a feasible generalized least squares (FGLS) method to estimate the models. The results of the feasible generalized least squares estimation are shown in Table 6.
In Model 1, the coefficient of per capita real GDP is significantly positive and its quadratic term significantly negative. That is, as the environmental Kuznets curve predicts, the relationship between sulfur dioxide emissions per square kilometer and economic development is an inverted U once the population urbanization level is controlled for: as the economy develops, air pollution first rises and then declines. The coefficient of population urbanization is negative, showing that the agglomeration effect of population reduces sulfur dioxide emissions.
In Model 2, the coefficient of per capita real GDP is negative, and its quadratic term is not significant. The relationship between sulfur dioxide emissions per square kilometer and the level of socio-economic development is linear: as the economy develops, air pollution declines. The coefficient of land urbanization is positive, meaning that urban spatial expansion significantly increases sulfur dioxide emissions per square kilometer.
In Model 3, the coefficient of per capita real GDP is significantly negative, and its quadratic term is zero. This means the relationship between air pollution and economic development is linear, with air pollution decreasing as the economy develops. The coefficient of land-intensive use is significantly positive, indicating that intensive urban land utilization increases sulfur dioxide emissions. A possible explanation is the vicious competition between local governments over environmental regulation, which causes the level of land-intensive use and urban sulfur dioxide emissions to rise simultaneously.
In this paper, weighting population urbanization and land urbanization 1:1, we obtain a variable measuring the overall urbanization level. In Model 4, the coefficient of per capita GDP is not significant, while its quadratic term is significantly positive. Controlling for the overall level of urbanization, the curve between socio-economic development and air pollution is inverted U-shaped with its axis of symmetry on the Y axis; in the first quadrant, air pollution declines as socio-economic development increases. The coefficient of overall urbanization is positive, showing that the urbanization process makes air pollution more serious. The coefficient of land-intensive use is significantly positive, showing that the level of intensive land use increases pollution; however, compared with Model 3, its coefficient decreases significantly in absolute value, indicating that, once the urbanization level is controlled for, the pollution-increasing effect of land-intensive use weakens.
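The reweighting idea behind the FGLS estimator used in Models 1-4 can be illustrated with a minimal two-step sketch; the function below handles only panel-specific heteroscedasticity, whereas the full estimator also models within-panel AR(1) errors and cross-panel correlation. Array names and layout are assumptions of this sketch, not the paper's code.

```python
# Two-step FGLS with panel-specific heteroscedasticity: pooled OLS first,
# then reweight each observation by its own panel's residual std. dev.
# `y`, `X`, `entity` are assumed to be aligned float arrays.
import numpy as np

def fgls_panel_hetero(y, X, entity):
    # Step 1: pooled OLS residuals.
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b_ols
    # Step 2: weight observations inversely to their panel's dispersion.
    w = np.empty_like(y)
    for g in np.unique(entity):
        mask = entity == g
        w[mask] = 1.0 / resid[mask].std()
    b_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return b_fgls
```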
Conclusions
The empirical results show that, as the EKC predicts, air pollution gradually declines with socio-economic development. When the urbanization level and the level of land-intensive utilization are controlled simultaneously, the relationship between socio-economic development and air pollution becomes an inverted U with its axis of symmetry on the Y axis. In Models 1-5, the coefficient of the secondary industry share is negative, indicating that growth in secondary industry value added reduces environmental pollution. This is contrary to the usual conclusion; a possible reason is that China's industrialization has entered a post-industrial stage with low pollution and high added value, illustrating that the transformation of China's industry over the past decade has been relatively successful. After controlling for land-intensive use, the coefficient of overall urbanization remains positive, meaning that the urbanization process may lead to deteriorating air quality.
Population urbanization has a major positive influence on urban air quality, but the land-intensive use and spatial expansion of cities have increased urban environmental problems. Therefore, in the process of urbanization, it is necessary to stop the disorderly expansion of urban space and change the current mode of urbanization, attracting population as much as possible while expanding space. The spatial size of cities should be restricted; in particular, China's large and medium-sized cities should actively explore their optimal spatial size, eliminate the disorderly expansion of urban land, and improve the level of intensive land use. Second, the optimal amount of population should be | 2019-04-27T13:07:57.287Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "c33952c77bad8db9dd328ff0dee3dc306619504b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/94/1/012139",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1fffc40ff4eeeef691b583d014df8e39b065bfb5",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
267573170 | pes2o/s2orc | v3-fos-license | A single-point mutation in the rubella virus E1 glycoprotein promotes rescue of recombinant vesicular stomatitis virus
ABSTRACT Rubella virus (RuV) is an enveloped plus-sense RNA virus and a member of the Rubivirus genus. RuV infection in pregnant women can lead to miscarriage or an array of severe birth defects known as congenital rubella syndrome. Novel rubiviruses were recently discovered in various mammals, highlighting the spillover potential of other rubiviruses to humans. Many features of the rubivirus infection cycle remain unexplored. To promote the study of rubivirus biology, here, we generated replication-competent recombinant VSV-RuV (rVSV-RuV) encoding the RuV transmembrane glycoproteins E2 and E1. Sequencing of rVSV-RuV showed that the RuV glycoproteins acquired a single-point mutation W448R in the E1 transmembrane domain. The E1 W448R mutation did not detectably alter the intracellular expression, processing, glycosylation, colocalization, or dimerization of the E2 and E1 glycoproteins. Nonetheless, the mutation enhanced the incorporation of RuV E2/E1 into VSV particles, which bud from the plasma membrane rather than the RuV budding site in the Golgi. Neutralization by E1 antibodies, calcium dependence, and cell tropism were comparable between WT-RuV and either rVSV-RuV or RuV containing the E1 W448R mutation. However, the E1 W448R mutation strongly shifted the threshold for the acid pH-triggered virus fusion reaction, from pH 6.2 for the WT RuV to pH 5.5 for the mutant. These results suggest that the increased resistance of the mutant RuV E1 to acidic pH promotes the ability of viral envelope proteins to generate infectious rVSV and provide insights into the regulation of RuV fusion during virus entry and exit. IMPORTANCE Rubella virus (RuV) infection in pregnant women can cause miscarriage or severe fetal birth defects. While a highly effective vaccine has been developed, RuV cases are still a significant problem in areas with inadequate vaccine coverage. In addition, related viruses have recently been discovered in mammals, such as bats and mice, leading to concerns about potential virus spillover to humans. To facilitate studies of RuV biology, here, we generated and characterized a replication-competent vesicular stomatitis virus encoding the RuV glycoproteins (rVSV-RuV). Sequence analysis of rVSV-RuV identified a single-point mutation in the transmembrane region of the E1 glycoprotein. While the overall properties of rVSV-RuV are similar to those of WT-RuV, the mutation caused a marked shift in the pH dependence of virus membrane fusion. Together, our studies of rVSV-RuV and the identified W448R mutation expand our understanding of rubivirus biology and provide new tools for its study.
cognitive deficiencies (2-4). Although vaccination has eliminated RuV from many parts of the world, including the Americas, worldwide, it is estimated that about 100,000 babies are born with CRS each year (5, 6). RuV is a member of the genus Rubivirus in the Matonaviridae family, with humans as the only known hosts in nature (1-3). Two new rubiviruses were recently discovered by metagenomic analyses: Ruhugu virus (RuhV), which was discovered in apparently healthy bats (7), and Rustrela virus (RusV), which was found to cause lethal encephalitis in various wild mammals and domestic cats and was also found in apparently healthy mice (7-9). These results suggested both that RuV may have had a zoonotic origin and also that rubiviruses, such as RuhV and RusV, may have the potential to spill over to humans from other animal hosts.
RuV particles are rather pleomorphic and can be cylindrical or irregular in shape (2, 3). The particles contain an inner nucleocapsid core composed of the capsid protein (Cp) and the ~10 kb genomic RNA. The core is enveloped by a lipid bilayer studded with heterodimers of the E2/E1 transmembrane (TM) glycoproteins (3, 10-12). The RuV structural proteins are translated from a subgenomic RNA as a polyprotein that is cleaved cotranslationally in the endoplasmic reticulum (ER) by signal peptidase, generating Cp, E2, and E1 (2, 3, 13) (Fig. 1A). The signal sequence (SS) for E2 is retained at the Cp C-terminus and confers Cp membrane binding (14, 15), and the SS for E1 is retained at the C terminus of E2 (2, 3, 16). E2/E1 dimerizes in the ER and is cotransported to the Golgi where RuV buds (17, 18). A Golgi targeting signal was mapped to the E2 TM (19).
RuV infects cells by endocytic uptake and low pH-triggered fusion in early endosomes (3, 20). E1 is the RuV membrane fusion protein and the principal target of neutralizing antibodies. The postfusion structure of E1 (21) shows that it is a class-II membrane fusion protein with two fusion loops that interact with calcium; calcium was shown to be essential for RuV fusion and infection (20, 22). RuV fusion and infectivity are inactivated by low pH, and E1 conversion to the postfusion form is irreversible (22). Following the virus budding into the Golgi, it is unclear what protects RuV from premature low pH-triggered fusion during virus exit. While morphological changes suggest that the virus undergoes maturation in the secretory pathway, the molecular mechanism of such maturation and its possible role in pH protection during virus exit are unknown (3, 18).
Vesicular stomatitis virus (VSV) is a nonsegmented negative-sense RNA virus in the Rhabdoviridae family (23). The VSV reverse genetic system has been effectively adapted to express and incorporate foreign glycoproteins into VSV particles (24-26). Strategies based on either replication-competent VSV (rVSV) or single-cycle pseudotyped-VSV (psVSV) allow the generation of viral particles in which the VSV G membrane protein has been replaced with heterologous glycoproteins (26). VSV particles pseudotyped with RuV glycoproteins were successfully used to test RuV cell tropism (27). However, the infectivity of psVSV pseudotyped with RuV E2/E1 was only a log higher than the psVSV without any viral glycoprotein (27). Similarly, lentiviruses pseudotyped with RuV glycoproteins produce very low levels of infectious particles (28). One challenge in these approaches may be that VSV and lentiviruses bud from the plasma membrane (PM) (29, 30), while RuV buds into the Golgi. Thus, this budding site would require that the RuV envelope proteins transit through the low pH exocytic environment (31) prior to assembling onto VSV particles at the PM.
In this study, we set out to generate rVSV in which the G protein sequence was replaced with the sequence encoding the RuV E2/E1 glycoproteins. Sequencing of the rescued virus identified a W448R mutation in the E1 TM domain, a region reported to act as an ER retention signal (32). When provided in trans, E2/E1 carrying this mutation promoted psVSV infectivity and E2/E1 incorporation without increasing the levels of E2/E1 at the PM. Studies with rVSV and RuV showed that the mutation did not significantly affect virus cell tropism, Ca2+ dependence, neutralization by E1-specific antibodies, or RuV growth. However, in both rVSV and RuV, the mutation strongly shifted the fusion threshold to a more acidic pH. Taken together, our results suggest that this alteration in pH dependence promotes the ability of the RuV envelope proteins to generate infectious rVSV at the PM budding site.
A single-point mutation in RuV E1 rescues rVSV-RuV
To generate VSV-RuV recombinants (rVSV-RuV), we engineered the VSV antigenome plasmid to encode the RuV E2/E1 envelope proteins in place of the native VSV glycoprotein G. The constructs were designed to include a C-terminal region of the RuV Cp containing the E2 SS, followed by E2/E1. Two antigenome plasmids were produced: pVSV-RuV-237-E2E1 or pVSV-RuV-245-E2E1, with the Cp protein truncated at residue 237 or 245, respectively (Fig. 1A and B). To generate rVSV-RuV (26, 33, 34), we then cotransfected 293FT cells with either of the antigenome plasmids plus VSV helper plasmids (Fig. 1C). Supernatants from both sets of 293FT cells were repeatedly transferred onto naïve Vero cells. Approximately 3-4 weeks after transfection, Vero cells that received the supernatants from 293FT cells transfected with pVSV-RuV-245-E2E1 but not pVSV-RuV-237-E2E1 showed widespread expression of the enhanced green fluorescent protein (eGFP) reporter, indicating successful propagation of rVSV-RuV (Fig. 1C). We harvested the supernatants as the P0 stock of rVSV-RuV, sequenced the glycoprotein region of the virus population following RT-PCR, and identified a single mutation, W448R in the E1 TM domain, a region previously shown to contain an ER retention sequence (3, 32). The P0 harvest was used for preparing a plaque-purified stock of rVSV-RuV, which was sequence-verified and used for further experiments (Fig. 1C).
To test if the E1 W448R mutation was directly and solely responsible for the rescue of rVSV-RuV, we engineered this mutation de novo into the pVSV-RuV-245-E2E1 antigenome plasmid. We then transfected 293FT cells with the original pVSV-RuV-245-E2E1 or the pVSV-RuV-245-E2E1W448R mutant antigenome plasmid plus the helper plasmids, as described above. Culture supernatants were collected 5 days post-transfection and transferred onto Vero cells. At the indicated time points, Vero cell infection was monitored by imaging the eGFP signal (Fig. 2A), and culture supernatants were collected for virus titration. Both the RuV-E2E1 and RuV-E2E1W448R mutant samples produced a small number of eGFP-positive Vero cells at 24 h post-infection. While no spread of virus infection was observed in Vero cells receiving the RuV-E2E1 sample (Fig. 2A top panels), increasing infection by the RuV-E2E1W448R mutant sample was observed starting at 48 h post-infection (Fig. 2A bottom panels). The virus titer from the E1 W448R mutant sample also gradually increased, reaching 10^6 focus-forming units (FFU)/mL at 96 h after Vero cell infection (Fig. 2B). In contrast, no infectious virus was detected from the WT sample at any of the tested time points (Fig. 2B). Thus, the E1 W448R mutation alone was key to the successful rescue of rVSV-RuV.
Effect of E1 W448R on envelope protein expression, intracellular transport, and localization
We used transient expression of the WT E2E1 and mutant E2E1W448R envelope proteins to evaluate their properties in the absence of virus replication, other viral proteins, and virus budding. Expression plasmids were transfected into Vero cells, and cell lysates were harvested at 48 h post-transfection. Western blot (WB) analysis revealed that E1 W448R does not affect steady-state expression of the E2/E1 glycoproteins (Fig. 3A). Treatment of the lysates with PNGase F and Endoglycosidase H (Endo H) (35) showed that the mutation does not detectably affect glycosylation or transport to the Golgi as defined by acquisition of Endo H resistance (Fig. 3A). Coimmunoprecipitation analyses revealed comparable E2/E1 heterodimer formation (Fig. 3B).
To analyze the subcellular localization of E2/E1 proteins, Vero cells were transfected with the WT or mutant expression constructs, fixed and permeabilized at 48 h post-transfection, and analyzed by immunofluorescence using mAbs against E2 and E1. Confocal microscopy showed that both samples had comparable colocalization of the E2 and E1 glycoproteins in the perinuclear region (Fig. 3C), consistent with the previously observed Golgi localization of the RuV envelope proteins (36). Thus, expression, glycosylation, dimerization, and intracellular localization of RuV E2/E1 were not detectably affected by the E1 W448R mutation.
Since RuV buds into the Golgi compartment (2, 3, 18) and VSV buds from the cell surface (30), we hypothesized that the E1 W448R mutation might promote E2/E1 incorporation into budding VSV particles by increasing their steady-state expression on the cell surface. To test this directly, Vero cells were transfected with WT or mutant expression plasmids, harvested after 24 h, stained with mAb against E2 or E1 under permeabilized (total) or nonpermeabilized (cell surface) conditions, and analyzed by flow cytometry. The results showed comparable total levels of E2 and E1 between WT and mutant-expressing cells (Fig. 3D), in agreement with the WB analysis (Fig. 3A). However, the results showed that the E1 W448R mutation actually caused a decrease in the cell surface levels of E2 and E1 (Fig. 3D).
E1 W448R increases E2/E1 incorporation and psVSV infectivity
Alternatively, the E1 W448R mutation might have increased E2/E1 incorporation and/or virus infectivity as compared to the WT glycoprotein. To investigate this, 293FT cells were transfected with expression plasmids for the WT or mutant E2/E1, or with an empty expression plasmid. At 48 h post-transfection, the cells were infected with single cycle psVSV-G, which did not encode G but was pseudotyped with G by production in G-expressing cells (37). Cells were then washed extensively to remove residual inoculum and incubated for 48 h. The culture media were harvested, and equal volumes of each sample were pelleted and analyzed by WB using RuV pAb and VSV-M mAb. Production of psVSV particles, as detected by blotting for M protein, was comparable between the three samples (Fig. 4A), consistent with the known G protein-independent budding of rhabdoviruses (38, 39). We set the ratio of E1/M as 1 for WT and compared with that of the mutant. In three independent experiments, the mutant E1/M ratio was 2.6, 1.7, and 1.3 with a mean of 1.9 (Fig. 4A). We reproducibly observed a slower-migrating E2/E1 band that was resistant to SDS-denaturation and reduction, and this band was also significantly more abundant for the mutant (Fig. 4A). Thus, the E1 W448R mutation increased incorporation of the RuV glycoproteins onto psVSV particles.
Parallel aliquots of the culture media were titered on Vero cells. Media from cells transfected with the empty vector contained some infectious virus, indicating the presence of residual virus from the original inoculum (Fig. 4B). To remove this background, we incubated the culture media with I1, a neutralizing mAb against the VSV G protein (40), or with a control mAb or growth media. I1 was found to efficiently neutralize psVSV-G, specifically reducing the titer from 10^8 IC/mL to below the limit of detection (Fig. 4B). The media sample from empty vector-transfected cells was also completely neutralized by I1 (Fig. 4B). Under these conditions, the infectivity of psVSV pseudotyped with E2/E1W448R was significantly higher than that of psVSV pseudotyped with WT E2/E1 (Fig. 4B). Together, our results suggest that the E1 W448R mutation improves RuV E2/E1 incorporation into VSV particles and their infectivity.
RuV growth is not affected by E1 W448R
The E1 sequences of the related rubiviruses RuhV and RusV are ~56% and 51% similar to that of RuV, respectively (7). We compared the RuV E1 TM region (21, 41) with those of RuhV and RusV. Alignment of the E1 TM regions revealed that E1 W448 is conserved across the three species and is preceded by conserved H and W residues (Fig. 5A). Using the NCBI Virus website, we expanded our alignment by considering >150 rubivirus sequences (~110 RuV, 3 RuhV, and ~40 RusV). This more extensive comparison revealed that while E1 W448 appears to be completely conserved, the preceding H and W are only loosely conserved across the rubivirus sequence space, suggesting a possible functional importance of E1 W448 in rubivirus biology.
To test the effect of the E1 W448R mutation on authentic RuV, this substitution was introduced into the pBRM33 infectious clone of the RuV M33 strain (41). Viral RNAs were produced by in vitro transcription of the RuV-WT clone and two independent mutant clones and electroporated into BHK-21/WI-2 cells (22). Culture supernatants were harvested at 48 and 72 h as the P0-48 and P0-72 stocks. Sequence analysis of the P0-48 stocks of both mutant clones confirmed the presence of the E1 W448R substitution and the absence of additional mutations in the RuV structural protein ORF. Multicycle growth curves were then performed by infecting Vero cells with P0-72 virus stock at a low MOI (0.01 FFU/cell). The results showed that both clones of RuV-E1W448R have similar growth kinetics as those of RuV-WT (Fig. 5B). Sequence analysis of the 96 h virus samples from these growth curves confirmed the presence of the E1 W448R mutation and the absence of additional mutations in the structural proteins. Thus, despite the high sequence conservation at this position, the E1 W448R mutation does not affect RuV growth in cell culture and is stable across several virus passages.
Biological properties of E1W448R in rVSV and RuV
We then tested the effects of the E1W448R mutation in the context of RuV and the VSV-RuV recombinant. The antibody response to RuV E1 has been reported to play an essential role in long-term immunity to RuV (42, 43). An E1 region from residues 223-239 is the binding site for E1-20, a potent neutralizing Ab (44). The generation of antibodies to this site strongly correlates with vaccine protection and disease convalescence in humans (43). Structural studies indicate that Abs that bind this site would prevent E1 trimerization and membrane fusion (21). We compared neutralization of RuV-WT, RuV-E1W448R, and rVSV-RuV-E2E1W448R by either E1-20 (MilliporeSigma), a similar E1 mAb from Meridian Bioscience, or a control mAb (44, 45). All three viruses were neutralized by the two mAbs to RuV E1 and not by the control mAb (Fig. 6A). The IC50 values for neutralization of RuV-WT or rVSV-RuV-E2E1W448R were between ~1 and 5 µg/mL for both mAbs. The IC50 for neutralization of RuV-E1W448R was higher for both mAbs, ~13 µg/mL. These results indicate that this important E1 epitope is recognized in all three viruses but may be somewhat less accessible in RuV-E1W448R.
RuV fusion is strictly dependent on Ca2+, which binds to conserved Asn and Asp residues in the E1 fusion loops (20-22). In the absence of Ca2+, E1 does not insert into the target membrane, and fusion and infection are blocked. We tested if Ca2+ was also required for fusion and infection of the rescued rVSV-RuV-E2E1W448R (Fig. 6B). The results showed that infection by both RuV-WT and rVSV-RuV-E2E1W448R was strongly and comparably dependent on Ca2+, with maximal infection occurring at a concentration of ~2 mM, in agreement with prior results.
Although humans are the only known host in nature, RuV has a wider tropism in cell culture (2, 3, 27). We compared the susceptibility of monkey, hamster, and human cell lines to RuV-WT, RuV-E1W448R, and rVSV-RuV-E2E1W448R infection. Infection of Vero, BHK-21/C-13, and U-2 OS cells was relatively efficient and broadly comparable among the three viruses (Fig. 6C). Prior reports indicated that HEK 293T cells were relatively resistant to RuV infection (46). While all three viruses showed reduced infection on HEK 293T cells, infection by rVSV-RuV-E2E1W448R was more efficient than for either RuV-WT or RuV-E1W448R (Fig. 6C).
RuV fusion is low pH-dependent, with maximal fusion observed at ~pH 6.2 (22). We used a fusion infection assay to compare the pH dependence of rVSV-RuV-E2E1W448R and RuV-E1W448R with that of RuV-WT. Viruses were prebound to Vero cells on ice, treated with buffers of varying pH for 3 min at 37°C to trigger fusion with the PM, and the resultant virus infection was quantitated (Fig. 6D). Maximal fusion of RuV-WT was observed at pH ~6.2 and rapidly declined due to virus inactivation at low pH, as previously observed (22). In contrast, both rVSV-RuV-E2E1W448R and RuV-E1W448R showed maximal fusion at ~pH 5.5 (Fig. 6D). Fusion of either virus carrying the E1 W448R mutation showed a relatively broad pH dependence and little inactivation until pH 5.0 (Fig. 6D). Thus, the E1 W448R mutation conferred a difference of ~0.7 pH units in the fusion maximum when incorporated into either rVSV-RuV-E2E1 or RuV.
Low pH-induced conformational change of WT and mutant E1
Low pH triggers the rearrangement of RuV E1 to the postfusion homotrimer, resulting in its increased resistance to trypsin digestion (22, 47). To test the effect of W448R on this E1 conformational change, we incubated WT or mutant RuV at the indicated pH and then digested with 125 µg trypsin/mL (Fig. 7A and B). Both the WT and mutant E1 proteins showed an increase in trypsin resistance after low pH treatment. However, control virus samples that were untreated or incubated under pre-neutralized conditions showed that WT E1 was already significantly more (40%) trypsin-resistant than mutant E1. This difference was observed even after digestion with 200 µg trypsin/mL (Fig. 7A). These results suggest that WT E1 is more sensitive to low pH exposure during virus transit through the secretory pathway and thus contains more trypsin-resistant E1 than the mutant RuV.
DISCUSSION
Here, we describe the first successful rescue of a replication-competent rVSV encoding the RuV E2/E1 glycoproteins. This recombinant virus contained a single amino acid (AA) substitution, W448R, in the E1 TM domain. The properties of rVSV-RuV-E2E1W448R were generally comparable to those of WT RuV, showing similar localization and processing of E2/E1, virus neutralization by E1 mAbs, and virus Ca2+ dependence and cell tropism. However, the E1 W448R mutation markedly shifted the pH dependence of membrane fusion in either the rVSV or RuV context, from the WT RuV pH maximum of pH 6.2 to the mutant pH maximum of pH 5.5. RuV-E1W448R showed growth kinetics similar to those of WT RuV, and the mutation was stable over several passages in cell culture. We note, however, that E1 W448 is highly conserved across RuV and other rubiviruses. This stability in nature thus suggests evolutionary constraints that may reflect E1's roles in vivo.
Recombinant VSV surrogates could be a strategy to characterize novel rubiviruses, and sequence alignments show considerable conservation across the rubivirus structural proteins (3, 7). However, our attempts to rescue rVSV-RuhV and rVSV-RusV using similar structural protein constructs were unsuccessful, even when engineered with an E1 W448R mutation (data not shown). Thus, while our work provides proof of concept to inform the study of other rubiviruses using rVSV, further work is needed to determine how broadly applicable this strategy could be.
Previous RuV studies showed that the E1 TM domain and cytoplasmic tail are required for virus or VLP transit out of the Golgi (41, 50, 51). The W448R substitution acquired by rVSV-RuV-E2E1 is within the E1 TM domain, which also harbors an ER retention signal (2, 3, 32). However, contrary to our initial hypothesis, the W448R mutation actually decreased cell surface levels of E2/E1. Results with psVSV indicate that despite lower levels at the PM, the E1 W448R mutation boosts E2/E1 incorporation and increases the infectivity of psVSV particles.
How does the E1 W448R mutation promote RuV glycoprotein incorporation and infectivity of rVSV-RuV-E2E1? Given the limits in sensitivity of the system, we cannot conclude whether the increase in production of infectious rVSV-RuV-E2E1W448R is due solely to the observed increase in E2/E1 incorporation or whether there are additional increases due to the activity of the mutant fusion protein. Our fusion infection assay results showed that both RuV E1W448R and rVSV-RuV-E2E1W448R have a broader pH dependence for fusion, which could increase infectivity by promoting virus fusion at other points in the endocytic pathway rather than primarily at the early endosome as observed for WT RuV (20). The E1 postfusion structure does not include the TM domain and residue W448 (21). However, the fold-back mechanism of membrane fusion suggests that the TM domain would be adjacent to the fusion loop in the fused membrane (21, 52). Recent studies of the SARS-CoV-2 S protein reveal that the TM domain packs against the fusion loop in the postfusion structure, an interaction essential for membrane fusion (53). If such interactions occur in the RuV E1 protein during membrane fusion, they could be enhanced by substitutions in the E1 TM domain, although this seems less likely given the substitution of tryptophan by the positively charged arginine residue. Instead, we favor a model in which the W448R substitution in the E1 TM domain stabilizes the E2-E1 dimer, which then shifts the pH dependence of the virus membrane fusion reaction. Since W448R is located within the E1 ER retention signal (32), it is also possible that the mutation decreases ER retention and increases transport rates. We speculate that such mechanisms could protect the RuV envelope proteins during transit through the low pH exocytic environment, promoting their cell surface delivery in a fusion-active state. This is in keeping with our finding that the mutant RuV has a lower proportion of E1 in the trypsin-resistant (postfusion) form.
Coronaviruses generally bud into the ERGIC, although egress through the lysosomal pathway has also been reported (54). The coronavirus S protein cytoplasmic tail contains an ER retention motif that slows S protein trafficking through the ERGIC (55, 56). Rescue of recombinant VSV bearing the S proteins from SARS-CoV-1, SARS-CoV-2, or MERS is facilitated by truncation of the ER retention signal (57-59). Given the wide use of the rVSV system, it would be interesting to determine if the rescue of such VSV-CoV recombinants or those of other viruses that bud intracellularly have features in common with those we describe for rVSV-RuV.
Transfection of 293FT cells was performed using polyethylenimine (PEI MAX, Polysciences, Inc.). PEI was diluted to 1 mg/mL in Milli-Q water, adjusted to pH 7.25 with NaOH, and filter-sterilized using a 0.22 µm syringe filter. A 4:1 ratio of PEI/DNA was used for each transfection reaction. Cells were incubated for 6 h at 37°C in Opti-MEM (Gibco) media containing the PEI/DNA mixture and then maintained in high glucose DMEM supplemented with 5% FBS. Vero cells were transfected where indicated using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions.
Virus titration
Virus samples were titered on Vero cells using infectious center assays (ICA) or focus-forming assays (FFAs). Vero cells were seeded at 1.2 × 10^4 cells/well in 96-well plates, cultured overnight, and infected with 100 µL of serial dilutions of virus samples for 4 h. The virus inocula were aspirated and replaced with 100 µL of DMEM plus 5% FBS and 20 mM NH4Cl or 1% carboxymethylcellulose in modified Eagle's Medium supplemented with 2% heat-inactivated FBS and 10 mM Hepes pH 7.4. For ICA, 48 h post-infection, cells were fixed with 4% paraformaldehyde (PFA, Electron Microscopy Science) and stained with RuV pAb and fluorescently labeled secondary antibody, and infection was quantitated by fluorescence microscopy. For FFA, cells were fixed by adding 100 µL of prewarmed 1% PFA in PBS to the overlay and incubating for 1 h. Cells were washed with PBS, permeabilized with 0.1% saponin in PBS containing 0.1% bovine serum albumin (BSA), and incubated with RuV pAb followed by horseradish peroxidase-conjugated rabbit anti-goat IgG (Seracare, Milford, MA). Foci were developed using TrueBlue Peroxidase substrate (Seracare) and quantified using an ImmunoSpot S6 Macroanalyzer with Biospot 7.0.9.10 software (Cellular Technologies, Shaker Heights, OH). The titer for psVSV-RuV was determined by ICA, with initial infection for 2 h and scoring of eGFP-positive cells 24 h post-infection.
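The titration arithmetic implied above is simple enough to script; the helper below (function name and example numbers are ours, for illustration) converts a focus count at a given dilution into FFU/mL.

```python
# Titer (FFU/mL) = focus count / (dilution factor x inoculum volume in mL).
def titer_ffu_per_ml(foci: int, dilution: float, volume_ml: float = 0.1) -> float:
    """E.g., 30 foci from 100 uL of a 1e-5 dilution -> 3e7 FFU/mL."""
    return foci / (dilution * volume_ml)

print(titer_ffu_per_ml(30, 1e-5))  # 30,000,000.0
```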
VSV-RuV rescue
The standard plasmid-based VSV reverse-genetic system was used to rescue rVSV-RuV-E2E1 (33, 34). 293FT cells were cotransfected with the pVSV-RuV-237-E2E1 or pVSV-RuV-245-E2E1 antigenome plasmids and helper plasmids expressing T7 polymerase and VSV N, P, M, G, and L. Starting at day 2 post-transfection, supernatants from the transfected cells were added to Vero cells every 24 h, and the eGFP signal was monitored by fluorescence microscopy. At 3 weeks post-transfection, the Vero cells were split, and the cultures transferred to 30°C.
Approximately 1 month after transfection, the Vero cells showed a widespread eGFP signal. The supernatants from two parallel plates were harvested as the P0 stock of rVSV-RuV and titered by ICA on Vero cells. RNA was extracted from the virus-containing culture media, reverse transcribed, and PCR amplified, and the sequence of the RuV structural protein region was determined. The P0 virus stock was then plaque-purified and used to generate a P1 stock termed rVSV-RuV-E2E1W448R, which was verified by sequencing and used for subsequent experiments.
For validation of the E1 W448R mutation as the driver of rescue, 293FT cells were cotransfected with pVSV-RuV-245-E2E1 or pVSV-RuV-245-E2E1W448R antigenome plasmids and helper plasmids as above. Supernatants were harvested after 5 days and used to infect naïve Vero cells. Infected Vero cells were incubated at 30°C, and the eGFP signal was monitored daily by fluorescence microscopy (Zeiss Axiovert 200M). In parallel, supernatant samples were collected at the indicated time points and titered by FFA.
RuV production and growth assays
Infectious RuV RNA was generated by in vitro transcription (22) of the WT or mutant pBRM33 and electroporated into BHK-21/WI-2 cells. The P0 RuV stocks were harvested at 48 h post-electroporation, and the virus RNA was reverse-transcribed and the structural ORF sequenced by Sanger sequencing. For growth assays, Vero cells were seeded at 1 × 10^5 cells/well in 12-well plates, cultured overnight, and infected with P0 stock at MOI 0.01 FFU/cell. Culture media were harvested at the indicated time points and titered by FFA. The sequence of the mutant virus harvested at 96 h post-infection was analyzed as described above.
Glycosylation and coimmunoprecipitation assays
Vero cells were seeded at 3.2 × 10^5 cells/well in six-well plates and incubated overnight, and duplicate wells were transfected with pTWIST-RuV-245-E2E1 or pTWIST-RuV-245-E2E1W448R. Cells were lysed 48 h post-transfection in 0.15 mL of ice-cold lysis buffer [50 mM TRIS pH 7.4, 100 mM NaCl, 1 mM EDTA, 1% Triton X-100, 1 µg pepstatin/mL, 2 µg aprotinin/mL, and 1 mM phenylmethylsulfonyl fluoride (PMSF)], and duplicate samples were combined and clarified by centrifugation. For glycosylation analysis, aliquots of lysates were denatured at 100°C for 10 min in the presence of 40 mM dithiothreitol (DTT) and 0.5% SDS and digested with PNGase F or Endoglycosidase H according to the manufacturer's protocol (New England Biolabs). For coimmunoprecipitation, 200 µL of lysates were precleared with Protein-A agarose beads (Thermo Fisher), then incubated with E1-20 mAb for 60 min on ice. Protein-A agarose beads were added, and samples were rotated for an additional 90 min. Beads were washed three times with wash buffer (50 mM TRIS pH 7.4, 100 mM NaCl, 1 mM EDTA, 0.1% Triton X-100, and 2 µg/mL aprotinin), resuspended in elution buffer (10 mM TRIS pH 6.8, 1 mM EDTA, 1 mM PMSF, and 2 µg aprotinin/mL), and stored at −20°C. Before gel analysis, buffer containing SDS and DTT was added to 0.5% and 40 mM, respectively, and samples were boiled at 100°C for 10 min. Samples were analyzed by SDS-PAGE and WB using RuV pAb and E2 mAb.
Flow cytometry
Vero cells were seeded at 3 × 10^5 cells/well in six-well plates, cultured overnight at 37°C, transfected with the pTWIST-RuV-245-E2E1 or pTWIST-RuV-245-E2E1W448R expression constructs, and incubated for 24 h at 30°C. Cells were harvested using Accutase (Sigma) and washed two times with staining buffer (15 mM HEPES pH 7.0 and 2% FBS in PBS). For intracellular staining, the cells were fixed with 2% PFA in PBS for 10 min at room temperature and permeabilized with 0.01% Triton in staining buffer for 10 min at room temperature. For cell surface staining, the cells were blocked in staining buffer, washed once, and stained with RuV E1-20 (1:200, MilliporeSigma) or RuV E2 (1:25, Thermo Fisher) for 40 min at 4°C. Then the cells were washed twice and stained with the appropriate Alexa Fluor-conjugated secondary antibody at 1:500 for 30 min at 4°C. The cells were washed twice, postfixed with 2% PFA for 10 min at room temperature, and then washed twice with PBS. 1 × 10^4 cells were analyzed for each sample using a BD LSR-II analyzer (BD Biosciences, San Jose, CA, USA) in the Einstein flow cytometry core. Mock-infected cells stained as above were used to delineate the gates for flow analysis (60). The flow data were processed using FlowJo 10.2 software.
Incorporation experiments
293FT cells were seeded in 10 cm culture dishes precoated with poly-D-lysine and cultured overnight, and duplicate plates were transfected with pTWIST-RuV-245-E2E1, pTWIST-RuV-245-E2E1W448R, or empty expression vector. At 48 h post-transfection, cells were gently washed with DMEM and inoculated with single-cycle psVSV-G [VSV with a G protein deletion, encoding the eGFP reporter and pseudotyped with G by production in G-expressing cells (37); a gift from Drs. Megan Slough and Kartik Chandran]. After a 2 h incubation, cells were washed eight times with DMEM and incubated at 30°C for 48 h, and the culture media were harvested. Aliquots were stored at −80°C for titration. Equal volumes of freshly harvested supernatants for each construct were pelleted at 20,000 rpm in an SW32Ti rotor for 2 h at 4°C. The virus pellets were resuspended in DMEM and pelleted through a 10% sucrose cushion [(wt/vol) in 50 mM Tris-HCl, 100 mM NaCl] in an SW41 rotor at 20,000 rpm for 2 h at 4°C. The virus pellets were resuspended in 100 µL of buffer containing 50 mM TRIS pH 7.4, 100 mM NaCl, 1 mM PMSF, 1 µg/mL pepstatin, and 2 µg/mL aprotinin and stored at −80°C. Just before gel analysis, samples were adjusted to 0.5% SDS and 40 mM DTT and boiled for 10 min at 100°C. Samples were analyzed by SDS-gel electrophoresis and WB using RuV pAb and a mAb to the VSV-M protein.
For titration, samples were incubated with I1, control antibody, or media for 1 h and then titered by ICA.
Assays of E1 W448R properties in VSV and RuV
To test virus calcium dependence, ~150 FFU of RuV-WT or rVSV-RuV-E2E1W448R was bound on ice to prechilled Vero cells for 1.5 h in calcium-free binding medium (calcium-free MEM without NaHCO₃ plus 0.2% BSA and 10 mM Hepes pH 7.0). Unbound viruses were removed, and cells were incubated at 37°C for 20 min in calcium-free binding medium containing the indicated concentrations of CaCl₂. The cells were then incubated for 48 h at 30°C in growth medium (with calcium) containing 20 mM NH₄Cl to prevent secondary infection and scored by FFA.
To determine the pH dependence of virus fusion, 150 FFU of RuV-WT, RuV-E1W448R, or rVSV-RuV-E2E1W448R were bound on ice to prechilled Vero cells for 1.5 h in binding medium (RPMI 1640 without NaHCO₃, plus 0.2% BSA and 10 mM Hepes pH 7.0). Unbound viruses were removed, and cells were incubated at 37°C for 3 min in fusion medium (binding medium plus 20 mM MES) adjusted to the indicated pH. After the pH pulse, the fusion medium was replaced with growth medium containing 20 mM NH₄Cl. Cells were incubated at 30°C for 48 h and scored by FFA.
Antibody neutralization assay
Approximately 150 FFU of RuV-WT, RuV-E1W448R, or rVSV-RuV-E2E1W448R were incubated for 1 h at 37°C with 3-fold serial dilutions of the indicated mAbs (starting at 300 nM) in MEM plus 0.2% BSA and 10 mM HEPES pH 7.0. Vero cells in 96-well plates were infected with the antibody:virus complexes for 3 h, the media were replaced with 1% carboxymethylcellulose overlay, and the samples were incubated at 30°C for 48 h (or 30 h for rVSV-RuV only) and scored by FFA. The number of foci in wells containing mAb was normalized to wells infected with virus alone. Nonlinear regression analysis was performed, and IC₅₀ values were calculated using Prism 10 (GraphPad Software).
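For readers without Prism, an IC₅₀ estimate of this kind can be reproduced with a standard four-parameter logistic fit. A minimal sketch in Python/scipy, with made-up example data rather than the paper's measurements:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic4(conc, bottom, top, ic50, hill):
        """Four-parameter logistic dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    # Hypothetical data: 3-fold dilutions from 300 nM, normalized focus fractions
    conc = 300.0 / 3.0 ** np.arange(8)            # nM
    frac = np.array([0.04, 0.06, 0.12, 0.30, 0.55, 0.80, 0.93, 0.98])

    popt, _ = curve_fit(logistic4, conc, frac, p0=[0.0, 1.0, 10.0, 1.0])
    print(f"IC50 ~ {popt[2]:.2f} nM")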
Assay of generation of trypsin-resistant E1
P1 stocks of RuV-WT and RuV-E1W448R were prepared by infecting Vero cells with P0 stocks at an MOI of 0.1 FFU/cell. At 96 h post-infection, the supernatants were harvested and pelleted through a 10% sucrose cushion [(wt/vol) in 50 mM TRIS pH 7.4 and 100 mM NaCl] by centrifugation in an SW32Ti rotor (Beckman Coulter) at 28,000 rpm for 2 h. The virus pellets were resuspended overnight on ice in 10 mM MES pH 7.0, 10 mM HEPES pH 7.0, and 100 mM NaCl, then aliquoted and stored at −80°C.
pH treatment and trypsin digestion of WT and mutant RuV were carried out as described previously (22, 47). Briefly, for each treatment, 30 µL of the resuspended virus was treated at the indicated pH for 10 min at 37°C by adding pre-calibrated volumes of 0.5 N acetic acid, followed by neutralization with 1 M HEPES pH 8.0. Pre-neutralized samples were treated with a mixture of acetic acid and HEPES equivalent to the pH 5.0 treatment. Samples were then solubilized for 10 min on ice with a final concentration of 0.9% Triton X-100 and digested as indicated with trypsin (Sigma-Aldrich catalog no. T1426) prepared in PBS containing 0.9 mM CaCl₂ and 0.5 mM MgCl₂, at a final concentration of 125 or 200 µg/mL, for 30 min at 37°C. Digestion was quenched by adding PMSF to a final concentration of 1 mM. SDS and DTT were then added to 0.5% and 40 mM, respectively, and samples were heated for 5 min at 95°C and analyzed by SDS-PAGE and WB using RuV pAb.
Statistics
All statistical analyses were carried out using GraphPad Prism 10. The specific analyses are listed in the figure legends.
FIG 1
FIG 1 Topology and sequences of RuV structural proteins incorporated into rVSV-RuV. (A) Topological arrangement of the RuV structural proteins, showing Cp (brown), E2 (blue), and E1 (red) on the ER or viral membrane, with the E2 SS (pink), E2 TM domain (yellow), E1 SS (orange), and E1 TM (green) in the indicated colors. An arginine-rich loop that connects the E2 TM and E1 SS is shown as a dotted blue line, while the E1 cytoplasmic tail is depicted as a dotted red line. Redrawn from reference (3). (B) Schematics of the RuV E2E1 expression constructs. Features are shown as in A, with polyprotein amino acid numbering at the bottom of the diagrams. The Cp C-terminal region (solid brown) harboring the E2 SS (pink) marks the start site of the E2E1 expression open reading frame (ORF), while the excluded N-terminal part of Cp is shown in striped brown. Two variants of the expression cassette, termed RuV-237-E2E1 (AA 237-1063 plus an added M, shown in italic) and RuV-245-E2E1 (AA 245-1063), are shown. (C) Rescue strategy for rVSV-RuV-E2E1. 293FT cells were transfected with pVSV antigenome plasmids encoding RuV-237-E2E1 or RuV-245-E2E1 plus the indicated helper plasmids. Supernatants from the transfected cells were repeatedly transferred onto Vero cells, which were cultured until the emergence of rVSV-RuV-E2E1.
FIG 2
FIG 2 E1 W448R mutation promotes rVSV-RuV rescue. (A and B) Growth of rVSV-RuV-E2E1 and rVSV-RuV-E2E1W448R. 293FT cells were cotransfected with helper plasmids plus the pVSV antigenome plasmid encoding either the WT E2E1 or the mutant E2E1W448R and incubated for 5 days. Supernatants were then used to infect Vero cells. (A) Representative images of eGFP reporter expression in Vero cells at the indicated times post-infection (scale bar, 200 µm). (B) The culture supernatants from infected Vero cells were collected at the indicated timepoints and titered on Vero cells by focus-forming assay. Data shown are the mean ± SD of four independent experiments, with open circles showing results from each experiment. The limit of virus detection is shown as a dotted line.
FIG 3
FIG 3 Effects of E1 W448R on the properties of expressed RuV E2/E1. (A-D) Vero cells were transfected with constructs expressing either the RuV WT E2E1 or the E2E1W448R mutant and analyzed as follows: (A and B) Cell lysates were prepared at 48 h post-transfection. (A) Cells were tested for E2/E1 glycosylation. Lysates were treated with no enzyme (NE), PNGaseF (F), or EndoH (H) and analyzed by WB with RuV pAb (left panel) or E2 mAb (right panel). PNGaseF-sensitive and EndoH-resistant forms of E1 and E2 are marked by blue or magenta asterisks, respectively. (B) E2/E1 heterodimer formation was analyzed by coimmunoprecipitation. Samples were precipitated with E1 mAb and analyzed by WB using RuV pAb to detect E1 and E2. (C) Colocalization of E2 and E1 was analyzed at 48 h post-transfection by staining with E2 and E1 mAbs and corresponding isotype-specific secondary antibodies. Nuclei were stained with Hoechst-33342. Images were acquired using confocal microscopy and are representative examples of two independent experiments (scale bar, 10 µm). (D) Total and cell surface expression of E1 and E2. Cells were harvested at 24 h post-transfection, stained with antibodies against E1 or E2 under permeabilized (total) or nonpermeabilized (surface) conditions, and analyzed by flow cytometry. The relative mean fluorescence intensities (MFIs) of positive cells are shown and represent the expression of E1 or E2 relative to WT, which was set as 1. Bar graphs show the mean of two independent experiments, with individual results shown as points.
FIG 4
FIG 4 Effect of the E1 W448R mutation on E2/E1 incorporation and infectivity of VSV pseudoparticles. 293FT cells were transfected with an empty vector or with expression constructs for WT or E1 W448R versions of RuV E2/E1. Expressing cells were infected with psVSV-G, a single-cycle VSV lacking the G gene but carrying VSV G protein. After infection, cells were washed to remove residual inoculum and cultured for 48 h. (A) VSV particles in the culture supernatant were pelleted and analyzed by WB with RuV pAb and a mAb to the VSV matrix (M) protein. An E2/E1 population resistant to reduction and boiling is marked by an asterisk. (B) The infectivity of psVSV produced from cells transfected as indicated was measured by infectious center assay on Vero cells in the presence of the I1 mAb against VSV G, control mAb, or culture media alone. A psVSV-G single-cycle virus stock was used as a control for I1 neutralization. The bar graph represents the mean ± SD of three independent experiments, with open circles showing the results of individual experiments. The limit of virus detection is shown as a dotted line. Statistical analyses were performed using an unpaired t-test. **, P < 0.01.
FIG 5
FIG 5 Conservation of E1 W448 and effect of the mutation on RuV growth. (A) Amino acid sequence alignment of the E1 TM region from RuV (GenBank ID P08563), RuhV (GenBank ID QKO01647.1), and RusV (GenBank ID QKO01649.2), with the putative TM domain (21, 41) indicated as a dotted green box. The natural reservoir for each virus is indicated on the left. The site of the E1 W448R mutation is indicated above (RuV E1 numbering). (B) Vero cells were inoculated at a multiplicity of infection (MOI) of 0.01 FFU/cell with virus stocks generated from the RuV-WT infectious clone or from two independent infectious clones of the RuV-E1W448R mutant. Culture supernatants were harvested at the indicated time points and titered by focus-forming assay on Vero cells. The sequences of the mutant viruses were confirmed at the 96 h timepoint.
FIG 6
FIG 6 Effects of the E1 W448R mutation on the properties of E2/E1. (A) Neutralization of RuV-WT, RuV-E1W448R, and rVSV-RuV-E2E1W448R by RuV E1 antibodies. Viruses were incubated with the indicated concentrations of mAb E1-20 (Millipore), mAb E1 (Meridian), or the negative control mAb chCHK-152 (control mAb). Titers were determined by focus-forming assay (FFA) on Vero cells. IC₅₀ values are shown as an inset. (B) Fusion infection assay to test the calcium requirement. Dilutions of RuV-WT or rVSV-RuV-E2E1W448R virus stocks were prebound to Vero cells on ice for 90 min. Cells were then incubated at 37°C for 20 min in a medium containing the indicated concentrations of CaCl₂ and then cultured for 48 h at 37°C in growth medium containing 20 mM NH₄Cl to prevent secondary infection. Infected cells were scored by FFA. Infectivity was normalized to that observed at 2 mM CaCl₂. (C) Cell-type dependence of primary infection by RuV-WT, RuV-E1W448R, and rVSV-RuV-E2E1W448R. Virus infectivity on the indicated cell lines was determined by FFA. The limit of detection (20 FFU/mL) is shown as a dotted line. (D) Characterization of the pH threshold for RuV-E1W448R and rVSV-RuV-E2E1W448R fusion. Viruses were prebound to Vero cells as in Fig. 6B, then incubated for 3 min at 37°C in medium of the indicated pH, and cultured for 48 h at 37°C in growth medium plus 20 mM NH₄Cl. Infection was scored by FFA and normalized to maximal fusion, which was observed at pH 6.2 for RuV-WT and at pH 5.5 for RuV-E1W448R and rVSV-RuV-E2E1W448R. Graphs in A show individual data points from two independent experiments. Data in B, C, and D represent the mean ± SD of three independent experiments, with the open circles in C showing the results from each experiment. Statistical analyses were carried out by one-way ANOVA with Dunnett's multiple comparisons test. ****, P < 0.0001; ***, P < 0.001; **, P < 0.01.
FIG 7
FIG 7 pH dependence of the generation of trypsin-resistant E1. (A) WT or E1 W448R RuV preparations were either pre-neutralized (PN; treated with a mixture of acetic acid and HEPES mimicking the end stage of pH 5.0 treatment) or incubated at the indicated pH for 10 min at 37°C, adjusted to neutral pH, solubilized with Triton X-100, and digested for 30 min at 37°C with 125 µg trypsin/mL (+ samples). In parallel, equivalent aliquots of non-pH-treated virus were incubated with 125 µg trypsin/mL (+), 200 µg trypsin/mL (++), or no trypsin (−). Samples were analyzed by SDS-PAGE and WB with RuV pAb. The white space between the WT and W448R panels indicates two different gels from the same experiment that were aligned. The data shown are representative of three independent experiments. (B) Quantitation of E1 trypsin sensitivity from samples prepared as in panel A. The E1 signal of the no-treatment samples (not treated with pH or trypsin; lanes 3 in panel A) was set to 100%. The trypsin-treated pre-neutralized samples (lanes 4 in panel A) and pH-treated samples (lanes 5-7 in panel A) are shown relative to the no-treatment samples. Data shown are the mean ± SD of three independent experiments, with individual data points shown as open circles. Statistical analyses were carried out by unpaired t-test. *, P ≤ 0.05; ns, not significant. | 2024-02-10T06:17:32.280Z | 2024-02-09T00:00:00.000 | {
"year": 2024,
"sha1": "389af56e1aa30d137b5b84f90d38559f3c33d2d8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/mbio.02373-23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1f86cdb2241f7ea6ebc440194c503b58e82a54d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119214386 | pes2o/s2orc | v3-fos-license | Four-pion production in tau decays and e+e- annihilation: an update
An improved description of four-pion production in electron-positron annihilation and in tau lepton decays is presented. The model amplitude is fitted to recent data from BaBar which cover a wide energy range and which were obtained exploiting the radiative return. Predicting tau decay distributions from e+e- data and comparing these predictions with ALEPH and CLEO results, the validity of isospin symmetry is confirmed within the present experimental errors. A good description of two- and three-pion sub-distributions is obtained. Special emphasis is put on the predictions for omega pi (->pi+pi-pi0) in e+e- annihilation and in tau decay. The model amplitude is implemented in the Monte Carlo generator PHOKHARA.
I. INTRODUCTION
The production of four pions in τ decays and e⁺e⁻ annihilation has received considerable attention, both from the theoretical [1,2,3,4,5,6,7,8,9] and the experimental side [10,11,12,13,14,15,16,17]. Relating the cross sections and rates for the four charge combinations (π⁺π⁻2π⁰, 2π⁺2π⁻, 2π⁻π⁺π⁰ and π⁻3π⁰) gives important hints on the validity of isospin symmetry and the size of the isospin breaking terms. The dependence of the rates and the cross sections on Q², the invariant mass of the four-pion system, and the investigation of differential distributions, e.g. of the two- and/or three-pion masses, gives information on the resonance structure of the amplitude. In the low Q² region, predictions based on the chiral Lagrangian can be tested, which, however, must be complemented by resonance physics in order to properly describe the rates in the dominant region between 1 and 3 GeV. The e⁺e⁻ cross section is, furthermore, important to evaluate the hadronic vacuum polarization, which in turn is essential for the precise prediction of the muon anomalous magnetic moment and the running of the electromagnetic coupling [18,19].
From the experimental side, precise τ data have been obtained by the ALEPH [15] and CLEO [11] collaborations, which, however, are naturally restricted to Q below 1.77 GeV (the τ mass). The e⁺e⁻ cross section has been measured by CMD2 [10,12,14] and SND [13] (data from other experiments are far less accurate and will not be used in this paper) and, more recently, by BaBar [16,17] through the method of radiative return, which covers energies up to 4.5 GeV. This method, which was proposed in [5,20,21], allows the large luminosity at B-factories to be used for a measurement of the e⁺e⁻ cross section in the region of interest.
From the theory side, the first evaluation based on chiral perturbation theory has been performed by Fischer, Wagner and Wess [1] and applied to τ decays. Subsequently this ansatz was extended [3] to include ρ, a₁ and f₀ resonances, which are clearly visible in subdistributions. In addition the ωπ mode was introduced, again predicted from the chiral anomaly [2,6]. Later this ansatz, slightly modified, was implemented in the generator EVA [21] to simulate 4π production in the radiative return [5]. As stated above, the low Q² region should be best suited for a description based on chiral Lagrangians. Combining one-loop chiral corrections at low Q² with resonance enhancements at intermediate energies, precise predictions have been obtained in [6], which will be discussed below.
In view of these recent theoretical and experimental developments, together with the need for an optimal implementation of the 4π mode into the Monte Carlo event generator PHOKHARA [22,23,24,25,26,27,28,29], an improved ansatz for the corresponding hadronic amplitude has been developed. The ansatz is largely based on [3,5] and [28] (concerning the ω part), with model parameters fitted to the recent BaBar results. In order to accommodate the ρ⁺ρ⁻ signal observed in [17], we include a contribution which is modeled to mimic an SU(2) gauge theory with the ρ-meson (and its radial excitations) as gauge boson(s).
Our paper is organized as follows: To facilitate the subsequent discussion, in Section II the basic definitions are introduced and the (well known) isospin relations between the amplitudes and the rates of the four channels are collected. The validity of these relations is investigated in Section III, using data from e⁺e⁻ annihilation to predict the corresponding, experimentally measured distributions for τ decays. The ingredients of the ansatz for the matrix element of the hadronic current are discussed in Section IV. The comparison of this ansatz with e⁺e⁻ data and the fit of its parameters are presented in Section V, together with the comparison between the model and data for a variety of distributions. The implications of the model for τ decays are discussed in Section VI, and the implementation into the generator PHOKHARA and related technical tests in Section VII. A brief summary and our conclusions are given in Section VIII. A detailed description of our model with the complete list of parameters can be found in the Appendix.
The function J^µ ≡ J^µ(q₁, q₂, q₃, q₄) is symmetric (antisymmetric) with respect to the interchange of q₁ and q₂ (q₃ and q₄).
The currents defined in Eq. (3) contain the complete information about the hadronic cross section; in particular, R(Q²) is equal to σ(e⁺e⁻ → hadrons)(Q²)/σ_point, with σ_point = 4πα²/(3Q²), and dΦ_n(Q; q₁, . . . , q_n) denotes the n-body phase space with all statistical factors included. The amplitude describing the τ decay into an arbitrary number of hadrons plus a neutrino (excluding radiative corrections) involves the hadronic current J⁻_α, with J⁻_α(0) ≡ d̄ γ_α u at the quark level, where we restrict our considerations to the Cabibbo-allowed vector part of the hadronic current.
The differential τ decay rates are expressed in terms of Γ_e = G_F² m_τ⁵/(192π³). Note the relative factor of 2 between the definitions in Eq. (6) and Eq. (10). We have also included the electroweak correction factor S_EW (we use S_EW = 1.0198 [7]) to account for standard electroweak corrections.
The function R_τ is related to the spectral function defined by CLEO [11] through R_τ(−−+0) = 3π V_{3ππ⁰}, and through R_τ = 3v₁ to the vector spectral functions defined by ALEPH [15]. In this paper we will use the normalization of the spectral functions chosen by ALEPH.
The four-pion spectral functions and the cross sections can be expressed as linear combinations of two integrals. The additional contribution C to R(+ + − −) = A + B + C vanishes for any symmetric phase space configuration. Eqs. (12) correspond to the familiar relations between τ decay rates and e⁺e⁻ annihilation cross sections.
III. ISOSPIN SYMMETRY - EXPERIMENTAL SITUATION
In this section we address the question of whether present experiments require the inclusion of isospin violating effects in the model. Combining the results from BaBar [16] on σ(e⁺e⁻ → 2π⁺2π⁻) with their preliminary results on σ(e⁺e⁻ → 2π⁰π⁺π⁻) [17] and using Eqs. (14), one obtains predictions for the τ spectral functions. These can be compared with ALEPH [15] and CLEO [11] data (compare also [17]). As shown in Fig. 1 and Fig. 2, τ and e⁺e⁻ data are in good agreement within the errors, even if one observes systematic shifts. However, these shifts are well within the 5% systematic error of CLEO and the 6% (2π⁻π⁺π⁰) and 10% (π⁻3π⁰) errors for the ALEPH spectral functions (ALEPH does not give the systematic error separately), as well as the 5% to 12% systematic error for the BaBar σ(e⁺e⁻ → 2π⁺2π⁻). For the preliminary BaBar data [17] on σ(e⁺e⁻ → 2π⁰π⁺π⁻) a 10% systematic error is assumed. Truly isospin breaking effects are expected to occur at the percent level due to the π± − π⁰ mass difference alone [5].
From the cross section σ(e⁺e⁻ → 2π⁰π⁺π⁻) and the relative contributions of the ωπ final state as given in [17] (since the errors were not specified there, we attribute a 20% error to the spectrum), one can infer the ω contribution to σ(e⁺e⁻ → 2π⁰π⁺π⁻). Based on this result one can predict the ω part of the τ⁻ → ν2π⁻π⁺π⁰ spectral function and compare it with the CLEO result. Satisfactory agreement is observed in Fig. 3.
From the comparisons of experimental data we conclude that no isospin symmetry violation is observed within the present accuracy. Thus the model we propose to describe the data is based on isospin symmetry. However, effects from the pion mass difference in the phase space are included.

FIG. 1: The spectral function of the τ⁻ → ν3π⁰π⁻ decay mode. ALEPH data [15] versus predictions from BaBar data [16,17] and the model predictions.

FIG. 2: The spectral function of the τ⁻ → ν2π⁻π⁺π⁰ decay mode. ALEPH [15] and CLEO [11] data versus predictions from BaBar data [16,17] and the model predictions.
IV. THE MODEL OF THE FOUR PION ELECTROMAGNETIC CURRENT
There are many motivations why the model adopted in [5] should be updated. First of all, new and more accurate data are available. The CLEO data on tau decays [11], which were not used in [5], the tau spectral functions from ALEPH [15], and the measurement of the cross section of the reaction e⁺e⁻ → 2π⁺2π⁻ via the radiative return method by BaBar [16] provide us with the opportunity for a substantial improvement of the model implemented in the event generator PHOKHARA [22,23,24,25,26,27,28,29]. The omega part of the current, which in [3,5] was implemented without structure, is now known much better from phenomenological studies [28]. The new preliminary data from BaBar [17] on the reaction e⁺e⁻ → 2π⁰π⁺π⁻ also show richer structure than implemented in [5]. All this was taken into account in constructing the model presented in this paper. The amplitude used in [5] is schematically depicted in Fig. 4. In the contributions from the first two diagrams, which proceed through the intermediate resonances ρ → a₁π and ρ → f₀ρ respectively (where ρ stands for ρ(770) and its radial excitations), only the parameters of the current were adapted to the improved data and a new ρ resonance (ρ(2040)) was added (necessary to fit the BaBar [16] data). The contribution from ω, where previously the substructure of the omega decay was not taken into account, is now modeled using information from [28]. Schematically, the new ω amplitude is depicted in Fig. 5.

FIG. 3: The omega part of the spectral function of the τ⁻ → ν2π⁻π⁺π⁰ decay mode. CLEO [11] data versus predictions based on preliminary BaBar data [17] and the model predictions.
BaBar has, furthermore, observed [17] a strong ρ⁺ρ⁻ contribution. Thus a new part containing the ρ → ρρ contribution has been added, treating the ρ particles like SU(2) gauge bosons. The contributions to the amplitude are depicted in Fig. 6. For more general frameworks where such terms are present, see [30] (and references therein). The only free parameter of this SU(2) ansatz, the coupling constant g (g = g_ρππ), can be extracted from the ρ → ππ decay. However, as it stands, the model leads to a wrong high energy behavior of the cross section, falling less rapidly than the data. This problem can be cured by adding ρ′ contributions and allowing for trilinear couplings between ρ and ρ′. It was also necessary to relax the fixed coupling g to fit the data. The detailed description can be found in the Appendix. The model can be further refined when more experimental information is available.
The behavior of the four-pion amplitude in the low Q² region has also been studied [6] in the framework of chiral resonance theory, including terms up to O(p⁴) [31]. The implementation of resonances and their parameters differs from the choice in this paper. The results of the two models are compared to the data in Figs. 10 and 11.
V. FIT OF THE CURRENT PARAMETERS TO THE EXPERIMENTAL DATA
To separate the well-measured ω contribution from the rest, we fitted the parameters of the model to the ω part of the cross section of the reaction e⁺e⁻ → 2π⁰π⁺π⁻ extracted from preliminary BaBar data [17]. Furthermore, we fitted the model parameters to the cross sections of the reactions e⁺e⁻ → 2π⁰π⁺π⁻ and e⁺e⁻ → 2π⁺2π⁻ measured by BaBar [16,17].
It is interesting to see how the model compares to predictions based on the chiral Lagrangian [6] in the low Q² region, where this ansatz is expected to be applicable. In Fig. 10 (Fig. 11) this comparison is shown for the charged (neutral) mode, together with data from BaBar [16,17], CMD2 [10,12,14] and SND [13]. Since our model parameters were fitted to that cross section, the thick dotted curve is not a prediction, apart from the Q² region below 0.8 GeV, where the contribution to the χ² of the fit is negligible due to the low accuracy of the data.
The sub-distributions can be qualitatively compared (Fig. 12) with plots presented by BaBar [16]. These were not used in the fit and thus can be considered as predictions. Integrals for both the experimental and the theoretical plots are equal by construction. Further refinements of the model will be possible when the data on sub-distributions become available.

FIG. 8: Fit to the data for σ(e⁺e⁻ → 2π⁰π⁺π⁻), taken from [17] (a 10% systematic error was added to the statistical error). For comparison, CMD2 [10] and SND [13] data, which are consistent with the BaBar data, are also shown (without their 10%-20% error bars). Contributions from the ρ part of the current (Eq. (A.8)) to the cross section (see text for definition) are also shown.

FIG. 9: Fit to the data for σ(e⁺e⁻ → 2π⁺2π⁻), taken from [16]. For comparison, CMD2 [10,12,14] and SND [13] data, which are consistent with the BaBar data, are also shown (without their 7%-20% error bars).
The contributions from two ρ mesons in the final state are shown as a dashed line in Fig. 8. They were extracted by selecting events with π⁺π⁰ and π⁻π⁰ invariant masses within the range from m_ρ − Γ_ρ to m_ρ + Γ_ρ. These are affected by background from the other amplitudes and thus do not correspond exactly to the contributions from the ρ part of the current (Fig. 6 and Eq. (A.8)); hence the separation is not as clean as for the ω case. The model prediction is smaller than the BaBar result [17].
Selected two- and three-pion invariant mass subdistributions for the reaction e⁺e⁻ → 2π⁰π⁺π⁻γ(γ) are shown in Fig. 13. The contributions from the various resonances included in the model are clearly visible. Comparisons with the predictions will be possible when the final BaBar results are published.
VI. MODEL PREDICTIONS FOR τ DECAYS
One can confront the model with the data [32] for the partial τ decay rates to four-pion final states. The results are collected in Table I. The theoretical error is obtained from the errors of the model parameters extracted in the fit. Within the quoted errors, the predictions are in good agreement with the data, even if one observes a sizable difference between the data for Br(τ⁻ → ν_τ 2π⁻π⁺π⁰) and the prediction via the isospin relations. At present the results are still consistent within the conservatively estimated error, which is dominated by that of the preliminary BaBar result for σ(e⁺e⁻ → 2π⁰π⁺π⁻). With an expected error of about 5%, the final BaBar result will further push the accuracy of the isospin symmetry tests.
The model of the 4π hadronic current proposed in this paper was fitted to BaBar data and relies on isospin symmetry. Thus its predictions for the τ spectral functions follow the predictions from BaBar data based on the isospin symmetry assumption (presented in Section III), apart from few-percent phase space effects coming from the π± − π⁰ mass difference. The model predictions are also shown in Figs. 1, 2 and 3. The central curve represents the model prediction, while the upper and lower curves are the error estimates based on the errors of the fitted model parameters. Two-, three- and four-pion invariant mass distributions obtained within our model are compared with CLEO data (available only as plots) in Fig. 14. Although the predictions and the data differ as far as the detailed description is concerned, good qualitative agreement is observed.

TABLE I: Partial τ decay rates to four-pion final states: data [32] and predictions based on BaBar data [16,17] and isospin symmetry.
VII. IMPLEMENTATION INTO PHOKHARA AND TESTS OF THE MONTE CARLO GENERATOR
The model for the hadronic current was implemented into the PHOKHARA event generator (version 7.0).
It will be available at http://ific.uv.es/∼rodrigo/phokhara/ together with the implementation of the J/ψ and ψ(2S) contributions to 2-body hadronic final states (in preparation). Only the current J^µ was coded in the form described in the Appendix; the charged mode is obtained via the relation Eq. (3). Neither the ω part of the current nor the double ρ resonance diagrams (left in Fig. 6) contribute to that part. Only the a priori weights used in the multi-channel Monte Carlo generation were changed as compared to previous versions [23]. Nevertheless, tests checking the implementation were performed to assure a proper technical precision of the code. The NLO version of the code was checked, for configurations without any cuts, against the analytic NLO results of [33] (see also [23]), separately for one- and two-photon contributions. The separation w = E_γ/√s = 10⁻⁴ between the soft (integrated analytically) and hard (generated) parts was used in this test. The precision of the tests, limited by the Monte Carlo statistics, is significantly below one per mille. As the analytic formula contains as a factor the cross section of the process without photon emission (σ(e⁺e⁻ → 4π)), which is not known analytically, it was obtained by means of Monte Carlo integration with a dedicated program. In that program, in contrast to PHOKHARA, flat phase space generation was used to avoid any errors due to the change of variables. The independence of the results from the separation into soft and hard parts was also tested with similar precision. | 2019-04-12T17:26:10.095Z | 2008-04-01T00:00:00.000 | {
"year": 2008,
"sha1": "7c5a31381b1ee7a4c2f301b2252a01753821acf7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0804.0359",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "103850f50fb9841429710e34f79e33c81b4e11bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
198392290 | pes2o/s2orc | v3-fos-license | Application of the Standardized Form of Magnetite Nanoparticles (ICNB) in Creating a Simple and Practical Method of Additive Modernization of Preservation Solutions for Red Blood Cells
This study was devoted to the use of nanotechnology to correct the functional activity of red blood cells (RBCs) during storage at a positive temperature. It was established that NaCl saline which had previously been processed by magnetite nanoparticles (ICNB) had a marked membrane-stabilizing effect, inhibiting hemolysis and increasing the sedimentation stability of preserved RBCs. A complex analysis of the obtained data allowed the primary mechanisms of the effect of ICNB-processed saline on preserved RBCs to be determined. The proposed method of additive modernization of preserved RBCs was adapted to the production process. These first results support a simple and practical method of additive modernization of preservation solutions that does not violate compliance requirements and improves the quality, efficiency and safety of RBC transfusion.
The benefits gained from improved RBC component quality should more than justify any real or perceived inconvenience to the blood services in implementing adjustments to their processing procedures, or the additional processing costs of introducing new-generation RBC additive solutions. The bigger challenge that has hindered the advancement of this field is the significant financial burden and risk for manufacturers of blood collection systems to obtain licensure and to bring a new RBC storage system to a market that is inherently based on very low profit margins, such as the blood services sector.
The financial burden to technology developers of new RBC storage systems is largely due to regulatory requirements, particularly those mandated by the FDA. In addition to in vitro data, the FDA requires in vivo data on the 24-hour post-transfusion recovery of transfused autologous RBCs. Recently the FDA has tightened the assessment and acceptance criteria, making it potentially more difficult and expensive to bring new RBC storage systems to market. Although the regulatory agencies are to be commended for focusing on the safety of new therapies and devices for patients, there are concerns that the regulatory requirements for RBC storage systems have become excessive and are hindering progress [12].
Another significant challenge for obtaining licensure of new RBC storage systems is the inherent donor-related variability in stored RBC quality. It has long been recognized that RBCs from some donors do not store well, as evidenced by higher levels of hemolysis at RBC component expiry [14] and poorer in vivo 24 hr recovery data [13]. The relationship between specific donors and the poorer quality of some stored RBC components was confirmed in a recent paired cross-over study designed to compare manual and automated whole blood processing methods [14,15]. Technology developers are unwilling to take on the risk that a random poor-quality RBC component could jeopardize the success of licensure tests and clinical trials of their new blood storage systems, and their significant financial investment.
In Ukraine, the first standardized and biocompatible magnetite nanoparticles for medical use were manufactured and patented in 1998. These are the intracorporeal nanobiocorrector of brand ICNB, the magnet-controlled sorbent of brand MCS-B, and the biologically active nanodevice of brand Micromage-B [16]. It is well established that magnetite nanoparticles effectively modulate the metabolic processes in leukocytes and regulate the activity of the enzyme link of the antioxidant system in erythrocytes of healthy and sick patients [17][18][19]. Previous complex investigations of the influence of nanotechnology preparations on cell metabolism showed that, on the whole, standardized biocompatible magnetite nanoparticles have a nonspecific, modulating effect on metabolic processes. Ultrastructural investigations of the reticuloendothelial system (liver, lungs and kidneys) proved that intravenous injection of biocompatible magnetite nanoparticles caused nonspecific activation of metabolic processes, enhancement of adaptive mechanisms and of the potential of cell organelles, and acceleration of reparative processes at the level of membranes and macromolecules [18][19][20][21]. The sorption and indirect (magnetic) effects of magnetite nanoparticles not only allow selective adsorption of cell surface membrane proteins (according to the principle of magnetophoresis), but also prevent the oxidative modification of proteins by stabilizing their active groups, normalizing the state of receptors located on the cell surface membrane, and increasing the activity of membrane-bound enzymes [22][23][24].
Recent work on the use of magnetite nanoparticles (ICNB) as a contrast agent in MRI investigations of cancer reliably showed that the nanoparticles cause reversible changes associated with a temporary increase in the mobility of hydrogen protons in the pericellular fluid, which inevitably modifies the metabolism of malignant cells [25]. The results of these investigations have not only widened the understanding of the mechanisms of action of nanoparticles on the extracellular and intracellular spaces, but have also revealed new aspects of cellular metabolism and clarified the role of membrane-bound cellular enzymes in the regulation of metabolism [23,[26][27][28][29].
It was also established that extracorporeal processing of blood with MCS-B nanoparticles reliably reduces the Ca,Mg-ATPase activity of erythrocytes. Studies have shown that magnetite nanoparticles are able to inhibit hemolysis of heparinized blood, increase the levels of ATP and 2,3-DPG in red blood cells, regulate transmembrane metabolism and inhibit eryptosis [23,30,31]. The above was the basis for the choice of the theme of this study, devoted to the use of nanotechnology to correct the functional activity of red blood cells during storage at a positive temperature. The main purpose of the first stage of the study is to develop a simple and practical method of additive modernization of preservation solutions that does not violate compliance requirements and improves the quality, efficiency and safety of red blood cell transfusion.
Methods
Before starting the experiments with red blood cells (RBCs), we used visual assessment to compare the brightness of the images of the different variants of solutions. These were: ICNB, 0.9% NaCl solution, and 0.9% NaCl solution that had been treated by magnetite nanoparticles (ICNB). The tests were performed on a Siemens Magnetom Concerto MR tomograph with a magnetic field strength of 0.2 T.
Axial tomograms were obtained:
a.
T1-weighted spin-echo sequences: TR 50 ms, TE 17 ms, field of view 250 mm, slice thickness 2 mm.
b.
T2-weighted gradient-echo sequences: TR 500 ms, TE 17 ms, field of view 180 mm, slice thickness 4 mm.
From each bag, 3 mL amounts of red blood cells were distributed into 20 sterile glass tubes. Then, 2 mL amounts of 0.9% NaCl solution were added to the first 10 tubes (controls). Into the next 10 tubes (tests), 2 mL amounts of 0.9% NaCl solution that had previously been processed by ICNB were added. Thus, the tubes were divided into control and test groups as described above.
The state of the red blood cells was determined visually by registration of signs of hemolysis. Hemolysis was also controlled photometrically by means of Plasma/Low Hb and GPHP-01 devices. An SM-70M-07 centrifuge was used to obtain the supernatant. Hematocrit was calculated by means of a hematocrit ruler, using the standard ratio of the packed red cell column height to the total column height (×100%). Morphology of the red blood cells was studied by direct microscopy. Sedimentation stability of red blood cells was studied by Panchenkov's method. Changes in the acidity of the red blood cells were followed by pH-metry. Tests were carried out in six stages: day 1-I, day 7-II, day 14-III, day 21-IV, day 28-V, day 35-VI. After the biochemical investigations, the blood was stored in a refrigerating chamber at +4°C. Statistical processing of the obtained results was carried out by the parametric method of variation statistics using Student's t-criterion. The data were processed by means of Excel.
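As a reference for the statistical step, a minimal Python equivalent of the described processing (mean ± SEM and a two-sample Student's t-test between control and test tubes; the numbers are illustrative, not the study's data):

    import numpy as np
    from scipy import stats

    # Hypothetical ESR readings (mm) for 10 control and 10 test tubes at one stage
    control = np.array([60, 63, 61, 64, 62, 59, 63, 62, 61, 65], dtype=float)
    test = np.array([52, 54, 53, 51, 55, 52, 53, 54, 52, 53], dtype=float)

    for name, x in (("control", control), ("test", test)):
        sem = x.std(ddof=1) / np.sqrt(x.size)  # standard error of the mean
        print(f"{name}: M ± m = {x.mean():.1f} ± {sem:.2f} mm")

    # Classical Student's t-test assumes equal variances
    t, p = stats.ttest_ind(control, test, equal_var=True)
    print(f"t = {t:.2f}, p = {p:.3g}")  # p < 0.001 would match the reported significance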
Results and Discussion
The results of the visual assessment of image brightness for the liquids used in the experiment at MRI are presented in Figure 1. Figure 1 illustrates the difference in image brightness of the compared liquids at MRI. The order of increasing brightness is the following: ICNB, 0.9% NaCl solution, 0.9% NaCl solution treated by ICNB nanoparticles. The difference in image brightness is explained in the following way:
a.
Variant 1. Magnetite nanoparticles of ICNB reduce the mobility of hydrogen protons in the liquid medium (0.9% NaCl solution). Therefore, the image brightness was very low on MRI.
b.
Variant 2. The higher mobility of hydrogen protons in the intact 0.9% NaCl solution increased the brightness in comparison with Variant 1.
c.
Variant 3. The mobility of hydrogen protons in 0.9% NaCl solution that had previously been processed by ICNB nanoparticles is maximal. Therefore, the image brightness is much higher than in the previous variants. Thus, this preliminary investigation clearly shows that ICNB nanoparticles change the mobility and orientation of hydrogen protons in liquids, which is registered in the visual evaluation of MRI images.
The next, central set of studies was aimed at the functional activity of red blood cells during storage at a positive temperature after modification of the mobility and spatial orientation of hydrogen protons in the pericellular fluid using ICNB magnetite nanoparticles.
A study of the sedimentation stability of RBCs showed a highly significant difference between the control and test data. Data on the sedimentation stability of the RBCs at the stages of the study are presented in Figure 2. Figure 2 shows that the sedimentation stability of RBCs in the test tubes is reliably higher (p<0.001) than in the control tubes at all stages of the research. It should be noted that the negative surface charge of human RBCs results primarily from the presence of ionogenic carboxyl groups of sialic acids on the cell surface [32][33][34]. The value of the charge is determined by the amount of adenosine triphosphate (ATP). ATP is a macroergic compound, a product of glycolysis, and the sedimentation stability of RBCs is determined by the amount of ATP. In this case, the change in the mobility and spatial orientation of the hydrogen protons in the extracellular liquid significantly increased the sedimentation stability of RBCs in the test compared to the control. For greater clarity, the results of RBC sedimentation are shown in Figure 3. Figure 3 shows that for RBCs preserved in the anticoagulant CPD the sedimentation was 62 mm in the control and 53 mm in the test; for the anticoagulant CPDA-1, 58 mm in the control and 52 mm in the test. Thus, following the logic of the above reasoning, if the improved sedimentation stability of RBCs is associated with an increase in ATP, then isotonic solution which was previously processed by ICNB should actively stabilize the RBC membranes and inhibit hemolysis. Therefore, the next investigation was to study the hemolysis of preserved RBCs at the various stages.
The decrease in the formation of 2.3 DPG leads to the acidulation of intracellular environment of the RBCs.Deoxygenated hemoglobin which was previously formed actively binds the [H+], that comes from the extracellular environment and alkalizes the extracellular environment.The effect of RBC reduction, the appearance of widespread spheroechinocytes is observed in microscopy.Subsequently, oxyhemoglobin moves to the extracellular environment as a result of processes intensification destruction of the membranes of RBCs.The accumulation of oxygenated hemoglobin in the extracellular environment causes by shifting towards the acid of the pH.
The above mechanisms have been confirmed in the study of the dynamics of pH changes in the extracellular medium of preserved of the RBCs.The dynamics of pH changes in the extracellular medium of RBCs storage at key stages of the study on the example of preserving agent CPD is shown in Figure 7. Figure 7 demonstrates that despite the initial acidic environment of the preservative (pH CPD = 5-6) in the control and test in the extracellular medium at the first stage of the study alkaline pH is registered.The appearance of differences in the dynamics of change in the color of the pH indicator between the control and the test is clearly observed in the subsequent stages of the study.So, against the background of the appearance of hemolysis signs significant decrease of the pH to 7.1-7.2 in the control at the VI stage of the study is registered.On the contrary, the pH of the extracellular medium remains relatively stable and corresponds to the parameters 7.4-7.5 in the test at the VI stage of the study.Thus, obtained result, show that change in cytoplasmic pH is both necessary and sufficient for the shape changes of human erythrocytes [36].The effect of hemolysis inhibition by the method of additive modernization of preservation solutions, that adapted to the manufacture process at the VI stage of the study is shown in Figure 8.As a result of the studies it was found that physiologic solution NaCl which previously was processed by ICNB and added to the preserved of the red blood cells actively inhibits of hemolysis processes of RBCs at the storage stages at a positive temperature.A comprehensive analysis of data revealed the primary mechanisms of the effect modernized of the saline solution on the preserved RBCs.It was established that saline NaCl, which had previously been processed by magnetite nanoparticles (ICNB) had a marked membrane-stabilizing effect, inhibits hemolysis and increasing the sedimentation stability of preserved RBCs.In General, these effects provide the sustainability of the functional activity of preserved RBCs in during storage.Thus, the first optimistic results were obtained on the way of creation a simple and practical method of additive modernization of preservation solutions that does not violate the compliance requirements, improves the quality, efficiency and safety transfusion of red blood cells.
Figure 1 :
Figure 1: Images of fluids that were studied in the research at the MRI.Materials: 1. Standardized intracorporeal Nano bio corrector of ICNB was taken as nanoparticles.Magnetite nanoparticles synthesized by co-precipitation method.The main physics and chemical properties of ICNB the following data and also in Tables 1-4; Figure 1 & 2 were presented
Figure 2 :
Figure 2: Study of the sedimentation stability of RBCs at the stages (M±m; p<0.001).
Figure 3 :
Figure 3: Sedimentation stability of erythrocytes at the stage VI of the study
Figure 4 :
Figure 4: Visual assessment of erythrocyte hemolysis under the various exposure variants at stage VI.
Figure 5 :
Figure 5: The presence of free Hb in the supernatant (preserving agent CPD).
Table 5 :
Table 5: Objective data on free Hb and calculated HCT at stage VI of the study.
Figure 6 :
Figure 6: Microscopic examination of erythrocyte morphology in the different variants of preservation and treatment.
Figure 7 :
Figure 7: Dynamics of pH changes in the extracellular medium during RBC storage at the key stages of the study, for the example of the preserving agent CPD.
Figure 8 :
Figure 8: The hemolysis inhibition effect of the method of additive modernization of preservation solutions adapted to the manufacturing process, at stage VI of the study.
Table 1 :
The calculated lattice parameters of the phases.
Table 2 :
Determination of the percent composition of ICNB by the X-ray spectrometer ARL OPTIM'X (semi-quantitative analysis). | 2019-05-01T13:04:45.306Z | 2018-09-11T00:00:00.000 | {
"year": 2018,
"sha1": "f3d572e363ade21fbc51ee51bb6a7a15352389a5",
"oa_license": "CCBY",
"oa_url": "https://lupinepublishers.com/anesthesia-pain-medicine-journal/pdf/GJAPM.MS.ID.000101.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d3dd3fa9e7820f37077519ddce515f500659868f",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Chemistry"
]
} |
214026205 | pes2o/s2orc | v3-fos-license | Evaluating Employee Performance Using TOPSIS
The aim of this study is to build a decision support system using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method for evaluating employee performance at BJB bank. The system was developed with the waterfall development model and the PHP programming language. TOPSIS is a multi-criteria decision-making method. The final output of the system is a ranking of the alternatives (the best employees) based on the preference values obtained from the calculations. This decision support system will be used by BJB bank to support decisions on determining the best employees in the office.
1.
Introduction BJB bank successfully posted net profit growth of 55.6% year-on-year (y-o-y) in September 2016. This achievement was inseparable from the quality of the Human Resources (HR) of BJB bank. In Q3 2016, BJB bank had 7,535 employees, and to support better HR performance, BJB bank consistently provided training and development of knowledge and expertise. The human aspect is a key and important indicator in a company, including in efforts to develop the business and strengthen the corporate organization [1]. The fundamental goal of HR in an organization is to effectively manage its employees by encouraging positive attitudes, like increased productivity, job satisfaction, motivation, and organizational citizenship behavior, and by reducing negative employee attitudes, like increased turnover, absenteeism, and deviant workplace behaviour. These factors collectively describe an individual employee's performance at work [2]. The human resources of an organization are considered important resources, especially in the banking sector. To make use of people as a valuable resource, attention must be given to the employees in order to attain organization-based performance [3]. Performance information can influence decisions regarding the efficiency of employee performance, including a senior's decision on whether or not to consider an employee for promotion [4]. It is generally considered that employees with higher emotional intelligence will have higher job satisfaction. This is because employees with higher emotional intelligence are able to develop strategies to overcome the possible consequences of stress, whereas those with less emotional intelligence are not in a position to overcome stressful situations [5]. Companies need employees with high performance to develop the company; performance plays an important role in promoting the company [6]. Workplace diversity refers to the concept of an organization in which different cultures and employees with different characteristics are represented. This leads to cultural diversity in the working area. People might be diverse in several aspects, including the range of ways in which people experience a unique group identity [7].
Definition of Criteria
Based on the results of interviews and analysis with BJB bank, the Human Capital Division reduced the previous 14 assessment criteria to nine. For each criterion, the weights consist of five fuzzy numbers, namely Special/Extraordinary with a weight of 90-100, Satisfying with a weight of 80-89, Good with a weight of 70-79, Needs Improvement with a weight of 60-69, and Not as Expected with a weight of 0-59 [11]. After determining the weight of each criterion as in the tables above, the next step is to set the preference weight, i.e. the level of importance of each criterion as determined by BJB Bank, with a maximum preference weight of 500, referring to the mapped variable information; a sketch of this rating mapping is given below.
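As an illustration of the rating scale described above, a small Python sketch (the function and exact category names are ours, not the paper's):

    def rating_category(score):
        """Map a 0-100 criterion score to its linguistic rating band."""
        if score >= 90:
            return "Special/Extraordinary"   # 90-100
        if score >= 80:
            return "Satisfying"              # 80-89
        if score >= 70:
            return "Good"                    # 70-79
        if score >= 60:
            return "Needs Improvement"       # 60-69
        return "Not as Expected"             # 0-59

    print(rating_category(84))  # -> Satisfying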
Representation of Hierarchical Structures
The hierarchy structure for the case study described above can be seen in Figure 1.
Figure 1. Hierarchical structure of Employee Performance Assessment
After the data are entered (criteria data and employee data), they are represented in the hierarchical structure. The problem that must be formulated in building a hierarchical structure is the goal, as the end point of the decision [12][13]. The goal is the most important decision element in a case; the goal to be achieved in this work is the assessment of employee performance. The selection criteria for employee performance are denoted by the symbol C (criteria). The alternative identification stage identifies the employees who are the object of assessment and the employee performance goals.
Normalization of the Decision Matrix
Calculation of Z parameter can be seen in Figure 2.
Figure 2. Calculation of Z parameter
In this study the decision matrix is normalized using concepts of the normal distribution; in fact, this represents the statistical normalization (standardization) method. The steps of this method and the resulting decision matrix are defined below. The normal distribution converts the basic values of different statistics to standard values between -3.59 and +3.59 by subtracting the mean of the measure and dividing the result by the standard deviation of the data, as shown in Figure 2.
Calculation of Z parameter can be seen in Figure 3.
Figure 3. Calculation of Z parameter
Z_ij is the standard value of each data point, m_j is the most favourable and rational content of each criterion as defined by the experts of the organization, and σ_j is the standard deviation of each criterion, calculated as in Figure 3. Table 2 contains the Z value of each data point.
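A minimal sketch of this standardization step in Python/NumPy (variable names are ours; the paper lets experts set the per-criterion reference values m_j, so we fall back to column means when none are given):

    import numpy as np

    def z_normalize(X, m=None):
        """Column-wise standardization Z_ij = (x_ij - m_j) / s_j.

        X: decision matrix, rows = alternatives, columns = criteria.
        m: per-criterion reference values; defaults to the column means.
        """
        if m is None:
            m = X.mean(axis=0)
        s = X.std(axis=0, ddof=1)  # sample standard deviation per criterion
        return (X - m) / s

    # Toy decision matrix: 4 employees x 3 criteria (scores 0-100)
    X = np.array([[85.0, 78.0, 90.0],
                  [72.0, 81.0, 65.0],
                  [90.0, 69.0, 84.0],
                  [66.0, 88.0, 73.0]])
    print(z_normalize(X).round(2))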
Weighted Normalization
After calculating the standard value for each parameter using the standard distribution formula, the next step is to calculate the probability of occurrence of the standardized content. In this part we apply the normal distribution formula to obtain the probability of occurrence of each criterion [14]. For example, when we convert the content of the first strategy (codification strategy) for the first criterion (top management support) to the probability of its occurrence, we actually calculate what percentage of the organization's top management supports implementing the codification strategy, and so on [15]. Alternatively, we can use a normal distribution table to calculate the probability of any standard content. It is important to mention that after this conversion the values of all content lie between 0 and 1, and at this point we can continue with the remaining steps of TOPSIS [16]. The columns of the normalized decision matrix are then multiplied by the associated weights from the entropy method, as can be seen in Table 2. The weighted and normalized decision matrix is obtained as in Figure 4 and shown in Table 3. The calculation of the two Euclidean distances for each alternative can be seen in Figure 6. We calculated the ideal and nadir (anti-ideal) solutions, the distances of each alternative from the ideal and from the nadir for our problem, and the relative closeness to the ideal solution, and present the results in Table 4. The two distance sums are computed for each alternative: one measuring its separation from the ideal solution and one measuring its separation from the nadir. Based on the closeness coefficients in decreasing order, the fifteen alternatives are ranked as in Table 4; alternative A13 is then the best solution [17].
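Putting the steps together, a compact Python sketch of the whole TOPSIS pipeline described above (standardization, normal-CDF conversion, weighting — the paper derives weights via the entropy method, while here we simply normalize given preference weights — ideal/nadir separation and closeness; all data and names are illustrative):

    import numpy as np
    from scipy.stats import norm

    def topsis_rank(X, weights):
        """Rank alternatives (rows of X) by relative closeness to the ideal.

        X: raw decision matrix (alternatives x criteria), all benefit criteria.
        weights: per-criterion importance weights, normalized to sum to 1 here.
        """
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()

        # Standardize column-wise, then map to (0, 1) via the normal CDF
        Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
        P = norm.cdf(Z)

        V = P * w                                   # weighted normalized matrix
        ideal = V.max(axis=0)                       # best value per criterion
        nadir = V.min(axis=0)                       # worst value per criterion

        d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to ideal
        d_minus = np.sqrt(((V - nadir) ** 2).sum(axis=1))  # distance to nadir
        closeness = d_minus / (d_plus + d_minus)    # relative closeness C_i

        order = np.argsort(-closeness)              # best alternative first
        return closeness, order

    # Toy data: 5 employees x 3 criteria, with preference weights per criterion
    X = np.array([[85, 78, 90], [72, 81, 65], [90, 69, 84],
                  [66, 88, 73], [80, 80, 80]], dtype=float)
    closeness, order = topsis_rank(X, weights=[500, 400, 300])
    print(closeness.round(3), "best:", order[0])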
The TOPSIS method can also be used for ranking banks in terms of their financial, non-financial, and total performance [18]; employee performance assessment is one such application.
4. Conclusion

Based on the research results, applying the TOPSIS method makes the process of evaluating employee performance more efficient, based on measurements of employee performance results compared with targets and standards. The assessment uses criteria for measuring employee performance by comparison with other criteria; candidates who match the criteria are thereby identified, so that the assessment becomes objective, the employee performance appraisal is measurable and transparent, and it is expected not to be affected by outside judgments (political elements). | 2019-11-22T00:58:33.918Z | 2019-11-20T00:00:00.000 | {
"year": 2019,
"sha1": "ad1fc15d10db36ecee5c6b9578482d45efc32bed",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/662/6/062018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "48db7dd87a594efd724ca1aea04d05aa9a3263b8",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
55617579 | pes2o/s2orc | v3-fos-license | Recipes for sparse LDA of horizontal data
Many important modern applications require analyzing data with more variables than observations, called for short horizontal. In such a situation the classical Fisher's linear discriminant analysis (LDA) does not possess a solution because the within-group scatter matrix is singular. Moreover, the number of variables is usually huge, and the classical type of solutions (discriminant functions) are difficult to interpret as they involve all available variables. Nowadays, the aim is to develop fast and reliable algorithms for sparse LDA of horizontal data. The resulting discriminant functions depend on very few original variables, which facilitates their interpretation. The main theoretical and numerical challenge is how to cope with the singularity of the within-group scatter matrix. This work aims at classifying the existing approaches according to the way they tackle this singularity issue, and suggests new ones.
Introduction
Discriminant analysis (DA) is a descriptive multivariate technique for analyzing grouped data, i.e. data where the observations are divided into a number of groups that usually represent samples from different populations [14]. Recently DA has also been viewed as a promising dimensionality reduction technique. Indeed, the presence of group structure in the data additionally facilitates dimensionality reduction. The best known variety of DA is linear discriminant analysis (LDA), whose central goal is to describe the differences between the groups in terms of canonical variates, which are linear combinations of the original variables [14]. LDA requires solving a generalized eigenvalue problem [19, §8.7].
The interpretation of the canonical variates is based on the coefficients of the original variables in the linear combinations. The interpretation can be clear and obvious if the coefficients in the loadings vectors take one of a small number of values which includes exact zero. Unfortunately, in many applications this is not the case. The interpretation problem is exacerbated by the fact that there are three types of coefficients, raw, standardized and structure, which can be used to describe the canonical variates [37,44], where the disadvantages for their interpretation are also discussed. A modification of LDA, aiming for better discrimination and possibly interpretation, is considered in [11,27]. In this approach the vectors of coefficients in the canonical variates are constrained to be orthogonal.
These difficulties are similar to those encountered when interpreting principal component analysis (PCA) [25]. In the last decade this problem was approached by developing PCA procedures that produce sparse component loadings, i.e. loadings containing many zeros. Such techniques are commonly known as sparse PCA [42], and can be adapted for use in LDA. This was realized first by Trendafilov and Jolliffe [44], who obtained sparse discriminant functions. The non-zero entries correspond to the variables that dominate the discrimination. This method cannot be applied directly to horizontal data, but it triggered active research in this direction, which we try to review here.
Horizontal data occur when the number of variables (p) is larger than the sample size (n). Such datasets are nowadays common in many applications. There are two main problems in using classical LDA on horizontal data. First, the within-group covariance matrix W is singular or nearly singular and, hence, it cannot be inverted. This is because of the presence of many variables which are not useful for discrimination. Second, computations are very difficult if not impossible, hence deterring the applicability of classical LDA to horizontal data.
The paper is organized as follows. The classic LDA is briefly revised in Sect. 2. Section 3 briefly summarizes the idea of sparse solutions and the approaches to achieve sparseness. Section 4 is central and is divided into several parts. Section 4.1 reviews several approaches to LDA of horizontal data that replace the singular within-group scatter matrix W by its main diagonal. Another alternative avoiding the singular W is presented in Sect. 4.2, where sparse LDA is based on minimization of the classification error. Section 4.3 lists techniques equivalent to LDA which do not need the inverse of W or T, such as optimal scoring and common principal components (CPC). Section 4.4 briefly reminds the reader of the application of multidimensional scaling in discrimination problems. Finally, a sparse pattern with each original variable contributing to only one discriminant function is discussed in Sect. 4.5. The last Sect. 5 briefly reports the performance of three methods for sparse LDA on several data sets.
Basic notations, definitions and assumptions of classical LDA
Consider the linear combinations XA, also called discriminant scores. This is a linear transformation of the original data X into another vector space. There is interest in finding a (p × s) transformation matrix A of the original data X such that the a priori groups are better separated in the dimensions of the transformed data XA than with respect to any of the original variables. The number of transformed dimensions s is typically much smaller than the original p. Thus the transformation also achieves dimension reduction. Fisher's LDA works by finding a transformation A which produces the "best" discrimination of the groups by simultaneous maximization of the between-groups variance and minimization of the within-groups variance of XA. Formally this is organized by maximizing

    aᵀBa / aᵀWa ,   (1)

where B and W are the between- and within-groups scatter matrices defined below, and G is the g × n group indicator matrix, i.e. G has 1/n_j at its (j, i) position if the ith observation (row of X) belongs to the jth group, and 0 otherwise. Then, the matrix of the group means is X̄ = GX.
In other words LDA depends on the between-groups sums-of-squares matrix, B, and the within-groups sums-of-squares matrix, W, of the original data.B and W are also called between-and within-groups scatter matrices respectively.
The procedure of finding A from B and W is sequential. Suppose a is the first column of A. One can show [14,27, §11.1] that a should be chosen so that (1) is maximized. The maximization of objective function (1) is equivalent to the following generalized eigenvalue problem:

    Ba = λWa .   (4)

Thus the maximum of objective function (1) is the largest eigenvalue of W⁻¹B and is achieved at the corresponding eigenvector a. Successive columns of A are also eigenvectors of W⁻¹B, and the corresponding values of objective function (1) are the corresponding eigenvalues.
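For vertical data with nonsingular W, this generalized eigenvalue problem can be solved directly with a standard symmetric eigensolver; a minimal sketch (the function name and its use of SciPy are illustrative, not part of the original treatment):

```python
import numpy as np
from scipy.linalg import eigh

def fisher_lda(B, W, s):
    """Solve Ba = lambda * Wa for the s leading eigenpairs.

    B, W: between- and within-groups scatter matrices (p x p);
    W must be nonsingular, i.e. this applies to vertical data only.
    """
    evals, evecs = eigh(B, W)              # generalized symmetric problem
    order = np.argsort(evals)[::-1][:s]    # largest eigenvalues first
    return evals[order], evecs[:, order]
```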
The rank of W⁻¹B is r ≤ min(p, g − 1), i.e. all the eigenvalues after the first r are 0s. The number r is called the dimension of the discriminant function representation. The number of useful dimensions for discriminating between groups, s, is smaller than r, and the transformation A is formed by the eigenvectors corresponding to the s largest eigenvalues ordered in decreasing order. Clearly the (p × s) transformation A determined by Fisher's LDA maximizes the discrimination among the groups and represents the transformed data in a lower s-dimensional space.
Note that Fisher's LDA problem (1) and (4) can be solved without forming B and W explicitly. For vertical data, the maximum of (1) can be found by the generalized SVD [19, §8.7.3]. For large data this method is rather expensive as it requires O((n + g)²p) operations. The method proposed by [12] seems a better alternative if one needs to avoid the calculation of the large B and W. Further related results can be found in [38].
The eigenvalue problem (4) can be rewritten in matrix terms as BA = WAD, where D is the (s × s) diagonal matrix of the s largest eigenvalues of W⁻¹B ordered in decreasing order. This is not a symmetric eigenvalue problem, and in general the columns of A are not orthogonal. However, the matrix AᵀWA is diagonal, i.e. the solution is orthogonal in the W-space.
Note also that an important assumption for a valid LDA is that the population within-group covariance matrices are equal. This can be checked by using the likelihood-ratio test [27, p. 370] to compare each within-group covariance matrix to the common one. If the null hypothesis is rejected in some groups, then the results from LDA are considered unreliable.
The common principal components (CPC) model has been introduced by Krzanowski [26] and Flury [15] to study discrimination problems with unequal group covariance matrices.
Interpretation and sparseness
It turns out that in modern applications the typical data format has more variables than observations. Such data are also commonly referred to as small-sample or horizontal data. In other words, horizontal data occur when the number of variables (p) is larger than the sample size (n). The following two datasets are examples of horizontal data.
1. Ovarian cancer data [9] are collected from women who have a high risk of ovarian cancer due to family or personal history of cancer. The objective is to distinguish ovarian cancer from non-cancer observations (women). The data contain 216 samples: 121 cancer samples and 95 normal samples. The number of variables is as many as 373,401, but only 4000 variables are considered in this study. 2. Rice data [29,36] have 100 variables and 62 observations. They have four groups (varieties) of rice with 7, 19, 9 and 27 observations in them.
The main problem with such data is that the within-groups scatter matrix is singular and Fisher's LDA (1) is not defined. Moreover, the number of variables is usually huge (e.g. tens of thousands), and thus it makes sense to look for methods that produce sparse discriminant functions, i.e. involving only a few of the original variables.
Broadly speaking, a vector/matrix is called sparse when it has very few non-zero entries. The number of non-zero entries is called the cardinality of the vector/matrix. There are two main ways to impose sparseness on a vector/matrix solution: by specifying a certain cardinality constraint on the solution, or by finding the solution subject to sparseness-inducing penalties. The most popular sparseness-inducing penalty is the Least Absolute Shrinkage and Selection Operator (LASSO), introduced by Tibshirani [39] for multiple regression problems. For a unit-length vector a (‖a‖₂ = 1), the LASSO has the form ‖a‖₁ = Σᵢ |aᵢ| ≤ τ, where τ is called the tuning parameter. By reducing τ, one forces the smaller entries of a to become exact zeros. Apparently, the sparsest a has only one non-zero entry equal to 1.
It is also possible to obtain a sparse solution by prescribing in advance a certain pattern of sparseness [40,47]. For example, one can be interested in finding a sparse matrix A having a single non-zero entry in each row, as considered in Sect. 4.5.
Another possible option is to employ vector/matrix majorization [31], which intuitively is expressed by the following example for unit-length vectors: (1/√3, 1/√3, 1/√3)ᵀ ≺ (0, 0, 1)ᵀ, i.e. the "smallest" vector has equal entries. One can use some procedure for generating a majorization [31, p. 128] in order to achieve sparseness. A benefit of such an approach is that sparseness can be achieved without tuning parameters. For example, the procedure to obtain sparse patterns by Trendafilov [41] is equivalent to what is known now as soft-thresholding. However, the threshold is found easily by the majorization construction, rather than by tuning different values. Such a pattern construction can be further related to the fit, the classification error, and/or other desired features of the solution.
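Soft-thresholding itself is a one-line operation; a minimal sketch for illustration (the threshold here is arbitrary, whereas in [41] it emerges from the majorization construction):

```python
import numpy as np

def soft_threshold(a, t):
    """Shrink entries of a towards zero by t; entries below t become exact zeros."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

a = np.array([0.9, -0.05, 0.02, -0.4])
print(soft_threshold(a, 0.1))   # -> [ 0.8 -0.   0.  -0.3]
```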
Sparse LDA with diagonal W
The straightforward idea of replacing the non-existing inverse of W by some kind of generalized inverse has many drawbacks, and thus is not satisfactory. For this reason, Witten and Tibshirani [49] adopted the idea proposed by Bickel and Levina [2] to circumvent this difficulty by replacing W with a diagonal matrix W_d containing its diagonal, i.e. W_d := I_p ⊙ W, with ⊙ the element-wise (Hadamard) product. Note that Dhillon et al. [10] were even more extreme and proposed doing LDA of high-dimensional data by simply taking W = I_p, i.e. PCA of B. Such an LDA version was adopted already by Trendafilov and Vines [45] to obtain sparse discriminant functions when W is singular.
Sparse LDA as a two-stage sparse PCA
Probably the simplest strategy can be based on the LDA approach proposed by Campbell and Reyment [7], where LDA is performed in two stages, each consisting of an eigenvalue decomposition (EVD) of a specific matrix. This approach was already applied by Krzanowski et al. [30] with quite reasonable success to LDA problems with singular W. When W_d is adopted, the original two-stage procedure simplifies as follows. At the first stage, the original data are transformed as Y = XW_d^(−1/2). Then, at the second stage, the between-groups scatter matrix B_Y of the transformed data Y is formed:

    B_Y = YᵀGᵀ(GGᵀ)⁻¹GY ,   (5)

and then some kind of sparse PCA on B_Y is applied. Let the resulting sparse components be collected in a p × min{p, g − 1} matrix C. Then, the sparse canonical variates are given by A = W_d^(−1/2)C. The sparseness achieved by C is inherited in A because W_d is diagonal. Note that the calculation of B_Y in (5) is not really needed. Following (2), the sparse PCA can be performed directly on (GGᵀ)^(−1/2)GY.
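A minimal sketch of this simplified two-stage procedure, assuming column-centered data; the final sparse PCA step is replaced by a plain SVD as a placeholder, into which any sparse PCA routine could be substituted:

```python
import numpy as np

def two_stage_sparse_lda(X, labels, s):
    """Two-stage LDA with diagonal within-groups scatter (sketch).

    Assumes the columns of X are already centered.
    """
    groups = np.unique(labels)
    means = np.vstack([X[labels == g].mean(axis=0) for g in groups])
    n_g = np.array([(labels == g).sum() for g in groups])

    # Diagonal of the within-groups scatter matrix W.
    w_diag = np.zeros(X.shape[1])
    for g, m in zip(groups, means):
        w_diag += ((X[labels == g] - m) ** 2).sum(axis=0)

    # Stage 1: transform with W_d^{-1/2} (a simple column scaling).
    t = 1.0 / np.sqrt(w_diag)
    M = (means * t) * np.sqrt(n_g)[:, None]   # M'M equals B_Y here

    # Stage 2: (sparse) PCA of B_Y via the SVD of M; swap in sparse PCA here.
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    C = Vt[:s].T

    # Sparse canonical variates A = W_d^{-1/2} C; sparsity of C carries over.
    return C * t[:, None]
```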
Krzanowski [28] proposed a generalization of this two-stage procedure for the case of unequal within-group scatter matrices. He adopted the CPC model for the within-group scatter matrices in each group. For horizontal data, this generalized procedure results in a slightly different way of calculating B_Y in (5), in which each group's contribution uses X_i, the data sub-matrix containing the observations of the ith group, and W_{i,d} = I_p ⊙ W_i, where W_i is the within-group scatter matrix of the ith group.
Function constrained LDA (FC-LDA)
By adopting the simplification W = W_d, the canonical variates can be written as a = W_d^(−1/2)b, where b is found as a solution of the standard Fisher's LDA problem (1) with W replaced by W_d. Note that b are in fact the so-called raw coefficients [27, p. 298]. As W_d is diagonal, a and b have the same sparseness. Then, the modified Fisher's LDA problem (6) to produce sparse raw coefficients is defined by adding the LASSO constraint ‖b‖₁ ≤ τ, giving problem (7). Thus, the problem (7) is in fact a function-constrained PCA problem [42]. For small data, such as those considered in the following examples, one can readily apply the dynamical system approach [43]. For this reason, one can employ some kind of smoothing of the ℓ₁ vector norm, e.g. replacing each |aᵢ| by a smooth approximation such as aᵢ tanh(γaᵢ) with some large γ > 0. Other smoothing options are considered elsewhere [22]. Let f denote the objective function from (7).
Then, the solution of (7) can be found as an initial value problem (IVP) for a matrix ODE of gradient-flow type, Ȧ = π(∇f(A)), where ∇f denotes the gradient of f with respect to the standard (Frobenius) matrix inner product and π is the projection onto the corresponding constraint set. The current ordinary differential equation (ODE) solvers [32] are not suitable for solving large optimization problems. They track the whole trajectory defined by the ODE, which is time-consuming and undesirable when only the asymptotic state is of interest [35]. Instead, one can employ numerical methods for optimization on matrix manifolds, in particular on the Stiefel manifold [1], and employ some existing software [3,48].
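As an illustration of what such manifold methods do, a minimal sketch of a single gradient step on the Stiefel manifold using a QR retraction (the step size and the ascent setting are assumptions for illustration):

```python
import numpy as np

def stiefel_gradient_step(A, grad, step=0.01):
    """One ascent step for f(A) subject to A'A = I (QR retraction)."""
    # Project the Euclidean gradient onto the tangent space at A.
    sym = (A.T @ grad + grad.T @ A) / 2.0
    xi = grad - A @ sym
    # Step along the tangent direction and retract to the manifold.
    Q, R = np.linalg.qr(A + step * xi)
    return Q * np.sign(np.diag(R))   # canonical sign choice for Q
```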
Replacing (7) by the relaxed problem (11) increases the speed considerably but usually increases the classification error.
In experiments with simulated and real data, solving (11) outperforms [49] in every case (probably due to the MM optimization method used there), and is comparable to [8].
Example 1 The data in the following examples are centered and normalized to variables of unit length. Iris data [14] have four variables and three groups with 50 observations each. First we solve the original Fisher's LDA (1). The effective number of discriminant functions for this problem is min(4, 3 − 1) = 2. The first two eigenvalues are 32.1919 and 0.2854 (32.4773 in total), and the raw coefficients are depicted in the first two columns of Table 1. The projection of the data onto the space spanned by the first two discriminant functions is given in the (1,1) panel of Fig. 1. It is well known that there are three misclassified points (52, 103 and 104) for this solution, i.e. 2 % misclassification. Then, we solve the original Fisher's LDA with W = W_d. The first two eigenvalues are 31.0969 and 0.3125 (31.4094 in total), and the raw coefficients are depicted in the second two columns of Table 1. There are six misclassified points (9, 31, 50, 52, 103 and 119) for this solution, i.e. 4 % misclassification. The discriminant plot of the data is given in the (1,2) panel of Fig. 1. Next, we solve (7) with τ = 1.2. The minimum of the objective function in (7) is 1.0680. The first two eigenvalues 31.0969 and 0.3125 are approximated by 30.7763 and 0.4407, respectively. The sparse raw coefficients are depicted in the third two columns of Table 1. There are five misclassified points (9, 31, 50, 52, 103) for this solution, i.e. 3.3 % misclassification. The discriminant plot of the data is given in the (2,1) panel of Fig. 1. Finally, we solve (7) with τ = 0.5; see the note to Table 1 for this solution.

Example 2 Rice data [29,36] have 100 variables (wavelengths) and four groups of rice with 7, 19, 9 and 27 observations in them. The effective number of discriminant functions for this problem is min(100, 4 − 1) = 3. The first three eigenvalues are 25.3009, 1.6737 and 0.0077, which indicates that the discriminating power of the second and third discriminant functions is not high. There are 37 misclassified points for this solution, i.e. 59.68 % misclassification. This solution is worse than the results obtained by [29] employing PCA as a preprocessing step (to reduce the number of variables). The projection of the data onto the space spanned by the first two discriminant functions is given in the (1,1) panel of Fig. 2. The panel (1,2) contains the raw coefficients of these discriminant functions. Next, we solve (7) with τ = 0.5. The minimum of the objective function in (7) is 1.1896. The first three eigenvalues are approximated by 23.6843, 0.0874 and 0.0803, respectively. The discriminant plot of the data is given in the (2,1) panel of Fig. 2. There are 40 misclassified points for this solution, i.e. 64.52 % misclassification. The panel (2,2) contains the raw coefficients of these discriminant functions, and the first ones are not sparse at all. Finally, we solve (7) with τ = 0.01. The minimum of the objective function in (7) is 1.0000. The first three eigenvalues are approximated by 0.4260, 0.1437 and 0.2418, respectively. The discriminant plot of the data is given in the (3,1) panel of Fig. 2. There are again 37 misclassified points for this solution, i.e. 59.68 % misclassification. The panel (3,2) contains the sparse raw coefficients of these discriminant functions. It is really surprising to achieve such discrimination by two variables only! They are probably too sparse and one can look for a better τ.
Sparse LDA based on minimization of the classification error
Fan et al. [13] argued that ignoring the covariances (the off-diagonal entries of W) as suggested by Bickel and Levina [2] may not be a good idea. In order to avoid redefining Fisher's LDA for singular W, Fan et al. [13] proposed working with (minimizing) the classification error instead of the Fisher's LDA ratio (1). The method is called for short ROAD (from Regularized Optimal Affine Discriminant) and is developed for two groups. Let d be the difference between the two group means, and minimize aᵀTa subject to dᵀa = 1. Then, the ROAD problem is to find such a minimizer a which is moreover sparse. Thus, the ROAD minimizer a is sought subject to a LASSO-type constraint introduced as a penalty term, i.e.:

    min_{dᵀa=1} aᵀTa + τ‖a‖₁ .   (12)

Further on, Fan et al. [13] replace the affine-constrained problem (12) by a quadratic penalty term, which results in the following unconstrained problem:

    min_a aᵀTa + γ(dᵀa − 1)² + τ‖a‖₁ .   (13)

Let us forget for a while about the sparseness of a, and solve (13) for the Iris data with τ = 0. Then, the first group, Iris setosa, is perfectly separated from the cloud composed of the remaining two groups of Iris versicolor and Iris virginica. The difference between the means of these two groups is d = (−0.1243, 0.1045, −0.1598, −0.1537). The discriminant is a = (0.6623, 1.4585, −5.1101, −0.7362). One can check that aᵀd = 1. However, this solution of (13) is not convenient for interpretation because one cannot assess the relative sizes of the elements of a. Another related problem is that the LASSO constraint may not work well with vectors a of arbitrary length.
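For the dense case (τ = 0), the affine-constrained problem has the closed-form solution a = T⁻¹d/(dᵀT⁻¹d); a minimal sketch for two groups (applicable only when T is nonsingular):

```python
import numpy as np

def road_dense(X, y):
    """Minimizer of a'Ta subject to d'a = 1 (two groups, tau = 0)."""
    d = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    T = np.cov(X, rowvar=False)      # total scatter; nonsingular case only
    a = np.linalg.solve(T, d)        # a is proportional to T^{-1} d
    return a / (d @ a)               # rescale so that d'a = 1
```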
Thus, it seems reasonable to consider a constrained version of problem (13) subject to aᵀa = 1. A solution of the related "dense" problem (with τ = 0) is available from Gander et al. [17]. One can consider "sparsifying" their solution to produce unit-length ROAD discriminants. Other works in this direction exploit the fact that the classification error depends on T⁻¹ and d only through their product T⁻¹d [5,23]. Like the ROAD approach, they are also designed for discrimination into two groups. This is helpful for obtaining asymptotic results, however not quite helpful for complicated applications involving several groups.
Finally, the function-constrained reformulation of ROAD is defined analogously to (7), leading to a standard eigenvalue problem involving T and d.
Indirect methods for discriminant analysis
The main purpose of this class of approaches is to avoid the explicit use of the inverses T⁻¹ and/or W⁻¹, which do not exist for singular T and W.
Equivalent definitions of discriminant analysis
Clemmensen et al. [8] make use of the LDA reformulation as optimal scoring, discussed in detail by Hastie et al. [24]. The optimal scoring problem does not require W⁻¹, and thus is directly applicable to horizontal data.
Discriminant analysis with CPC
Common principal components (CPC) were developed by Flury [15] and can be used to discriminate several groups of observations with different covariance matrices in each group. Zou [50] already considered such an option briefly. In a simulation study, Flury et al. [16] demonstrated that even a simpler CPC model with proportional covariance matrices [15, Ch 5] can provide quite competitive discrimination compared to other, more complicated methods.
Sparse LDA based on metric scaling
Gower [20] showed that metric scaling of the matrix of Mahalanobis distances between all pairs of groups will recover the canonical variate configuration of group means. However, the Mahalanobis distances use the pooled within-group scatter matrix, and thus this approach is not applicable to horizontal data. It was mentioned before that Dhillon et al. [10] avoided this problem by simply doing PCA of the between-group scatter matrix B to obtain LDA results. Trendafilov and Vines [45] considered a sparse version of this LDA procedure.
The above approach can still be applied if the equality of the population covariance matrices of the groups cannot be assumed.A particularly elegant solution, employing Hellinger distances, can be obtained if the CPC hypothesis is appropriate for the different covariance matrices [28].
Another unexplored option would be to consider linear discrimination employing withinand between-group distance matrices [21], which have sizes n × n.
Sparse LDA without sparseness-inducing penalties
In this section we consider a new procedure for sparse LDA. The sparseness of the discriminant functions A will be achieved without employing sparseness-inducing penalties. Instead, we will look for a solution A with a specific pattern of sparseness, with only one nonzero entry in each row of A. The method is inspired by the recent works of Timmerman et al. [40] and Vichi and Saporta [47].
The following model represents the original data X by only the group means projected onto the reduced space formed by the orthonormal discriminant functions A. The model can be formally written as

    X ≈ UX̄AAᵀ ,   (16)

where X̄ is the g × p matrix of group means and U is the n × g indicator matrix of the groups, such that G = (UᵀU)⁻¹Uᵀ. In these notations, one has X̄ = GX = (UᵀU)⁻¹UᵀX, and the model (16) can be rewritten as

    X ≈ PXAAᵀ ,   (17)

where one notes that P = U(UᵀU)⁻¹Uᵀ is a projector. The p × r orthonormal matrix A contains the orthonormal "raw coefficients" of the problem, and r is the number of required discriminant functions. We want to find sparse raw coefficients A but without relying on sparseness-inducing constraints as in the previous sections. In general, this is an unsolvable problem, but it can be easily tackled if we restrict ourselves to a particular pattern of sparseness: each row of A should possess a single nonzero entry. Thus, the total number of nonzero entries in A will be p. To construct A with such a pattern, we introduce a p × r binary (of 0's and 1's) membership matrix V, indicating which variables have nonzero loadings on each particular discriminant function, i.e. in each column of A. Then, A will be sought in the form of a product A = Diag(b)V, where Diag(b) is a diagonal matrix formed by the vector b. The ith element of b gives the nonzero value in the ith row of A. In other words, V is responsible for the locations of the nonzero entries in A, while b gives their values. Apparently, the choice of V and b will affect the fit of the model (17). Thus, we need to solve the following least-squares problem:

    min_{b, V} ‖X − PX Diag(b)V (Diag(b)V)ᵀ‖² ,   (18)

which will be called for short SDP (Sparse Discriminative Projection).
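The parameterization A = Diag(b)V is easy to illustrate; a minimal sketch building such a single-nonzero-per-row matrix and verifying its orthonormality (the pattern V and values b below are arbitrary):

```python
import numpy as np

p, r = 5, 2
# Membership matrix V: one 1 per row, marking which discriminant
# function each variable loads on.
V = np.zeros((p, r))
V[np.arange(p), [0, 1, 0, 1, 1]] = 1.0

b = np.array([0.3, 0.5, 0.2, 0.4, 0.6])   # arbitrary nonzero values
A = np.diag(b) @ V                        # one nonzero entry per row

# Columns have disjoint supports, hence are orthogonal by construction;
# normalizing them yields A'A = I.
A /= np.linalg.norm(A, axis=0)
print(np.round(A.T @ A, 8))               # identity matrix
```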
Example 3 We apply SDP to the Iris data. Two solutions (matrices A) are depicted in the last four columns of Table 1; the note to that table discusses them, together with the SDP results for the Rice data. The discriminant plot of the Rice data is given in the (4,1) panel of Fig. 2. The panel (4,2) contains the raw coefficients of these discriminant functions: the first one has a single nonzero entry, while the second is not sparse at all. The separation achieved by these discriminant functions is quite satisfying, but not the sparseness.
One can develop a better SDP method if the classification error is minimized instead of fitting the data matrix X or its projection onto the subspace spanned by the discriminant functions.Nevertheless, the main weakness of SDP is that for large p the SDP solutions are not sparse enough, and thus, not attractive for application.
Comparison of existing methods
We consider three sparse discriminant analysis methods for comparison using five datasets. The three methods are:
• Function-constrained linear discriminant analysis (FC-LDA), which was introduced in Sect. 4.1;
• Sparse discriminant analysis (SDA) of Clemmensen et al. [8];
• The penalized LDA of Witten and Tibshirani [49].
In Table 2 we summarize the results from numerical experiments with the three methods referred to above. The solutions produced by FC-LDA and SDA have about 5 % non-zero entries. From this table we see that FC-LDA works as well as SDA. The reason that FC-LDA does not show superiority over SDA may be that FC-LDA uses a diagonal within-group covariance matrix. The results in [49] have a higher percentage of non-zero entries, so they are not quite comparable with the other two.
Connection with Gini's transvariation
This paper mainly reviews sparse LDA methods for horizontal data. The main objective of LDA is to find the linear combination of p variables which maximizes group separation. In other words, it is obtained by maximizing the ratio of the between- to the within-group covariance matrices [14].
Alternatively, the linear combinations can also be obtained in terms of Gini's transvariation [33]. Gini [18] defined that two groups are said to transvariate on a variable X if the sign of the difference of any two values of X from different groups is opposite to the sign of their corresponding mean difference. Any difference satisfying this condition is called a transvariation [46]. Montanari [33] has shown that transvariation measures can be used to discriminate between groups. Moreover, Caló [6] has used the transvariation method to measure group separability. Other authors, such as [34] and Bragoli et al. [4], have also applied transvariation for group separation and classification.
These references show that LDA is related to Gini's transvariation, since the linear discriminant function can be derived as the linear combination which minimizes the transvariation probability or area. Therefore, our method, FC-LDA, is also related to Gini's transvariation. It is also possible to impose a sparsity penalty on the transvariation method so as to find only a few important variables in the case of horizontal data. The nature of the transvariation formulation will most likely require non-parametric methods. Nowadays, Bayesian methods with sparseness-inducing priors are widely used for sparse PCA and factor analysis. This could be a new contribution of Gini's transvariation to LDA and, in general, to discrimination and classification problems.
Table 1
Different raw coefficients for Fisher's Iris data. The minimum of the objective function in (7) is 1.0579. The first two eigenvalues 31.0969 and 0.3125 are approximated by 30.502 and 0.616, respectively. The sparse raw coefficients are depicted in the last two columns of Table 1. The same five points are misclassified in this solution. The discriminant plot of the data is given in the (2,2) panel of Fig. 1. It seems that the LDA (1) with W = W_d gives the worst solution, while the sparse LDA with τ = 0.5 is most satisfying both in terms of fit and interpretability.
Table 1 gives two columns for each SDP solution. The first pair of columns is the solution A for which the SDP objective function (17) is minimal (1.0563) among several random starts. However, this solution produces 11 misclassified observations, which is a 7.33 % misclassification rate. This solution looks less satisfying compared to the previous ones reported in Example 1. The last two columns of Table 1 give another SDP solution for which the objective function (17) is 1.1121, but the misclassification is only 4 %, with six misclassified observations (9, 31, 50, 52, 103 and 104). The quality of this solution resembles the (dense) LDA solution with W = W_d from Example 1. It is clear that the SDP performance is not satisfying for this data set. Now, we apply SDP to the Rice data. The best solution (among several random starts) produces 26 misclassified observations, which is only a 41.94 % misclassification rate, much better than what was achieved by the other approaches, and probably needs further checking.
Table 2
Results from three methods for sparse LDA applied to several data sets | 2019-04-18T13:04:06.948Z | 2015-07-22T00:00:00.000 | {
"year": 2016,
"sha1": "a28bd43df893b827422a95b450eda539e08a807b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40300-016-0093-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "eae860e2c48ae4580c713388352d369b2350e42a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257251045 | pes2o/s2orc | v3-fos-license | 3D-Printing of Silk Nanofibrils Reinforced Alginate for Soft Tissue Engineering
The main challenge of extrusion 3D bioprinting is the development of bioinks with the desired rheological and mechanical performance and biocompatibility to create complex and patient-specific scaffolds in a repeatable and accurate manner. This study aims to introduce non-synthetic bioinks based on alginate (Alg) incorporated with various concentrations of silk nanofibrils (SNF, 1, 2, and 3 wt.%) and optimize their properties for soft tissue engineering. Alg-SNF inks demonstrated a high degree of shear-thinning with reversible stress softening behavior contributing to extrusion in pre-designed shapes. In addition, our results confirmed the good interaction between SNFs and alginate matrix resulted in significantly improved mechanical and biological characteristics and controlled degradation rate. Noticeably, the addition of 2 wt.% SNF improved the compressive strength (2.2 times), tensile strength (5 times), and elastic modulus (3 times) of alginate. In addition, reinforcing 3D-printed alginate with 2 wt.% SNF resulted in increased cell viability (1.5 times) and proliferation (5.6 times) after 5 days of culturing. In summary, our study highlights the favorable rheological and mechanical performances, degradation rate, swelling, and biocompatibility of Alg-2SNF ink containing 2 wt.% SNF for extrusion-based bioprinting.
Introduction
Tissue engineering (TE) has been applied to develop tissue and organ substitutes for clinical transplantation and to restore organ function [1][2][3]. In this strategy, the scaffold, made of biocompatible materials supporting cell functions, is the crucial component. Currently, three-dimensional (3D) bioprinting is known as an emerging strategy for creating scaffolds that bridge the gap between artificially engineered structures and native tissues [4,5]. The unique advantage of this strategy is the fabrication of 3D functional and predesigned constructs by precise layer-by-layer deposition of various types of appropriate materials, called inks, and living cells, to regenerate target tissues. Moreover, the ability of 3D bioprinting to deposit various cells in an orderly manner, mimicking the heterogeneous architecture of native tissues, is another advantage of this strategy [6].
The preparation of inks is directly related to the working mode of the 3D printing technique, including droplet-based, extrusion-based, laser-induced forward transfer, and stereolithography techniques. Extrusion-based bioprinting is one of the most widely explored techniques; it extrudes continuous filaments to form 3D scaffolds [7]. It shows promising properties, including versatility, multiple modes of solidification, and the ability to print complex structures [6]. In this system, inks should be readily extrudable to avoid obstruction during the process. In addition, they should generate 3D structures with the desired profile fidelity and mechanical solidity [8]. Accordingly, inks often need further improvement in printability, mechanical support, and bioactivity [9]. In this regard, applying inks with shear-thinning and thixotropic characteristics to decrease the force needed to drive ink extrusion is promising. Shear-thinning permits inks to be deposited with lower extrusion forces, whereas thixotropy could preserve the shape of 3D-printed constructs before further crosslinking [10].
Various types of natural polymers have been applied in extrusion-based 3D printing approaches thanks to their promising biocompatibility and adjustable mechanical robustness. Among the various hydrogels, alginate, a polysaccharide derived from different kinds of brown seaweed, is considered a suitable ink in many bioprinting techniques, given its distinctive properties, including biocompatibility, shear-thinning, biodegradability, ease of processing, and fast gelation [11]. However, alginate presents challenges of mechanical instability and high fluidity, which is unfavorable for good shape fidelity [12]. Moreover, 3D-printed alginate hydrogels cannot provide satisfactory mechanical performance and often show high volume shrinkage after ionic crosslinking. These issues could be overcome by various crosslinking strategies, surface coating, and composite formation [12,13]. The formation of hybrid inks based on alginate is one of the main strategies and has been widely developed [12]. Aarstad et al. [14] studied the effect of mechanically fibrillated and oxidized cellulose nanofibrils (CNFs) on the mechanical and rheological performance of 3D-printed alginate. It was concluded that alginate-CNF hydrogels revealed a greater Young's modulus and weaker syneresis than pure alginate, making them more suitable for tissue engineering. Markstedt et al. [15] also reported a bioink of alginate-CNFs and found that the addition of CNFs induced shear-thinning properties in the alginate ink, leading to high shape fidelity and increased printing resolution.
Silk fibroin (SF), extracted from silkworms, with unique properties of low immunogenicity, tunable mechanical performance, biodegradability, and biocompatibility [16], could also be considered for the engineering of bone [17], cartilage [18], and arteries [19], and is used in both natural fiber and regenerated forms [18,20]. SF could also be simply administered into hydrogels, offering a chance to be applied in bioinks for bioprinting [21]. Gelation of SF is prompted via the structural transition from a random coil to a β-sheet [22]. However, poor crosslinking and variation of viscosity are the barriers hindering the printing of pure SF hydrogel. To overcome these issues, 3D-printed alginate-silk structures have been reported in previous studies. For instance, Joshi et al. [23] reported an alginate-gelatin bioink loaded with silk fibers for osteochondral grafts. In that study, alginate was used as a modifier to create a printable ink. Aharonov et al. [24] also presented laminates constructed from long natural silk and fibroin fibers embedded in an alginate hydrogel matrix. In the mentioned study, different fiber volume fractions were characterized to tailor-design their mechanical behavior. To improve the mechanical properties of SF hydrogels, such as tensile strength and transparency, silk nanofibrils (SNFs) have also been proposed. SNFs, a natural protein biomaterial platform with diameters of 20-100 nm, are the basic mesoscopic structural units of the hierarchical structure of silk materials [25]. SNFs not only have the benefits of availability, low price, biocompatibility, and degradability, but they can also significantly promote the mechanical toughness and biological functions of silk components [26][27][28]. These properties make SNFs attractive for various tissue engineering applications [29,30]. Moreover, contrary to silk, SNF may have a significant effect on the viscoelastic responses of inks. In order to develop 3D-printed structures with exact and precise shapes, accurately controlled pore structures, distinctive mechanical characteristics, and support for cells, the formulation of inks with viscoelastic properties is crucial [30]. Similar to other nanofibrous structures, SNF can be utilized as a rheological modifier for inks, supporting the viscoelastic response essential for 3D printing. SNF exhibits shear-thinning behavior, which facilitates the extrusion of inks, while at the same time maintaining self-support and high shape fidelity [21]. In addition, the incorporation of SNF into inks provides structural similarity of 3D-printed scaffolds to ECMs possessing filamentous architecture, promoting cell growth [31]. Therefore, the use of SNF instead of silk with micro-scale fibers in the alginate matrix might be promising to improve the viscoelastic responses, the interaction of the matrix and the nanofibrils, and the cell responses. Accordingly, this study aims to combine the fast gelation properties of alginate with the mechanical properties of SNFs to develop a hybrid ink containing alginate-SNFs for soft tissue engineering. It is hypothesized that SF in nanofibril morphology and the large matrix-nanofibril interface have critical effects on the mechanical and shear-thinning properties (printability) and cell responses of 3D-printed alginate.
Synthesis of Silk Nanofibrils (SNFs)
The SNF extraction process is schematically described in Figure 1A. Silk was first degummed using a 0.02 M sodium hydrogen carbonate solution to produce pure fibroin by removing the sericin covering silkworm cocoons, according to previous studies [32]. Consequently, SNF was extracted from SF by dissolving it in 9.3 M LiBr solution for 2 h, according to previous protocols with minor modifications [33]. To eliminate LiBr, the aqueous SNF solution was dialyzed against DDW for 4 days while DDW was continuously stirred and replaced with fresh DDW every day. Before ink formation, the SNF solution was subjected to ultrasound treatment for 20 min using probe sonication equipment (Fisher Sonic Dismembrator, USA) to enhance random coil to β-sheet transition [34].
Fabrication of 3D-Printed Alginate-SNF Scaffold
3D-printed hydrogels based on alginate (5 wt.%) containing various concentrations of SNFs (0, 1, 2, and 3 wt.%) were prepared using a 3D printer. SNFs were first dispersed in DDW using ultrasonic equipment (WUDD10H, 770 W, Korea) for 30 s and then mixed for 2 h at room temperature. It is worth mentioning that the concentrations of the alginate and SNF solutions were optimized by trial and error, guided by the concentration of each component in previous studies [35,36]. The preliminary trials aimed to formulate inks having suitable viscoelastic properties so that they flow through the nozzle and retain their structure after being deposited. For instance, clogging was experienced when the concentrations of SNF and alginate exceeded 3 wt.% and 5 wt.%, respectively, in 3D printing.
After mixing the two solutions overnight to get a homogeneous solution, the inks were printed using a modified FDM 3D printer with a 300-µm nozzle. The scaffold models were designed with Catia software (V.5) and then sliced with Ultimaker Cura 3 to get G-code output usable on the printer. Printing parameters were set to a speed of 3-5 mm·s⁻¹, a flow rate of 100-500%, and a dosing distance of 0.05-0.07 mm. The 3D-printed hydrogels were cross-linked using 90 mM CaCl₂ sprayed during printing (Figure 1B). The interaction between the matrix and the additive is also schematically illustrated in Figure 1B. According to the concentration of SNF (0, 1, 2, and 3 wt.%), the samples were named Alg-0SNF, Alg-1SNF, Alg-2SNF, and Alg-3SNF, respectively.
Characterization of Alginate-SNF Hybrid Hydrogel
X-ray diffraction (XRD, X'Pert Pro X-ray diffractometer, Phillips, Germany) and Fourier transform infrared spectroscopy (Tensor27, Bruker, Germany), operating in the wavenumber range of 4000 cm⁻¹-400 cm⁻¹, were used to characterize alginate, the hybrid hydrogels, and SNFs. The β-sheet content of degummed SNFs in the nanofibrillar state and after the 3D printing process was estimated from the FTIR spectra using Equation (1) [37], in which A₁₅₁₅ and A₁₆₃₀ represent the areas under the bands at 1515 and 1630 cm⁻¹, related to amide II and amide I in the structure of silk, respectively. Also, scanning electron microscopy (SEM, Philips, XL30, Eindhoven, The Netherlands) was applied to examine the microstructure of the 3D-printed hydrogels. Before imaging, the samples were freeze-dried for 24 h after being submerged in liquid nitrogen and were then gold-coated using a Bal-Tec SCD 050 sample sputter coater (Bal-Tec AG, Switzerland). Furthermore, the pore size distribution of the 3D-printed hydrogels (n = 10) was determined using ImageJ software and SEM images. The mechanical characteristics of the Alg-SNF hydrogels under compressive and tensile conditions were evaluated using a tensile tester (Hounsfield H25KS, United Kingdom) with a load cell capacity of 500 N. For the compressive testing, hydrogels (n = 3) with a diameter × height of 9.75 × 17.5 (mm × mm) were fabricated and then cross-linked using a 90 mM CaCl₂ solution. The samples were compressed at a strain rate of 2 mm·min⁻¹, and the compressive strength (at strain = 70%) and modulus were calculated from the stress-strain curves. To investigate the tensile behavior of the 3D-printed hydrogels, specimens were printed using ISO 527-2/1B/2 parameters and a gauge length of 40 mm. After crosslinking using a 90 mM CaCl₂ solution, the samples were placed on a piece of tape in the tension grips of the tensile tester and exposed to tensile strain at a constant rate of 2 mm/min. The elastic modulus was determined from the slope of the stress-strain curves in the linear region. Furthermore, a rheometer (Anton Paar GmbH, Graz, Austria) was used to measure the viscoelastic characteristics of the pre-polymers to establish the printability of the ink. The test temperature was set at 25 °C.
To investigate the effects of SNF on the rheological properties and printability, rheological testing was performed using an MCR 502 rheometer (Anton Paar, Graz, Austria) equipped with a 25 mm parallel-plate geometry at a gap of 1 mm. First, a flow test was performed to assess the viscosity of the polymer solution, with the shear rate swept over 0.01-100 s⁻¹ at 25 °C. The power-law equation (Equation (2)) was also employed to study the shear viscosity of the ink (η) as a function of shear rate (γ) [38]:

    η = Kγ^(n−1) ,   (2)

where K and n define the consistency index and flow index, respectively. The exponent n was used to determine the flow properties of the inks: n < 1 implies shear-thinning properties, n > 1 proves shear-thickening ability, and n = 1 shows the Newtonian flow characteristic. An oscillatory frequency sweep was also performed at 0.01-100 Hz and 25 °C with the strain kept constant. From the linear viscoelastic region (LVR), a strain of 0.1-1% was selected for the oscillation frequency evaluation conducted over a frequency range of 10⁻²-10² Hz. The effect of SNFs on the dimensional changes of the scaffolds after cross-linking was examined by immersing the scaffolds in CaCl₂(aq) for 10 min and measuring their dimensions before and after crosslinking [39]. Swelling and degradation tests were also performed to investigate the role of SNF concentration in the physiological stability of the 3D Alg-SNF scaffolds. To measure the swelling ratio, the samples (n = 3) were freeze-dried, weighed (W₁), and then immersed in phosphate buffer solution (PBS, pH = 7.4, 37 °C) for 2 h. The hydrogels were weighed (W₂) after being wiped off, and the swelling ratio was estimated using Equation (3) [40]:

    Swelling ratio (%) = (W₂ − W₁)/W₁ × 100 .   (3)

In addition, to investigate the degradation properties, the scaffolds (n = 3) were freeze-dried and weighed (W_I). Then, the samples were submerged in a PBS solution with a pH of 7.4 for 14 days. At each time point (1, 3, 5, 7, and 14 days), the weight of the freeze-dried samples was recorded (W_F), and the degradation rate was calculated based on Equation (4) [41]:

    Degradation (%) = (W_I − W_F)/W_I × 100 .   (4)
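As an illustration of Equation (2), the flow index n and consistency index K can be recovered from a measured flow curve by a straight-line fit in log-log coordinates; the viscosity data below are made up for the sketch:

```python
import numpy as np

# Hypothetical flow-curve data: shear rate (1/s) and apparent viscosity (Pa·s).
gamma = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
eta = np.array([8200.0, 950.0, 110.0, 18.0, 2.8])

# Power law: eta = K * gamma**(n - 1)
# => log10(eta) = log10(K) + (n - 1) * log10(gamma), a straight line.
slope, intercept = np.polyfit(np.log10(gamma), np.log10(eta), 1)
n = slope + 1
K = 10 ** intercept
print(f"flow index n = {n:.2f} (n < 1 means shear thinning), K = {K:.0f}")
```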
Cell Culture Investigations
Mouse L929 fibroblast cells (received from the Royan Institute, Isfahan, Iran) were used to study the cellular behavior of the 3D-printed Alg-SNF scaffolds. The samples, with dimensions of 1 × 1 cm², were sterilized under UV light for 20 min. Then, the cells were seeded on them and on tissue culture plastic (TCP, as control) at a density of 10,000 cells/sample, and incubated in Dulbecco's Modified Eagle's Medium (DMEM, Sigma-Aldrich, St. Louis, USA) enriched with 10% fetal bovine serum (FBS, Bioidea, Tehran, Iran), 1% gentamicin (Sigma-Aldrich, Taufkirchen, Germany), and 1% (v/v) GlutaMax (Bioidea, Tehran, Iran) for 5 days. After the 1st and 5th days, the calcein-AM/ethidium homodimer (EthD-III) live/dead test (Biotium, UK) was utilized to determine the viability of the cells. The cell-cultured samples were rinsed, and then 100 µL of a live/dead solution containing 2 µM calcein AM and 4 µM ethidium homodimer was added to cover the samples. After 1 h of incubation (n = 3) at 37 °C, the samples were imaged using an inverted fluorescence microscope (Nikon TE2000-U, Japan). Lastly, the ImageJ program was used to calculate the cell viability by dividing the number of living cells (green stains) by the whole cell number (green + red stains).
An MTT test was also done, based on the manufacturer's protocol (Sigma), to examine relative cell growth. The culture medium was removed at the specified time points (1, 3, and 5 days) and replaced with MTT solution (5 mg·mL⁻¹). After 3 h of incubation at 37 °C, the formed formazan crystals were dissolved in DMSO (Merck, Darmstadt, Germany), and the optical density (OD) of each solution was measured against DMSO (blank) at a wavelength of 490 nm using an ELISA reader (Biotek Instruments, China). The relative cell viability (%) was computed using Equation (5) [42]:
Relative cell viability (%) = (A_Sample − A_b)/(A_c − A_b) × 100 ,   (5)
where A_Sample, A_b, and A_c represent the absorbance of the sample, the blank (DMSO), and the control (TCP), respectively. DAPI/phalloidin staining was performed to investigate the role of SNF in the cytoskeletal structure (F-actin) of the fibroblasts. At the specific time points, the cell-seeded samples were fixed using a 4% paraformaldehyde (Sigma-Aldrich, St. Louis, USA) solution for 20 min. After rinsing twice, the cells were permeabilized in 0.1% Triton X-100 (Sigma-Aldrich, St. Louis, USA) for 5 min. The cells' actin filaments were stained using a 1:40 dilution of rhodamine phalloidin (Cytoskeleton Inc., Denver, USA) solution for 20 min, and then the cell nuclei were stained using a 1:1000 dilution of 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI, Sigma-Aldrich, St. Louis, MO, USA) in PBS. Finally, a fluorescence microscope was used to examine the stained samples. Moreover, the cell area of each sample (n = 3) was measured in order to quantify cell density.
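Equation (5) above is a simple blank-corrected ratio; a minimal sketch with hypothetical optical density readings:

```python
def relative_viability(a_sample, a_blank, a_control):
    """Relative cell viability (%) per Equation (5): blank-corrected OD ratio."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

# Hypothetical 490 nm absorbance readings for one sample, the DMSO blank,
# and the TCP control.
print(relative_viability(a_sample=0.82, a_blank=0.05, a_control=0.60))
```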
Statistical Analysis
A one-way ANOVA was used for the statistical analysis of the results. Significant differences between groups were reported according to the Tukey-Kramer post hoc test using GraphPad Prism software (V.8). p-values < 0.05 were treated as statistically significant.
Chemical Characterization of 3D-Printed Alg-SNF Scaffolds
To create a shear-thinning ink and a robust 3D-printed hydrogel for soft tissue engineering, we incorporated SNFs into alginate hydrogel. SNFs extracted from silkworm cocoons were applied to mimic the fibrous proteins of the native extracellular matrix [43]. According to Figure 1C, silk fibroin possesses complex hierarchical structures that combine crystalline and amorphous phases, providing beneficial characteristics [44]. SNFs, as the fundamental building blocks of natural silk fibers, not only have the advantages of their natural origin but may also increase the printability of the ink owing to their higher aspect ratio and more functional groups. The SEM image in Figure 1A demonstrated the formation of SNFs from silk fibroin with a wide size distribution ranging from nanometer to micrometer sizes. The advantages of our process are its simplicity and the fast nanofibril formation using an aqueous procedure, avoiding the use of any toxic material. The FTIR spectrum of SNF in Figure 1D also confirmed the successful fibrillation of silk fibroin. The spectrum of SNFs consisted of amide I (C=O stretching) and amide II (N-H) bands at 1630 cm⁻¹ and 1515 cm⁻¹, respectively, confirming the fibroin structure [34]. Three absorption bands were detected at 820 cm⁻¹, 890 cm⁻¹, and 942 cm⁻¹ in the region of primary aliphatic amines (-NH₂) in the FTIR spectrum of SNF. The FTIR spectrum of alginate consisted of the distinct bands of COOH (1420 cm⁻¹), O-H stretching (1034 cm⁻¹), and the asymmetric and symmetric stretching of carboxylate -COO⁻ at 1610 cm⁻¹ and 1408 cm⁻¹ [45]. After the formation of the 3D-printed Alg-SNF hydrogel, the distinct bands of both alginate and SNF were detected, including the band at 1416 cm⁻¹ attributed to COOH. Moreover, the intensity of the characteristic band of SNF at 1630 cm⁻¹ related to C=O was enhanced, and the characteristic carboxyl band of alginate shifted from 1420 cm⁻¹ to 1416 cm⁻¹, confirming the interaction between SNF's amide groups and alginate's carboxyl groups through intermolecular hydrogen bonds, without posing any alteration to the structure [30,46,47]. It has been reported that physical interactions such as hydrogen bonding play significant roles in 3D printing, including endowing the materials with shear responsiveness, enhancing their processability, improving interlayer adhesion and mechanical strength, modulating the viscosity, and providing constructs with self-healing and shape memory properties [48]. Inspired by these studies, the hydrogen bonding between the N-H group of SNF and the COOH group of the alginate could lead to enhanced shape fidelity. In addition, according to Equation (1), the β-sheet content of SNF was 48 ± 1%, which is comparable with previous studies [49][50][51]. For instance, Farasatkia et al. [48] found that the β-sheet content of SNF synthesized using the acid-salt method was 53 ± 2%. It could be concluded that the synthesis method and the use of the acid-salt solvent instead of LiBr did not have a significant effect on the β-sheet content. However, the β-sheet content decreased to 33 ± 2% after hybrid hydrogel formation, indicating that the alginate could also slightly promote random coil formation. Similarly, Xue et al. [51] found that the β-sheet content of SNF decreased with an increase in magnetic content, while the random coil content improved.
The XRD pattern of Alg-SNF, compared to those of SNF and Alg (Figure 1E), also confirmed the presence of both components after the 3D printing process. The XRD pattern of alginate consisted of the characteristic peaks of alginate [52]. The XRD pattern of SNF consisted of a peak at 2θ = 22°, similarly reported in previous studies, which could be attributed to the fibroin structure [53]. After the formation of hybrid Alg-SNF, a slight shift to higher angles was detected, which might be related to the effect of the cross-linking process and the interaction between the matrix and the additive. The alginate matrix contains -COO⁻ groups providing hydrogen bonding. This forced the chains to come closer, leading to a decreased inter-planar distance and a change in the peak position [54,55].
The interaction between the alginate matrix and SNFs also controlled the size stability of the hydrogels. According to Figure 1F, the ionic crosslinking using Ca²⁺ resulted in dimensional changes in the Alg hydrogel. The incorporation of SNF changed the size stability of the hydrogel, depending on the SNF content. Noticeably, while the dimension change of the pure alginate hydrogel was 48.8 ± 0.6%, it was reduced to 35.2 ± 1.0% after the incorporation of 3 wt.% SNF. The improved dimensional stability could be due to reduced water plasticization owing to the fibroin's hydrophobic functional groups. As a result, the distance between polymer chains is reduced, leading to less dimensional change before and after the crosslinking process [56,57]. Similarly, Markstedt et al. [58] found that the incorporation of CNFs could also affect the dimensional changes of hydrogel matrices. In addition, Aarstad et al. [14] concluded that the syneresis of Alg-CNF was reduced compared to alginate, revealing that gels contracted less after saturation with calcium when CNFs were added to the Alg hydrogel.
Rheological Behavior of 3D-Printed Alg-SNF Scaffolds
The strong interaction between the carboxyl groups of alginate and the amide groups of SNFs also significantly controlled the viscosity of the solution, leading to improved stability of the hydrogels before crosslinking (Figure 2A). It should be mentioned that inks for extrusion-based printing must combine flow and shape-retention characteristics. Inks should flow through nozzles with minimal internal resistance. After the material has been dispensed, these properties should reverse, with immediate flow discontinuation, accumulation of internal forces opposing deformation, and elastic shape retention. The ability to show viscous flow and elastic shape retention is identified as viscoelasticity [7,59]. We examined the viscoelastic behavior of the hybrid hydrogels. The storage modulus (G′) and loss modulus (G″) of all hydrogels at different frequencies are provided in Figure 2B,C. The results indicated that all samples had potential printability and showed a dominance of elasticity, since the storage modulus was superior to the loss modulus (G′ > G″), especially at higher frequencies. The elastic-dominated trait (G′ > G″) at a low-frequency sweep could result in a rigid construction after printing [60]. However, G′ and G″ were functions of the angular frequency (Hz), depending on the sample type. While Alg-0SNF and Alg-1SNF showed a liquid-like behavior, especially at low frequency, both G′ and G″ were significantly enhanced with the angular frequency for the samples with higher SNF content. At a high frequency (100 s⁻¹), Alg-2SNF and Alg-3SNF revealed a noticeable elasticity, or a more rigid-like construction, after the crossover of the moduli. The increased modulus of these samples might be due to structural entanglement, which could originate from the strong interconnecting networks between the carboxylic groups of alginate and the amide groups of SNF. Markstedt et al. [15] reported a similar trend for the Alg-CNF hydrogel. The complex modulus (G′ + iG″) demonstrated in Figure 2D reveals that the addition of SNFs resulted in a significant improvement of the elastic and loss moduli of alginate, leading to an improved complex modulus. This behavior could be helpful for the printability of the hydrogels, since the polymer solution acts similarly to a viscous material while the printed material supports its shape. However, the storage modulus of Alg-3SNF was lower than that of Alg-2SNF at lower frequencies. It might be due to the agglomeration of SNF in Alg-3SNF, which was not conducive to stress transfer, thus resulting in a decrease in the elastic modulus. Similarly, Jiang et al. [61] reported that agglomeration of CNFs could lead to modulus reduction. The tan δ value (Figure 2E) was also calculated from G″/G′ to evaluate how gel-like the inks were. Tan δ values below 1 at the measured frequencies indicate that the inks are more gel-like than liquid [40]. The results showed that the samples became more elastic as the SNF concentration increased and were elastically dominated (tan δ < 1) over the frequency range. However, Alg-0SNF had tan δ > 1 at lower frequencies, indicating viscous behavior in the relaxation time.
1SNF showed a liquid-like behavior, especially at a low frequency, both G' and G" were significantly enhanced with the angular frequency at higher SNF content samples. At a high frequency (100 s −1 ), Alg-2SNF and Alg-3SNF revealed a noticeable elasticity or more rigid-like construction after the crossover moduli. The increased modulus of these samples might be due to the structural entanglement, which could be originated from the strong interconnecting networks between carboxylic groups of alginate and amide groups of SNF. Markstedt et al. [15] reported a similar trend for the Alg-CNF hydrogel. The complex modulus (G' + iG") demonstrated in Figure 2D reveals that the addition of SNFs resulted in a significant improvement in the elastic and loss moduli of alginate, leading to an improved complex modulus. This behavior could be helpful for the printability of hydrogels since polymer solution acts similarly to a viscous material, and the printed material supported its shape. However, the storage modulus of Alg-3SNF was lower than Alg-2SNF in lower frequencies. It might be due to the agglomeration of SNF in Alg-3SNF, which was not conducive to stress transfer, and thus resulting in a decrease in the elastic modulus. Similarly, Jiang et al. [61] reported that agglomeration of CNFs could lead to modulus reduction. The tan δ value ( Figure 2E ) was also calculated from G"/G' to evaluate how gel-like inks were. Tan δ values below 1 at the measured frequencies indicate that the inks are more gel-like than liquid [40]. The results showed that the samples became more elastic as the SNF concentration increased and were elastically dominated (tan δ < 1) over the frequency range. However, it was indicated that Alg-0SNF had tan δ > 1 in lower frequencies meaning the viscous behavior in the relaxing time. According to Figure 2F, the incorporation of SNF also altered the viscosity of the solution. It might be related to the electrostatic interactions between the N-H group of SNF and the -COOH group of the alginate matrix. Our results demonstrated that Alg-SNF hydrogels had greater viscosity than Alg-0 SNF, and the viscosity of the hybrid hydrogels decreased with the shear rate, demonstrating the viscoelastic property of Alg-SNF hydrogels. Other studies similarly found that the electrostatic interactions between the functional groups available in the inks may lead to enhanced elastomeric behavior [62]. In addition, Figure 2F revealed that the viscosity reduced with the increasing shear rate for both Alg and Alg-SNF, confirming shear-thinning behavior. However, the incorporation of SNF in Alg solutions (especially Alg-2SNF and Alg-3SNF) dramatically improved its According to Figure 2F, the incorporation of SNF also altered the viscosity of the solution. It might be related to the electrostatic interactions between the N-H group of SNF and the -COOH group of the alginate matrix. Our results demonstrated that Alg-SNF hydrogels had greater viscosity than Alg-0 SNF, and the viscosity of the hybrid hydrogels decreased with the shear rate, demonstrating the viscoelastic property of Alg-SNF hydrogels. Other studies similarly found that the electrostatic interactions between the functional groups available in the inks may lead to enhanced elastomeric behavior [62]. In addition, Figure 2F revealed that the viscosity reduced with the increasing shear rate for both Alg and Alg-SNF, confirming shear-thinning behavior. 
However, the incorporation of SNF in Alg solutions (especially Alg-2SNF and Alg-3SNF) dramatically improved its shear-thinning behavior. For instance, when the shear rate increased from 10 −2 to 10 2 s −1 , the viscosity of Alg-2SNF decreased from 8.2 × 10 3 to 2.8 × 10 0 Pa.s. Nevertheless, the viscosity of Alg-0SNF only decreased from 3.4 × 10 1 to 1.8 × 10 0 Pa.s in this shear rate range.
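As a concrete companion to the tan δ and complex-modulus discussion above, the short sketch below computes tan δ = G″/G′ and |G*| = √(G′² + G″²) over a frequency sweep and labels each point as gel-like (tan δ < 1) or liquid-like (tan δ > 1). This is a minimal illustration with fabricated moduli arrays standing in for the Figure 2B,C data, not the authors' analysis code.

```python
import numpy as np

freq = np.logspace(-1, 2, 8)                   # frequency sweep (Hz), placeholder
G_storage = 20.0 * freq**0.40                  # placeholder G' (Pa)
G_loss = 30.0 * freq**0.10                     # placeholder G'' (Pa)

tan_delta = G_loss / G_storage                 # tan δ = G''/G'
complex_modulus = np.hypot(G_storage, G_loss)  # |G*| = sqrt(G'^2 + G''^2)

for f, td, gstar in zip(freq, tan_delta, complex_modulus):
    state = "gel-like" if td < 1 else "liquid-like"
    print(f"f = {f:7.2f} Hz  tan δ = {td:.2f}  |G*| = {gstar:7.1f} Pa  -> {state}")
```

With these placeholder exponents the ink crosses from liquid-like at low frequency to gel-like at high frequency, mirroring the Alg-0SNF behavior described above.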
It could be seen that samples with higher SNF concentrations, when exposed to stress, could restore their initial shape after stress release. Compared to Alg-CNF [15], Alg-2SNF showed enhanced shear-thinning behavior, with the same change in viscosity observed over a lower range of shear stress. The experimental data were also fitted to Equation (2). The (n − 1) values were estimated at about −0.35, −0.571, −0.842, and −0.594 for the Alg-0, 1, 2, and 3 SNF samples, respectively. Accordingly, the exponent n indicated that the SNF content considerably affected the shear-thinning behavior of the inks; notably, Alg-2SNF had the strongest shear-thinning behavior among the inks.
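For readers who want to reproduce the fitting step, the sketch below estimates the exponent (n − 1) of a power-law (Ostwald–de Waele) model, η = K·γ̇^(n−1), by linear regression in log–log space. It assumes Equation (2) is the standard power-law form; the shear-rate and viscosity arrays are synthetic placeholders mimicking Alg-2SNF-like behavior.

```python
import numpy as np

# Placeholder viscosity sweep mimicking Alg-2SNF-like shear thinning
shear_rate = np.logspace(-2, 2, 20)            # 10^-2 ... 10^2 s^-1
viscosity = 170.0 * shear_rate**(-0.842)       # ~8.2e3 Pa·s at 0.01 s^-1

# Power-law model: eta = K * gamma_dot**(n - 1); in log-log space this is
# log10(eta) = log10(K) + (n - 1) * log10(gamma_dot), i.e. a straight line.
slope, intercept = np.polyfit(np.log10(shear_rate), np.log10(viscosity), 1)

print(f"(n - 1) = {slope:.3f}")                # more negative = stronger shear thinning
print(f"K = {10**intercept:.0f} Pa·s^n")       # flow-consistency index
```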
Structural Properties of 3D-Printed Alg-SNF Scaffolds
To increase the shape fidelity of the alginate, SNF was incorporated as a filler in the ink compositions. The effect of SNF on the morphology and the micro- and macro-porous structure of the 3D-printed alginate hydrogel was studied (Figure 3). Figure 3A,B represents the optical and SEM images of the five-layer grid pattern of the hybrid hydrogels, respectively. Comparing small grids printed with Alg-0SNF against those printed with Alg-2SNF and Alg-3SNF demonstrated how the low viscosity of alginate limited the printing resolution. For instance, according to Figure 3C,D, the strut width of the printed grid decreased, while the macro-pore size increased, after the incorporation of SNFs. Noticeably, the average strut width of Alg-0SNF was 952 ± 40 µm, which was significantly reduced to 205 ± 14 µm for Alg-3SNF, consistent with the viscosity characterization presented in Figure 2F. The increase in the viscosity of the inks enhanced the stability of the extruded strands before cross-linking. In addition, SNFs could significantly control the microporous structure of the alginate scaffolds. According to Figure 3B, all hydrogels exhibited a highly porous network with interconnected porosity. The presence of silk nanofibrils could be identified within the alginate matrix, especially in the high-SNF-content samples (Alg-3SNF). These interconnected porous and fibrous networks could mimic the natural ECM and increase the transport of nutrients and waste products, enabling effective cell functions [63]. However, the pore size and uniformity of the hydrogels changed significantly depending on the hydrogel composition. According to Figure 3E, the average pore size of Alg (100 ± 80 µm) was reduced to 38 ± 2 µm after the incorporation of 2 wt.% SNF (Alg-2SNF) and then increased to 77 ± 50 µm for Alg-3SNF. Anguiano et al. [64] similarly found a decrease in the pore size of hydroxypropyl cellulose hydrogels after the incorporation of molybdenum disulfide. The reduced pore size of the hydrogels could be related to the interaction between alginate and SNF. Interactions between the amide and carboxylic groups of the resulting hydrogels led to a much denser and more compact structure, producing smaller pores [65]. However, the pore size of Alg-3SNF was significantly larger than that of the other samples. During the freeze-drying step, ice crystals nucleated and pushed alginate and SNF into the interstitial spaces between ice crystals. In the Alg-3SNF sample, SNF agglomerates became conspicuous and the viscous forces of the solutes were reduced, allowing bigger ice crystals to form and develop larger pores [66,67]. In addition, SNF agglomeration and reduced interaction with the matrix may also result in a non-uniform distribution of pores.
One of the benefits of employing hydrogels in tissue engineering is their capacity to absorb water and degrade in biological environments. The swelling behavior of the different hydrogel compositions was monitored after 1 h in PBS at 37 °C. According to Figure 4A, while all hydrogels showed significant swelling ability, SNF could modulate the swelling ratio of the scaffolds. Among the various samples, Alg-0SNF showed the maximum swelling ratio of 1273 ± 230%, owing to the superior water-retention ability of polysaccharides. During the initial stage of the swelling process, water was absorbed by capillaries present in the alginate hydrogel. Under this condition, the hydrophilic groups (-OH/-COOH) combined with water molecules to create a hydration layer. However, this value was significantly reduced to 814 ± 35% after the incorporation of 1 wt.% SNF. This could be due to the robust interactions between the available hydrophilic groups of the alginate matrix and SNF. It should be mentioned that, while both SNF and alginate have hydrophilic groups on their molecular chains, SNF is a relatively hydrophobic protein that may decrease the swelling ability. However, the swelling ratio increased to 1046 ± 80% when the SNF concentration was raised to 3 wt.%. This might be due to the agglomeration of SNF in Alg-3SNF, which increased the hydrophilic side chains of alginate available for water interaction [68].
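For reference, swelling ratios such as those quoted above are conventionally computed from swollen and dry scaffold weights. The snippet below shows that standard calculation; the weights are illustrative placeholders, and the exact weighing protocol here is an assumption rather than the authors' stated method.

```python
def swelling_ratio(wet_mg: float, dry_mg: float) -> float:
    """Swelling ratio in percent: (W_wet - W_dry) / W_dry * 100."""
    return (wet_mg - dry_mg) / dry_mg * 100.0

# Placeholder: a 12 mg dry scaffold swelling to 165 mg after 1 h in PBS
print(f"{swelling_ratio(165.0, 12.0):.0f}%")   # -> 1275%, near the Alg-0SNF value
```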
The degradation rate of the hybrid Alg-SNF hydrogels was also evaluated in PBS (pH = 7.4) for 14 days (Figure 4B). Results showed that the incorporation of 2 wt.% SNF within the alginate hydrogel significantly decreased the weight loss and improved physiological stability, while the swelling ratio remained sufficient for cellular behavior. This might be due to the lower swelling ability of hydrogels with increasing SNF content. However, the degradation rate of the Alg-3SNF hydrogel was significantly enhanced due to the agglomeration of SNF and its heterogeneous structure. Gharasoo et al. [69] proposed a model confirming that pore-scale heterogeneity could consistently promote the degradation rate. According to previous studies, the degradation rate of 3D-printed alginate is not in an appropriate range for various tissue engineering applications, such as cartilage. In this study, alginate and SNF were combined at different ratios to provide finer control over the rate of bioink degradation. Our results demonstrated that the degradation rate of alginate was significantly reduced after the incorporation of 2 wt.% SNF. This degradation rate was comparable with the results of other 3D-printed scaffolds proposed for cartilage tissue engineering [70], making it promising for this application.
Mechanical Properties of 3D-Printed Alg-SNF Scaffolds
One of the main issues associated with alginate hydrogels is their poor mechanical performance. Here, we investigated the role of SNF in the tensile and compression properties of the alginate hydrogel. Figure 5A shows a tensile specimen in a custom-designed gripper adapter. The formation of an aligned broken strut after the tensile test in Figure 5A confirmed that SNF, randomly distributed in the hydrogel matrix, became aligned with the tension direction. This behavior could result in the improvement of the tensile performance of Alg-SNF hydrogels. The representative stress-strain curves of the 3D-printed hydrogels in Figure 5B presented a linear trend until 10% strain, followed by non-linear behavior. The slope of the linear section was used to determine Young's modulus [71]. From these curves, the average tensile strength (Figure 5C), elastic (Young's) modulus (Figure 5D), and elongation (Figure 5E) were estimated. According to these values, it could be concluded that the SNF content had a significant effect on the tensile performance of the hybrid hydrogels. The tensile strength of the alginate hydrogel was significantly enhanced (2 times) with increasing SNF content up to 2 wt.% (p < 0.05). Moreover, incorporation of SNF up to 2 wt.% significantly enhanced (about 2 times) the elastic modulus, from 324 ± 8 kPa (Alg-0SNF) to 643 ± 15 kPa (p < 0.05), which then decreased with increasing SNF content up to 3 wt.%. This behavior probably originated from the hydrogen bonding between Alg and SNF, which led to the creation of a stiffer hydrogel matrix [72]. It should be mentioned that the tensile behavior of alginate hydrogels is highly dependent on the alginate type, formulation, gelling conditions, incubation, and strain rate [73]. However, improvement of tensile performance after the incorporation of SNF was similarly reported in previous studies. Liling et al. [74] and Barros et al. [75] demonstrated that the mechanical properties of sodium alginate hydrogels were mainly influenced by the type and concentration of the cross-linker agents and the presence of additives. In addition, the tensile performance of silk-incorporated scaffolds is influenced by the β-sheet content. β-sheet crystals in SNFs, formed through strong supramolecular interactions, afford mechanical stiffness and structural stability. When SNFs bear tensile forces, β-sheet nanocrystals and chains can partially orient, creating interlocking regions that transfer load between chains. However, the incorporation of 3% SNF significantly reduced the mechanical strength and elongation. Generally, the mechanical properties of hybrid hydrogels strongly depend not only on the intrinsic characteristics of the reinforcements but also on their good distribution.
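To make the modulus-extraction step explicit, the sketch below fits a line to the initial linear region of a stress-strain record (taken as 0-10% strain, matching the text) to obtain Young's modulus. The arrays are synthetic placeholders, not the measured Alg-SNF data.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.30, 61)                            # dimensionless strain
stress = 643.0 * strain + 2.0 * rng.normal(size=strain.size)   # placeholder stress, kPa

# Young's modulus = slope of the initial linear region (here <= 10% strain)
linear = strain <= 0.10
E_kPa, _ = np.polyfit(strain[linear], stress[linear], 1)
print(f"Young's modulus ≈ {E_kPa:.0f} kPa")                    # ~643 kPa, the Alg-2SNF value
```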
Therefore, the poor dispersion and aggregation of SNFs in the alginate matrix was a significant barrier to the formation of a uniform Alg-3SNF, leading to a heterogeneous structure containing large pores with a broad size distribution [49]. A similar result was reported for silk fibroin and methacrylated gelatin (GelMA) hydrogels [76,77]. It was also found that the incorporation of CNFs within a fibroin matrix significantly improved tensile performance, owing to the strong interaction between the SF protein and CNFs. The compressive stress-strain curves were provided up to 60% strain (Figure 5F), with the 0-10% strain region also presented at higher magnification. Depending on the SNF content, the hydrogels failed at strains between about 70% and above 90%: while the pure alginate sample failed at about 70% strain, all hybrid hydrogels could bear more than 90% strain before failure. Moreover, the compressive strength of the hybrid hydrogels changed significantly depending on the SNF content. It was also found that there was no significant difference between the compressive stresses of the constructs at lower strains (<20%), followed by an increase in stress. In the first linear section, during stress application, the hydrogel experienced a stressed state and began elastic deformation to store energy and resist the compressive stress. This deformation might be related to the loosening of free water that was not wholly captured in the hydrogel matrix. The considerably increased stress after 20% strain might be related to the deformation reaching its limit value, making further deformation more difficult [39]. For example, at 50% strain, the compressive strength increased from 201 ± 42 kPa (for Alg-0SNF) to 286 ± 83 kPa (for Alg-2SNF) (Figure 5G). This trend was similarly observed in the compressive modulus. According to Figure 5H, the incorporation of SNF significantly enhanced the compressive modulus, which ranged from 305 ± 50 kPa for Alg-0SNF to 800 ± 170 kPa for the Alg-2SNF sample. The exfoliated SNF displayed a fibrous structure and, if uniformly dispersed in an aqueous solution, SNFs could transfer stress within the alginate matrix through their interaction with alginate [39]. Our results revealed that the uniform distribution of SNFs could efficiently reinforce the alginate scaffold, which might be related to the high mechanical performance of SNFs and the strong interfacial interactions between SNF and alginate. This behavior was similarly reported for other biopolymer nanofibril-reinforced hydrogels [78,79]. Another parameter controlling the compressive performance of the hydrogels was pore size (Figure 3E). The strength of the hydrogels decreased with increasing pore size, in agreement with the swelling data. This behavior was similarly reported in previous studies [49]. Compared to similar studies, the compressive behavior of Alg-SNF hydrogels was more significant.
For instance, the compressive modulus of optimized alginate-CNF was reported to be about 230 kPa [39]. Generally, the stiffness of a material plays a critical role, as it affects the cell-material interaction [80]. Handorf et al. [81] found that the stiffness of the ECM affects cell proliferation, the direction of cell migration, cell adhesion, and cell differentiation. According to Thiele et al. [82], soft matrices (0.1-1 kPa) are neurogenic and, therefore, mimic brain tissue. Stiffer matrices (8-17 kPa) are myogenic and mimic muscle tissue, while rigid matrices (25-40 kPa) are osteogenic and, therefore, mimic collagenous bone. Due to the differences in microstructure between the different cartilage zones [83], the compression modulus varies between 230 kPa and 790 kPa [84]. In this study, it was found that Alg-SNF hydrogels with stiffness in the range of 130-346 kPa could be appropriate for cartilage tissue engineering.
Cell Culture Investigation
To study the role of SNF in the biological properties of the 3D-printed alginate hydrogel, fibroblasts were seeded on the samples. According to Figure 6A, a substantial increase in cell survival after 5 days of culture was detected, confirming that all samples were cytocompatible. Moreover, the addition of SNF to the alginate hydrogel resulted in a 1.7-fold increase in cell survival (from 91 ± 5% of control for Alg-0SNF to 168 ± 9% for Alg-2SNF). However, increasing the SNF content to 3 wt.% decreased cell survival, which might be due to the enhanced degradation rate and lack of sufficient mechanical properties [85]. Improved cell viability after the incorporation of SNFs was similarly reported in previous studies. For instance, Nikam et al. [86] reported that cell adhesion to silk nanofibrils was significantly enhanced. Based on the optimized shear-thinning behavior, mechanical properties, and physiological stability of Alg-2SNF, this sample was selected for further biocompatibility evaluation.
To study cell viability in contact with Alg-2SNF and Alg-0SNF, a Live/Dead assay was performed. Figure 6B shows representative fluorescence microscopy images of cells seeded on the samples, showing both live (green) and dead (red) cells cultured for 1 and 5 days. According to Figure 6C, while cell viability was more than 90%, the density of live cells was significantly enhanced in contact with the Alg-2SNF sample. The role of nanofibrils in enhancing cell adhesion and viability has been confirmed in the literature [87]. The improved cell viability and proliferation could be related to the nanofibrous architecture providing anchor sites for the cells. Nanofibrous scaffolds improve the adsorption of certain proteins, including fibronectin, vitronectin, and laminin, allowing the cells to anchor more tightly to the matrix. Consequently, this can result in a higher number of attached cells [88].
[Figure 6 caption, continued: the cells were stained with Calcein AM (green) and EthD-II (red), indicating live and dead cells, respectively; (D) fluorescence images; (E) spreading, expressed as the fraction of area covered with cell clusters, of fibroblasts after 5 days of culture; the actin cytoskeleton and nuclei were stained with rhodamine-phalloidin (red) and DAPI (blue), respectively. Data are presented as mean ± SD (n = 3); * p < 0.05.]
To evaluate the role of the 3D-printed hydrogel composition in cell spreading, F-actin and nuclei were stained (Figure 6D), and the cell area on the various hydrogels was measured (Figure 6E). Our results showed that, while the fraction of hydrogel surface covered with fibroblasts after a day of culture was similar for the fibrillated and pure alginate, the fraction of hydrogels covered with fibroblasts significantly improved from 51.4 ± 4.1% (pure alginate) to 96.1 ± 3.2% (Alg-2SNF), showing the effective role of SNF in improving cell proliferation and spreading. Generally, cell fate can be controlled by various chemical and physical parameters of the microenvironment, called the cell niche [85,89]. Significantly, the mechanical properties of the cell matrix, such as stiffness, can act as powerful signals for the control of cell functions, including cell proliferation, migration, and differentiation [90,91]. Our results showed that the incorporation of SNF within the alginate hydrogel resulted in mimicking of the natural ECM and improved mechanical properties, leading to enhanced cell adhesion and spreading. Our results demonstrated that Alg-2SNF bioink with significant shear-thinning behavior could be promising for developing a 3D-printed scaffold with the desired mechanical properties, physiological stability, and biological properties. Alg-2SNF bioink was also successfully used to create 3D-printed complex shapes resembling ear cartilage and the IUT logo (Figure 7B,C). The ability to easily control the pore size and layer numbers using Cura parameters, in contrast to other studies [92], is an attractive property of this ink, making it appropriate for 3D printing complex tissues. Generally, the biocompatibility of tissue substitutes is essential in order to exclude short- and long-term health impairment. Furthermore, there is growing evidence that mechanical properties are fundamental for cellular behavior and consequent tissue functionality [24]. According to the intrinsic properties of SNF, it could mimic the structural and, subsequently, mechanical behavior of functional tissues such as tendons, ligaments, and menisci, which have immense strength and stiffness [24]. By modifying the nanofibril volume fraction in the alginate matrix, the mechanical performance of Alg-SNF was brought near the range of native human soft tissues such as auricular cartilage [93]. Therefore, 3D-printed Alg-SNF hydrogel could have the potential to be used for cartilage tissue engineering.
Conclusions
In this study, a 3D-printed hydrogel based on alginate-silk nanofibril (Alg-SNF) was introduced for soft tissue engineering. SNF significantly changed the injectability of alginate by improving its shear-thinning behavior and shape retention before ionic crosslinking. Furthermore, the rheological and mechanical properties, as well as the physiological stability, of Alg-SNF hydrogels were significantly modulated depending on the SNF content. Noticeably, the incorporation of 2 wt.% SNF significantly enhanced the tensile strength (5 times) and compressive strength (3 times) while reducing the degradation rate (1.6 times after 14 days of incubation) and swelling ratio (1.5 times), compared to the 3D-printed alginate hydrogel. Moreover, the 3D-printed Alg-SNF hydrogel could maintain the attachment and proliferation of cells in vitro. Finally, the hybrid Alg-SNF hydrogel with the desired physical and mechanical properties could be 3D-printed in complex shapes such as ear cartilage. Taken together, our results suggest the potential of Alg-SNF ink for the engineering of soft tissues.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2023-03-01T16:22:06.926Z | 2023-02-24T00:00:00.000 | {
"year": 2023,
"sha1": "ddfa0da2b911006f882e64713550c9579886f7db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/15/3/763/pdf?version=1677239622",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30b190cdee01d988b8784526229e15c40b7942c0",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239113697 | pes2o/s2orc | v3-fos-license | Evaluation of the impact of enhanced virtual forms and gamification on intervention identification in a pharmacist-led ambulatory care clinic
Background Adoption of healthcare technology in the ambulatory care setting is nearly universal. Clinical decision support system (CDSS) technologies improve patient care through the identification of additional care opportunities. With the movement from paper-based to electronic clinical intake forms, the opportunity to improve identification of gaps in care utilizing CDSS in the ambulatory care setting exists. Objective To evaluate the impact of CDSS-enhanced digital intake forms, with and without aspects of gamification, on the identification of intervention opportunities in an ambulatory care pharmacy setting. Methods Patients were invited to complete visit intake paperwork via virtual forms as part of a CDSS-enhanced mobile application designed to identify potential interventions based on patient age, sex, disease state(s), and user-provided information. Patients were randomized to receive optional patient-specific health questions 1) with or 2) without elements of gamification. Gamification elements included trivia questions, fun facts, and the chance to win a prize. A retrospective review was used to assess interventions identified for a random sample of patients seen within the same time frame who did not utilize the mobile application. Interventions were compared across groups utilizing ANOVA. t-tests were used for a subgroup analysis. Results From January to May 2019, 353 potential interventions were identified for 220 study participants. 0.44 (±0.82), 1.8 (±2.0), and 2.1 (±1.8) interventions per participant were identified for the control, virtual forms, and virtual forms + gamification groups, respectively. Significant differences in intervention identification across groups were found using a one-way ANOVA (F = 17.46, p < .001). Post hoc analysis demonstrated a significant difference in interventions identified for those completing 50–100% (n = 32) and those completing less than 50% (n = 18; p < .001) of the optional health questions in the virtual forms + gamification group. Conclusions Utilization of CDSS-enhanced clinical intake forms increased identification of potential interventions, though gamification did not significantly impact this identification.
Introduction
The American Society of Health-Systems Pharmacists (ASHP) Practice Advancement Initiative (PAI) began in 2010 with the mission to provide the tools and resources needed to support the growth of the profession of pharmacy. 1 The 2030 PAI includes recommendations for pharmacists to "leverage and expand their scope of practice…to optimize patient care," share accountability for patient outcomes, use technology to advance patient care, and develop and apply technologies in areas such as risk assessment to identify patients needing care. 2 The near universal adoption of electronic medical record (EMR) systems 3 has opened the door for the introduction of applications that work with the EMR to support the patient care process. These include digital clinical decision support systems (CDSS), which are designed to improve the clinical decision-making process through the integration of individualized clinical, health, and patient information. CDSS is generally employed to provide patient-specific recommendations in real time to the provider for evaluation via reminders, reports of aggregated patient data, templates for the completion of documentation, workflow support, displaying clinical guidelines, and other means. 4 Use of CDSS has been shown to increase adherence to clinical guidelines for specific disease states. 5 Despite the availability of evidence-based recommendations for preventative services provided and updated regularly by national health organizations, less than 10% of adults over 35 years of age received the recommended services in 2015. Five percent did not receive any of these services. 6 Poor adherence to screening and management guidelines is further reflected by the Healthy People 2020 objectives, which include underperforming metrics such as "increase the proportion of adults with diabetes who have a glycosylated hemoglobin measurement at least twice a year" (D-11), "increase the proportion of adults with hypertension who meet the recommended guidelines" (HDS-10), "increase the proportion of adults who get sufficient sleep" (SH-4), "increase the proportion of adults who meet current Federal physical activity guidelines for aerobic physical activity and for muscle-strengthening activity" (PA-2), and "increase the percentage of adults who are vaccinated against pneumococcal disease" (IID-13). 7 Through CDSS utilization, the opportunity exists to identify and address non-adherence to guideline recommendations such as these.
One major barrier to successful implementation of CDSS is that needed information is often lacking in the EMR. According to Pharmacy Forecast:2020 Patient-Centered Care, it is anticipated that patient-reported data will become a valuable resource for making care decisions. 8 Previous studies have demonstrated that applications can be utilized to collect patient-specific information necessary for improving patient care. Gray et al. developed a tool to support collection of health information and goal setting for patients with chronic disease and disability. This application allowed patients to enter information between visits, which was then used to assess their health behaviors and status. Patients reported that the application helped them feel engaged in their healthcare and improved self-care, while providers reported that the application helped to guide clinic visits. 9 An advantage to use of these tools is enhanced care consistency. Previously, a mobile health application was demonstrated to improve the standardization of care by rural providers for a series of common disease states. 10 Other work has demonstrated improved consistency in EMR documentation, 4 and tailoring the application to the individual patient, practice, and program has been found to improve provider and patient experience. 9 Focus on the patient experience is important for maximizing engagement with digital applications. One method for increasing engagement is gamification, using game-like features in a non-game context. Elements of gamification include use of avatars, leaderboards, completion awards, accomplishment badges, ladders (answers compared to national averages), competitions, timers, and trivia. 11 The impact of gamification on application utilization and its ability to impact user outcomes is well illustrated by a study of the Pokémon Go application. This study found that the more users were engaged with the application, the more it increased their physical activity. 12 These findings suggest that implementing a CDSS application designed to identify potential patient-specific healthcare interventions may increase the identification of guideline-based interventions due to its ability to gather needed health information. Additionally, research suggests potential value of gamification as a tool to engage users of digital applications. This study sought to evaluate the impact of clinical decision support system (CDSS)-enhanced digital intake forms, with and without aspects of gamification, on the identification of intervention opportunities in an ambulatory care pharmacy setting.
Location
This study took place at a university that is a self-insured employer offering pharmacy services to employee health plan subscribers, including a dispensing pharmacy and ambulatory care clinic services. On campus at the university is a pharmacist-led clinic that provides a variety of services to university employees and their dependents. Services include immunizations, biometric screening, medication therapy management (MTM), women's health services, smoking cessation, dietetics counseling, and disease state management including hypertension, dyslipidemia, diabetes, prediabetes, asthma, and overweight and obesity. Each week, the clinic averages 150 individual appointments. The clinic staff includes clinical pharmacists, pharmacy technicians, pharmacy interns, a dietician, and dietetics students. This study was approved by the institutional review board.
Study design
All adult patients with a regularly scheduled clinic appointment were eligible to participate. Patients with an appointment for vaccine or medication administration only were excluded. Patients were invited to complete visit intake forms using a mobile application. A link to complete visit forms via the application was included in an appointment reminder email sent to patients with an upcoming clinic appointment. Patients who did not complete their intake paperwork prior to their scheduled appointment were invited to complete the forms via the application using a clinic-provided mobile device or via paper forms. Upon completion of the virtual forms or notification that the forms had been completed prior to the appointment, the clinic receptionist accessed a PDF copy of the completed forms within the patient's EMR and printed the forms for use by the clinician during the patient visit. During the second month of recruitment, technical difficulties with the application platform causing delays in PDF generation led the team to scale back recruitment to only include patients who completed the virtual forms online prior to their appointment in order to decrease interruptions in the clinic workflow. All clinic patients were given the opportunity to complete their visit forms via the application irrespective of their decision to participate in the study.
The study application was developed in collaboration with the clinic EMR provider. Over several months, a branded mobile application was developed that incorporated the clinic patient intake forms enhanced with an algorithm that individualized additional health questions for patients based on information captured by the intake forms. The algorithm was designed to identify potential interventions based on patient age, sex, disease state(s), and other user-provided information and clinical practice guidelines published by national health organizations including the U.S. Preventative Services Task Force, Centers for Disease Control, American Diabetes Association, and American Medical Association [13][14][15][16][17] (Table 1). The application randomly assigned participants to receive the virtual forms and additional questions only (virtual forms group) or virtual forms and additional questions enhanced with elements of gamification (virtual forms + gamification group). Elements included trivia questions, fun facts, and the chance to win a prize (Table 2). Participants could opt out of the additional questions at any time. Upon completion of the virtual forms and extra questions, the application generated a PDF of potential interventions within the participant's EMR. This PDF could then be accessed during the patient appointment and each intervention could be assessed for appropriateness. The control group consisted of a retrospective, random sample of patients who attended a clinic appointment within the study period but opted not to utilize the mobile application. A selection of 300 random patients seen in the clinic during the study period was reviewed. Individuals who had used the mobile application to complete their intake forms or who were seen for an ineligible appointment type were excluded. (Supplemental fig. 1).
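To illustrate the kind of rule-based logic described above, the sketch below flags candidate interventions from intake data keyed to age and reported history, treating a skipped optional question the same as a "no" (as with the thyroid-screening example discussed later). The rule set and thresholds are simplified assumptions for illustration, not the study application's actual algorithm.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Intake:
    age: int
    sex: str                                   # "F" or "M"
    conditions: set = field(default_factory=set)
    thyroid_checked_5y: Optional[bool] = None  # None = optional question skipped
    vaccines_current: Optional[bool] = None

def flag_interventions(p: Intake) -> list:
    """Return guideline-style intervention flags for one patient record."""
    flags = []
    # A skipped question (None) or a "no" both surface the flag for review
    if p.age >= 30 and p.thyroid_checked_5y is not True:
        flags.append("referral for thyroid screening")
    if p.vaccines_current is not True:
        flags.append("need for vaccination")
    if "diabetes" in p.conditions:
        flags.append("A1c measurement at least twice yearly")
    return flags

print(flag_interventions(Intake(age=45, sex="F", conditions={"diabetes"})))
```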
A post hoc analysis was performed to determine the relationship between the rate of completion of the additional questions and identification of intervention opportunities. Data were only available for the virtual forms + gamification group.
Statistical analyses
An a priori alpha of 0.05 was set. The number of potential interventions identified was compared across groups utilizing ANOVA. A subgroup analysis of participants in the virtual forms + gamification group was performed to determine if the rate of completion of the additional questions impacted the rate of potential intervention identification. Completion of the optional questions was dichotomized (less than 50% vs. 50-100%), and the subgroups were compared using t-tests.
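A minimal sketch of this analysis plan using SciPy is shown below: a one-way ANOVA across the three study groups and an independent-samples t-test for the completion subgroups. The per-participant counts are invented placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder per-participant intervention counts for the three groups
control = rng.poisson(0.44, size=50)
virtual = rng.poisson(1.8, size=85)
virtual_gamified = rng.poisson(2.1, size=85)

f_stat, p_anova = stats.f_oneway(control, virtual, virtual_gamified)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Subgroup comparison: 50-100% vs <50% completion of the optional questions
high_completion = rng.poisson(2.8, size=32)
low_completion = rng.poisson(1.0, size=18)
t_stat, p_t = stats.ttest_ind(high_completion, low_completion)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
```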
Results
A total of 346 individuals participated in this study, but the allocation of 126 was unknown due to an application error, resulting in their exclusion. The remaining 220 participants included in the analyses had an average age of 44.1 ± 12.4 years. The majority were female (53.6%) and Caucasian (75.9%). Demographics did not differ significantly between groups (Table 3). A total of 353 interventions were identified, with an average of 1.6 interventions per participant. Interventions were identified at a rate of 0.44 ± 0.82, 1.8 ± 2.0, and 2.1 ± 1.8 per person in the control, virtual forms, and virtual forms + gamification groups, respectively (Table 4). Though the rate of intervention identification was nearly identical in the virtual forms and virtual forms + gamification groups, both differed significantly from the rate of identification in the control group (p < .001) (Table 5). The most commonly identified intervention opportunities were physical activity counseling, dietary counseling, referral for thyroid screening, need for vaccination, and MTM for cholesterol management (Fig. 1). The identification of opportunities for intervention was higher in the virtual forms and virtual forms + gamification groups compared to the control group for all interventions except referral for cholesterol management and referral for hypertension management.
A post hoc analysis demonstrated a significant difference in interventions identified for those completing 50-100% (n = 32) and those completing less than 50% (n = 18; p < .001) of the optional health questions in the virtual forms + gamification group (Table 6).
Discussion
This study demonstrated a significant difference in the number of potential interventions identified among virtual form users compared to paper forms. However, incorporating elements of gamification into the virtual forms did not significantly impact the average number of interventions identified. The subgroup analysis demonstrated that the number of additional questions answered was directly related to the number of interventions identified. Taken together, these findings suggest that participants completed similar proportions of the additional questions in both the virtual forms and virtual forms + gamification groups. Thus, completion of the additional questions increased identification of intervention opportunities, though the elements of gamification included in this study did not increase the rate of completion.
Integration of a tool that aggregates screening recommendations into a single, user-friendly platform in an ambulatory care practice has the potential to improve patient care by enhancing identification of non-adherence to these screening recommendations. Eighty percent of the most commonly identified potential interventions in this study required completion of the additional health questions. For example, "referral for thyroid screening" was the third-most identified intervention. Patient entry of an age of 30 years or greater into the virtual form triggered display of an additional question to evaluate patient thyroid screening status (e.g. "Have you had your thyroid checked within the last 5 years?") (Table 1). "Referral for thyroid screening" would appear on the potential intervention list for all patients opting out of the question or selecting "no." Assessment of the need for screening could then be completed within the visit. Interestingly, two of the most commonly identified interventions were not identified in any patients utilizing paper forms ("referral for thyroid screening" and "need for vaccination") (Fig. 1). Not only could this tool improve the rate of intervention identification but also the consistency of that identification. Study limitations include that percent completion data for the additional questions could be collected only from participants in the virtual forms + gamification group, as well as the lack of patient diversity. Modification of the study protocol to recruit only patients completing the virtual forms prior to their appointment increased the risk for selection bias. The majority of clinic patients completed the virtual forms prior to their scheduled appointment, so the impact on recruitment was thought to be negligible.
Future studies should determine the impact of intervention identification and patient education on the completion of recommended interventions and should include dissemination into additional ambulatory care populations. Though this study focused on a pharmacist-led clinic, similar applications would have utility in a physician clinic due to the focus on guidelines-based health recommendations. Since this application was designed to integrate specifically with the EMR of the study clinic, a similar development process would be needed for interoperability with other EMR systems.
Conclusion
Use of digital forms enhanced with a CDSS for identifying intervention opportunities and gathering additional patient-specific information significantly increased the rate of identification of opportunities for intervention in an ambulatory care clinic. However, the elements of gamification utilized in this study did not significantly impact this identification. Further, rate of completion of individualized additional questions was directly related to the rate of intervention identification.
Disclosure
The authors have nothing to disclose regarding real or potential conflicts of interest.
Funding
This work was supported by the ASHP Foundation New Investigator Award, Bethesda, Maryland. The funding agency had no role in the study design, data collection, analysis, or interpretation, writing of the report, nor the decision to submit the article for publication.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table 6
Sub-group analysis of intervention identification in the virtual forms + gamification group based on percent of additional questions completed. | 2021-10-22T15:21:41.616Z | 2021-09-04T00:00:00.000 | {
"year": 2021,
"sha1": "cde91ecfc11be3e8ad00a5b9280e7ae4a91925d3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rcsop.2021.100068",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4055fffd1013c02f4e6fbe212dcedf8f50a67102",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271928301 | pes2o/s2orc | v3-fos-license | 完全生物可降解室间隔缺损封堵器植入的中期随访结果
Objective: Ventricular septal defect (VSD) is a common congenital heart malformation. Modified occluders and the selection of better approaches have broadened the indications for VSD closure and reduced the incidence of related complications; among these advances, the use of fully biodegradable occluders is expected to completely resolve the problem of conduction block caused by VSD closure. This study aimed to compare the interim follow-up results of the fully biodegradable occluder and the metal occluder in transoesophageal echocardiography-guided VSD closure via a lower-sternal minor incision, and to analyse the risk factors for postoperative electrocardiographic and valvular abnormalities. Methods: The 3-year follow-up data of 27 patients who underwent VSD closure at the Second Xiangya Hospital of Central South University from January 1 to November 7, 2019 were reviewed. The safety and efficacy of the procedure were assessed using electrocardiographic and echocardiographic findings, and the risk factors for postoperative electrocardiographic and valvular abnormalities were explored by logistic regression analysis. Results: Twelve and 15 patients underwent VSD closure with the metal occluder and the fully biodegradable occluder, respectively. All patients survived during follow-up, and no serious complications such as atrioventricular block, large residual shunt, overly rapid absorption of the occluder, or significant valvular regurgitation occurred. There were no statistically significant differences between the metal occluder group and the fully biodegradable occluder group in electrocardiographic and color Doppler ultrasound findings at 1, 2, and 3 years after operation (all P>0.05). VSD occluder size was a risk factor for tricuspid regurgitation at 2 and 3 years postoperatively, and the difference between the occluder size and the defect size was a risk factor for tricuspid regurgitation at 2 years postoperatively (P<0.05). Conclusion: The fully biodegradable occluder shows high interim safety and efficacy in the closure of small VSDs, with postoperative outcomes equivalent to those of the conventional nitinol occluder.
Objective: With modified occluders and the selection of better approaches, the indications for VSD closure can be broadened while minimizing associated complications. The utilization of the fully biodegradable occluder holds promising potential in resolving the conduction block issues encountered during VSD closure. This study aims to compare the results of the fully biodegradable occluder with the metal occluder in transoesophageal echocardiography-guided VSD closure via a lower-sternal minor incision at the interim follow-up, and to find risk factors for the occurrence of electrocardiographic and valvular abnormalities postoperatively. Methods: We reviewed the postoperative and 3-year follow-up data of all patients who underwent the randomized controlled study of VSD closure from January 1 to November 7, 2019 in the Second Xiangya Hospital of Central South University. The safety and efficacy of the procedure were assessed and compared between the 2 groups by electrocardiogram and echocardiography results, and the risk factors for the occurrence of postoperative electrocardiographic and valvular abnormalities were studied with logistic regression analysis. Results: Twelve and fifteen patients underwent VSD closure with the metallic occluder and the fully biodegradable occluder, respectively. All patients survived during the follow-up period without major complications such as atrioventricular block, significant residual shunt, overly rapid absorption of the occluder, or significant valvular regurgitation. There were no significant differences in the results of electrocardiography and color Doppler ultrasonography between the metal occluder group and the fully biodegradable occluder group at 1, 2, and 3 years after operation (all P>0.05). The size of the occluder was a risk factor for tricuspid regurgitation at 2 and 3 years postoperatively, and the difference between the occluder size and the VSD defect size was a risk factor for tricuspid regurgitation at 2 years postoperatively (P<0.05).
Conclusion:
This study adequately demonstrates the safety and efficacy of fully biodegradable occluders in small VSD closure and shows the same postoperative effects as conventional nitinol occluders.
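As an illustration of the logistic regression screening described in the abstract, the sketch below regresses a binary tricuspid-regurgitation outcome on occluder size and the occluder-defect size difference. All values are fabricated placeholders (and the sample is enlarged beyond the 27-patient study purely for a stable demonstration fit); only the modeling pattern is meant to mirror the reported analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
occluder_mm = rng.uniform(4.0, 10.0, n)             # placeholder occluder sizes
oversize_mm = rng.uniform(0.5, 2.5, n)              # occluder size minus defect size
# Placeholder outcome: larger / more oversized occluders -> higher TR odds
logit = -8.0 + 0.7 * occluder_mm + 1.2 * oversize_mm
tr = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # tricuspid regurgitation (0/1)

X = sm.add_constant(np.column_stack([occluder_mm, oversize_mm]))
result = sm.Logit(tr, X).fit(disp=False)
print(result.summary(xname=["const", "occluder_size_mm", "oversize_mm"]))
```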
Figure 1
Figure 1 Preoperative and postoperative echocardiography of patients with the fully biodegradable occluder for VSD closure. A: Preoperative echocardiogram (white arrow indicates the VSD); B: Echocardiogram on the third day after surgery (white arrow indicates the absorbable occluder); C: Echocardiogram in the second year after surgery (white arrow indicates that the fully biodegradable occluder has been completely degraded and the ventricular septum is intact). VSD: Ventricular septal defect.
Table 1 Comparisons of general data and postoperative follow-up results between the patients with fully biodegradable occluder and metal occluder for VSD closure
| 2024-08-24T06:15:56.775Z | 2024-05-28T00:00:00.000 | {
"year": 2024,
"sha1": "d64ce776959f1520c864fecb1074bf078b15d4d3",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cc7f0d0d5a285b2d46e9b604d70326475567abe6",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
3119722 | pes2o/s2orc | v3-fos-license | Human Cardiosphere-Derived Cells from Patients with Chronic Ischaemic Heart Disease Can Be Routinely Expanded from Atrial but Not Epicardial Ventricular Biopsies
To investigate the effects of age and disease on endogenous cardiac progenitor cells, we obtained right atrial and left ventricular epicardial biopsies from patients (n = 22) with chronic ischaemic heart disease and measured doubling time and surface marker expression in explant- and cardiosphere-derived cells (EDCs, CDCs). EDCs could be expanded from all atrial biopsy samples, but sufficient cells for cardiosphere culture were obtained from only 8 of 22 ventricular biopsies. EDCs from both atrium and ventricle contained a higher proportion of c-kit+ cells than CDCs, which contained few such cells. There was wide variation in expression of CD90 (atrial CDCs 5–92 % CD90+; ventricular CDCs 11–89 % CD90+), with atrial CDCs cultured from diabetic patients (n = 4) containing 1.6-fold more CD90+ cells than those from non-diabetic patients (n = 18). No effect of age or other co-morbidities was detected. Thus, CDCs from atrial biopsies may vary in their therapeutic potential. Electronic supplementary material The online version of this article (doi:10.1007/s12265-012-9389-0) contains supplementary material, which is available to authorized users.
Introduction
Cardiovascular disease remains the leading cause of death in the Western world [1]. Cardiac stem/progenitor cells, identified in the heart in 2003 [2,3], are primed to repair damaged myocardium. To provide sufficient cells for therapy, cardiac stem/progenitor cells can be expanded in vitro, by selection using cell surface markers such as c-kit [2] or sca-1 [3], or from explanted biopsies via the formation of cardiospheres [4,5]. Stem/progenitor cells isolated using each of these methods have improved cardiac function in animal models [2,3,5,6].
Cardiosphere-derived cells (CDCs) are an heterogeneous population, comprising c-kit+/CD105+ cells, CD90+/ CD105+ cells and a small number of CD31+ and CD34+ progenitor cells [7]. In 2009, Andersen et al. suggested that CDCs did not contain cardiac stem cells but were a combination of cardiac fibroblasts and CD45+ blood-borne cells [8]. However, this was rebutted by Davis et al. [7], who demonstrated that c-kit+, CD31+/CD34+ and CD90+ explantderived cells (EDCs) could be cultured from human, mouse, rat and pig hearts and that rat CDCs were clonogenic and exhibited multilineage potential. Furthermore, they showed that human CDCs, when expanded from endomyocardial biopsies and transplanted into the infarcted mouse heart, differentiated into cardiomyocytes, endothelial and smooth muscle cells [9]. However, it is still uncertain whether biopsy location, increasing grades of cardiac failure or the presence of co-morbid risk factors, such as diabetes or hypertension, can affect the number and characteristics of the CDC population. Markers of cell senescence increase with increasing age and with type 1 diabetes in c-kit+ cells from human and mouse hearts [10][11][12], suggesting that this population of the cardiac stem/progenitor cells may be susceptible to damage.
CDCs have recently been tested in a phase 1 clinical trial, in patients 2-4 weeks after acute myocardial infarction (CADUCEUS trial) [13]. Selected c-kit+ progenitor cells have also been tested in patients with ischaemic cardiomyopathy (SCIPIO trial) [14]. As the heart contains few c-kit+ cells (approximately 1 in 10,000 myocytes [2]), it took 3 to 4 months to culture 1 million c-kit+ cells for the SCIPIO trial [14]. The cells were tested to confirm high expression of c-kit and low indicators of senescence. In contrast, for the CADUCEUS trial, 25 million CDCs were cultured in 36 days. The CDCs were assessed by flow cytometry to confirm high expression of CD105 and low numbers of CD45+ cells. The proportion of c-kit+ cells was not reported. Despite the higher number of cells administered in the CADUCEUS trial, and a significantly reduced infarct size at 6 months, there was no significant improvement in left ventricular ejection fraction (LVEF). Conversely, the SCIPIO trial showed an improvement in LVEF in patients after 4 months, suggesting that selected c-kit+ cells may be more effective than the heterogeneous CDC population.
Here, we cultured CDCs from the atria and ventricles of patients undergoing cardiac bypass surgery to characterise the cell population. We also assessed the effect of increasing severity of cardiac failure, and the existence of co-morbidities, such as diabetes and hypertension, on the number and characteristics of the CDC population obtained.
Methods

Biopsy Collection
Full-thickness right atrial biopsies and left ventricular epicardial biopsies were obtained at the John Radcliffe Hospital, Oxford, from coronary artery bypass graft patients, with informed written consent. Ethical approval was granted by the relevant Research Ethics Committee to obtain cardiac biopsies and conduct this study (REC reference: 07/H0607/95), which was carried out in accordance with the Helsinki Declaration of 1975, as revised in 2000. All human tissue samples and cells were handled, processed and stored under a Human Tissue Authority licence.
Culture of CDCs
Biopsies were placed in Complete Explant Medium (CEM, see Electronic Supplementary Material) on ice and processed within 2-3 h. Gross connective and adipose tissues were removed to leave only cardiac tissue. After washing twice with phosphate buffered saline (PBS, Invitrogen, UK), the sample was cut into 5-mm segments and digested in 0.05 % trypsin (Invitrogen) for 3 min at room temperature. The segments were further minced into 1-mm fragments that were washed again in PBS and plated out as explants onto fibronectin-coated (Sigma, USA) 60-mm Petri dishes (Corning, UK) containing 0.5 mL of CEM. Explants were incubated for 1 h at room temperature to allow adhesion of explants to the fibronectin coating, before a further 1 mL of CEM was added. Explants were cultured at 37°C in 5 % CO₂, with the CEM replaced every 4 days. A layer of long thin fibroblast-like cells spontaneously emerged from the edges of adherent explants, followed by overlying round phase-bright cells. Phase-bright cells were harvested once confluent by washing explants with PBS, then with 1 mL of 0.53 mM EDTA (Versene, Invitrogen), then treating enzymatically with 1 mL trypsin for 5-7 min at 37°C. An additional wash of PBS ensured complete removal of phase-bright cells in addition to some fibroblasts. Explants could be harvested twice, allowing 1 week between harvests. Harvested cells were seeded into poly-D-lysine-coated wells at a concentration of 3×10⁵ cells in 500 μL of Cardiosphere Growth Medium (see Electronic Supplementary Material). Fully formed, loosely adherent cardiospheres were harvested by gentle pipetting and plated onto fibronectin-coated T75 flasks (Corning) for expansion as CDCs to passage 2.
Flow Cytometry
Once confluent at passage 2, CDCs were harvested using trypsin (5 min at 37°C) after washing three times with PBS and once with Versene. Non-specific binding of antibodies was blocked using human FcR block (Miltenyi Biotec, Germany) at a concentration of 100 μL per 1×10⁶ cells, incubated on ice in the dark for 30 min. Cells were washed, suspended in PBS to a final concentration of 2×10⁶ cells/mL and incubated with the appropriate antibody (see Electronic Supplementary Material) for flow cytometric analysis using a BD LSRII flow cytometer (BD Biosciences, UK) equipped with UV, blue and red lasers.
Cardiomyogenic Differentiation
Cardiomyogenic differentiation was induced using cardiomyocyte differentiation medium (CDM; 2 % FBS ESQ (embryonic stem cell-qualified; Invitrogen) and 1 % insulin transferrin selenium in IMDM:DMEM/F12 (1:1, Sigma)) supplemented with 1 mM dimethyl sulphoxide (DMSO). The DMSO-supplemented CDM was changed every 2 days for 6 days. The cells were then washed with PBS to remove dead cells, and 2 mL of CDM supplemented with 0.1 mM ascorbic acid was added to the plate. The medium was changed every 2 days for the following 6 days.
Immunocytochemistry
Conditioned CDCs were grown on Nunc Lab-Tek® 4-well chamber slides precoated with 10 μg/ml fibronectin and fixed with 4 % paraformaldehyde (Sigma) for 10 min on ice. Fixed cells were blocked with 10 % donkey serum (Biosera, UK) in 0.1 % PBS-tween for an hour at room temperature and then incubated with the primary antibody (see Electronic Supplementary Material) diluted in PBS, overnight at 4°C in a humidified chamber. Cells were incubated with the appropriate secondary antibody for an hour at 4°C and the immunofluorescence detected using a confocal microscope (Zeiss Confocal LSM 520 META).
For filamentous actin staining, a rhodamine phalloidin probe was used. Cells were permeabilised with 0.1 % Triton after fixation and blocking for 30 min using a solution of 2 % FBS+2 % BSA in PBS. Rhodamine phalloidin (Invitrogen) was diluted in PBS and added to the cells for 15 min before washing and mounting.
Statistical Analysis
Data are expressed as means ± standard error. Analysis of variance with a Tukey post hoc test, t-tests, and Pearson correlations were performed using Excel and SPSS software. Statistical significance was assumed at p<0.05.
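As an illustration only, the sketch below runs the same sequence of tests (one-way ANOVA with a Tukey post hoc test, a two-sample t-test, and a Pearson correlation) in Python on simulated data; the group names, sample sizes, and all values are hypothetical stand-ins rather than the study's data, and Python is used here in place of the Excel/SPSS workflow the authors describe.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Hypothetical % CD90+ values per patient for three illustrative groups.
    groups = {
        "atrial": rng.normal(50, 15, 18),
        "ventricular": rng.normal(45, 15, 8),
        "diabetic_atrial": rng.normal(79, 8, 4),
    }

    # One-way ANOVA across the groups, then Tukey's post hoc pairwise test.
    f_stat, p_anova = stats.f_oneway(*groups.values())
    values = np.concatenate(list(groups.values()))
    labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
    print(pairwise_tukeyhsd(values, labels, alpha=0.05).summary())

    # Two-sample t-test and a Pearson correlation, as in the text.
    t_stat, p_t = stats.ttest_ind(groups["diabetic_atrial"], groups["atrial"])
    r, p_r = stats.pearsonr(rng.normal(30, 10, 22), rng.normal(3, 1, 22))
    print(f"t = {t_stat:.2f} (p = {p_t:.3f}); r = {r:.2f} (p = {p_r:.3f})")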
Results

Patient Demographics
Full-thickness right atrial biopsies (n = 22; 468±40 mg) and left ventricular epicardial biopsies (n = 22; 164±29 mg) were taken from patients undergoing coronary artery bypass surgery (40-83 years of age), three of whom underwent concurrent valve replacement. Details of the patient population are given in Table 1.
Cell Culture
Biopsies were explanted onto fibronectin-coated dishes for culture of explant-derived cells (EDCs). A bed of fibroblast-like cells grew out from the explants, over which phase-bright cells migrated. Once confluent, EDCs could be harvested for culture as cardiospheres.
The time taken for each stage of the cell culture process and the number of cells produced varied considerably (Fig. 1a, Table S1). Cardiospheres grow slowly when EDCs are seeded at a low density [15], so at least 40,000 EDCs need to be harvested for successful cardiosphere culture. All atrial biopsies generated sufficient EDCs for cardiosphere formation, over 7 to 55 days, but it was only possible to culture cardiospheres from eight ventricular biopsies (denoted group AV).
There was a significant correlation between the number of EDCs produced from atrial and ventricular biopsies from the same patients (Fig. 1b). The time taken to culture confluent atrial EDCs inversely correlated with the doubling time of the resultant CDCs, in that fast-growing EDCs generated fast-growing CDCs (Fig. 1c). There were no correlations between the rate of growth or the number of EDCs or CDCs with age or disease (Table 2).
EDC and CDC Characterisation
Cell surface markers on all CDCs (n = 22) and a subset of EDCs (n = 3) were characterised using flow cytometry (Fig. 2a, b; Tables 2 and 3). EDCs and CDCs consisted predominantly of CD105+ cells, with a wide variation in expression of CD90 (atrial EDCs 26-71 %, ventricular EDCs 38-70 %; atrial CDCs 5-92 % CD90+; ventricular CDCs 11-89 % CD90+; Table S1) and with low expression of c-kit, CD31 and CD34. There were significantly more c-kit+ cells in EDCs than CDCs, from both atrial and ventricular biopsies, and ventricular EDCs contained more c-kit+ cells than atrial EDCs (Fig. 2b; Table 3). EDCs contained 1 % CD45+ cells, which were not detected in the CDC population. There were no other significant differences in expression of cell surface markers in EDCs or CDCs from atrial tissue compared with those from ventricular tissue (Fig. 2b).
The percentage of CD90+ CDCs inversely correlated with the time taken to culture confluent EDCs, indicating that where biopsies produced confluent EDCs relatively rapidly, the resultant CDCs contained more CD90+ cells (Fig. 2c). Predominantly, the atrial biopsies with rapid outgrowth came from hearts from which insufficient ventricular EDCs were produced (denoted group A). CDCs from group A contained 21 % more CD90+ cells than those from group AV (Table 3). Furthermore, the percentage of CD105+ CDCs inversely correlated with the CDC doubling time (Fig. 2d), suggesting that CD105+ cells have a shorter doubling time than CD105− cells.
Atrial CDCs from diabetic patients (n = 4) contained significantly more CD90+ cells (79±8 %) than those from non-diabetic patients (50±5 %; n = 18; Fig. 2e), but there was no other correlation between age or disease and CDC numbers, doubling time or cell surface markers (Table 2). To further investigate differences between CDCs from diabetic and non-diabetic patients, we treated CDCs from non-diabetic (n = 2) or diabetic patients (n = 2) with cardiomyogenic differentiation medium for 2 weeks. Untreated and treated CDCs were stained for CD90, the fibroblast marker discoidin domain receptor 2 (DDR2), smooth muscle actin (SMA) and troponin T (TnnT) (Fig. 3). Confirming the flow cytometric analysis, untreated CDCs from diabetic patients contained more CD90+ cells than those from non-diabetic patients and also contained more cells positive for DDR2. Untreated CDCs also contained cells expressing SMA but few cells positive for TnnT. Following treatment with cardiomyogenic differentiation medium, there was a decrease in the proportion of cells expressing CD90 and SMA and an increase in the number of cells positive for TnnT, but possibly to a lesser extent in the CDCs from diabetic patients.
Discussion
Here, we show that CDCs cannot be cultured routinely from ventricular epicardial biopsies from patients with ischaemic heart disease. Furthermore, both atrial and ventricular epicardial CDCs contain variable numbers of CD90+ cells and few c-kit+ cells.
As has been seen with studies in bone marrow stem cells, there is a disparity between results observed in the single human clinical trial using CDCs [13] and those in animal models [4][5][6][16], where cells were predominantly isolated from young, healthy animals rather than those with heart disease. Additionally, in animal models, there are no potentially confounding pharmacological or surgical treatments, whereas in the clinic, ethical practice mandates that cells be administered in addition to current 'best practice' therapy. Here, we found that CDCs from patients with ischaemic heart disease contained few c-kit+ cells and a higher proportion of CD90+ cells than reported originally [5] (Fig. 2). There are now at least 12 papers reporting cell surface markers on cells expanded from human biopsies using the cardiosphere protocol (Table 4), although not all give details of patient age or disease. While there is consensus that the majority of cells are CD105+, the proportions of c-kit+ and CD90+ cells vary considerably. The effect of comorbid risk factors such as diabetes and hypertension on the types and proportions of cells obtained has not been reported, although it is likely that at least some of the variation results from differences in culture protocols between laboratories, as small changes to the length of time before harvest of EDCs and of culture of cardiospheres may affect the resultant CDC population, even within laboratories [17,18]. It is well documented that c-kit+ cardiac stem/progenitor cells are clonogenic and able to differentiate into cells of the cardiac lineage [19][20][21]. Human CDCs containing few c-kit+ cells are also capable of differentiation towards the cardiomyocyte lineage, as has been shown here and by others [22][23][24] (Fig. 3). However, the increased proportion of CD90+ cells seen in CDCs from diabetic patients and in fast-growing CDCs may indicate that these cells contain a greater proportion of cardiac fibroblasts, as suggested by staining for DDR2 (Fig. 3). Increased fibrosis is observed in the ischaemic [25] and diabetic heart [26] and with age [27], increasing the likelihood of culturing fibroblasts within the CDC population. Here, we found no correlation between age and the proportion of CD90+ cells, but this may be because we did not isolate CDCs from younger, non-ischaemic hearts. Mishra et al. expanded CDCs from the hearts of children with congenital, non-ischaemic heart defects [28], which contained 55-70 % CD90+ cells and showed a decline in the number of c-kit+ cells with age (Table 4).

Fig. 2 Cell surface markers on EDCs and CDCs. a Representative flow cytometry plots for CD117 (c-kit), CD90 and CD105 (with isotype controls in grey) in CDCs from atrial (top) and ventricular (bottom) biopsy samples. b Expression of cell surface markers by EDCs and CDCs from atrial and ventricular biopsies (n = 3 for atrial and ventricular EDCs, n = 22 for atrial CDCs and 8 for ventricular CDCs; *p<0.05 compared with atrial EDCs, †p<0.05 compared with ventricular EDCs). c The time taken for culture of confluent EDCs inversely correlated with CD90 expression in CDCs and d the doubling time of CDCs inversely correlated with CD105 expression in CDCs. e Atrial CDCs from diabetic patients (n = 4) contained significantly more CD90+ cells than those from non-diabetic patients (n = 18; *p<0.05 compared with non-diabetic patients). Error bars show standard errors.
They also showed that administration of CDCs to the infarcted mouse heart improved cardiac function compared with administration of cardiac fibroblasts, emphasising the need to minimise the cardiac fibroblast population in CDCs. Although the formation of cardiospheres was proposed to enhance the stem cell population of EDCs, many cell types have now been shown to form spheres, including myofibroblasts and bone marrow and dermal mesenchymal cells [24]. However, Zakharova et al. found that EDCs from atrial biopsies obtained from patients undergoing cardiac bypass surgery contained high levels of vimentin-positive cells, but that the proportion of these cells decreased after cardiosphere culture [29]. All atrial biopsies yielded CDCs, but we found that only 8 out of 22 ventricular epicardial biopsies yielded sufficient EDCs for CDC culture. Although the epicardium contains progenitor cells, epicardial biopsies require stimulation for significant outgrowth to occur [30]. It may be that the ventricular biopsies from which EDCs were cultured here contained significant amounts of myocardial tissue. Atrial CDCs from hearts from which no ventricular CDCs were cultured contained more CD90+ cells, as did CDCs from rapidly growing EDCs. Atrial tissue contains more fibroblasts than ventricular tissue, and atrial fibroblasts proliferate more rapidly than those from the ventricle [31]. More work is required to establish whether rapid-outgrowth EDCs contain more fibroblasts than those that take longer to migrate from the explant. It is thought that cardiac stem cells predominantly reside in the atria and apex of the heart [32], although we found significantly more c-kit+ cells in the small subset of EDCs from the ventricle than in those from the atrium. For the CADUCEUS trial [13], 25 million CDCs were cultured within 36±6 days from biopsies taken from the endomyocardial septum. Interestingly, here, no atrial or epicardial biopsy yielded that number of CDCs, and the calculated time to reach 25 million CDCs ranged from 47 to 278 days (Table S1). For the SCIPIO trial [14], c-kit+ cells were isolated and expanded from atrial biopsies. The improvement in LVEF measured in that study indicated that c-kit+ cells may be more potent than other cells in the EDC or CDC populations. Here, we saw a decrease in the number of c-kit+ cells between the EDC and CDC stages of expansion. Expression of c-kit has been shown to vary with time in EDCs, peaking at about 21 days after plating in rat EDCs [33], and cardiosphere culture has been reported to increase the proportion of c-kit+ cells [4]. Our data suggest that this may be lost again during expansion as a monolayer of CDCs.
Clearly, further careful characterisation of CDCs expanded from human patients is essential to establish transferable and reproducible cell culture techniques necessary for large multi-centre clinical trials. Lessons from the clinical trials using bone marrow cells have shown that changes to the conditions used for isolation and storage of cells can adversely affect their therapeutic potential [34,35]. Thus, modification to the expansion protocol may be necessary to optimise the cell population produced from both diabetic and non-diabetic patients for maximum therapeutic effect.
"year": 2012,
"sha1": "4d11ff1e9c9aa9e026d039f729da4667fa8e3425",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12265-012-9389-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d11ff1e9c9aa9e026d039f729da4667fa8e3425",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Intraindividual variability in sleep among athletes: A systematic review of definitions, operationalizations, and key correlates
Via systematic review with narrative synthesis of findings, we aimed to document the ways by which researchers have defined, operationalized, and examined sleep variability among athletes. We identified studies in which scholars examined intraperson variability in sleep among athletes via a search of six databases (Web of Science, Embase, Medline, PsycINFO, CINAHL Plus, and ProQuest Dissertations and Theses Global) using a protocol that included keywords for the target outcome (sleep*), population (athlet* OR sport*), and outcome operationalization (variability OR variation OR "standard deviation" OR fluctuate OR fluctuation OR stability OR instability OR reactivity OR IIV OR intraindividual). We complemented this primary search with citation searching of eligible articles. Assessments of study quality captured eight core elements, namely aims/hypotheses, sample size justification, sample representativeness, number of days sleep assessed, measures of sleep and its correlates, missing data, and inferences and conclusions. From a total of 1209 potentially relevant papers, we identified 16 studies as meeting our eligibility criteria. Concept definitions of variability were notably absent from this work and, where available, were vague. Quantitative deviations from one's typical level of target sleep metrics reflected the essence by which all but one of the research teams operationalized sleep variability. We assessed the overall quality of empirical work as moderate in nature. We propose a working definition of sleep variability that can inform knowledge generation on the temporal, day-to-day dynamics of sleep functioning that is required for personalized interventions for optimizing sleep health.
INTRODUCTION
Good sleep is essential for optimal human health and functioning, particularly for elite athletes who experience arduous physical and psychological strain. 1 The American National Sleep Foundation recommends that young adults (18-25 years) and adults (26-64 years) accrue at least 7-9 h of sleep per 24-h cycle 2 and experience a short time to fall asleep after lights out (0-30 min), few awakenings greater than 5 min in duration (<2), reduced wake after sleep onset (0-20 min), and sleep efficiency >85% 3 to reap the full benefits of this health-preventative and restorative bodily function. The extent to which these recommendations generalize to elite athletes remains unknown 4 and is likely challenging to pinpoint because of the diverse, often multifactorial physical and psychological conditioning programs required for certain sports. It is generally accepted among the sport science community that many elite athletes accrue insufficient amounts of sleep 4 and that the quality of their sleep is suboptimal. 5 Much of the available work prioritizes evidence on differences in mean levels of sleep metrics between individuals (interindividual variability) for interventions or strategies, [6][7][8] with little consideration of intraindividual variability for optimizing sleep health. This gap in knowledge limits our potential to generate innovative tactics that might optimize sleep health, that is, multidimensional sleep patterns (e.g., duration, efficiency, and quality) contextualized to personal and contextual factors, which give rise to positive health and well-being. 9 The development of robust interventions for optimizing sleep health requires knowledge on the temporal, day-to-day dynamics of sleep functioning in relation to personal (e.g., training load and psychological factors) and contextual (e.g., air travel and social dynamics) factors. We propose that this knowledge is best acquired via estimates of intraindividual variability (IIV) alongside mean levels in sleep metrics, yet such evidence is fragmented across the literature, making it insufficient for theory and practice.
The scholarly literature on athlete sleep is relatively young yet burgeoning, with approximately 80% of total outputs published since 2011. 1 Scholars have published several systematic reviews and meta-analyses to summarize what is currently known about athlete sleep and its role in athlete functioning. Regarding the importance of sleep for athlete health and performance, one statistical synthesis (k = 77, 227 effects, N = 959) indicated that sleep loss is detrimental to exercise performance (mean %Δ = −7.56%, 95% CI −11.9 to −3.13), with subgroup analyses clarifying the maladaptive nature of sleep deprivation, sleep restriction (combination of late and early), and late restriction (earlier than normal waking), but not early restriction or delayed sleep onset. 11 Among the general population, meta-analytic evidence (k = 72, N = 8608) supports a causal relation between sleep and mental health and specific indices including depression, anxiety, rumination, and stress (≥g = −0.49), as well as a dose-response effect, whereby greater improvements in sleep lead to more adaptive experiences of mental health. 12 Regarding factors that promote optimal athlete sleep, one systematic review of sleep interventions (k = 10, N = 218) found that sleep extension provided the most benefit for performance, with mixed results for napping, sleep hygiene, and post-exercise recovery. 6 In another systematic review and meta-analysis (k = 27, N = 617), narrative synthesis supported the benefits of sleep hygiene, assisted sleep, and sleep extension interventions for sleep, performance, and mood; meta-analytic synthesis of randomized controlled trials (k = 12) supported the effectiveness of sleep interventions, irrespective of their type, on subjective sleep quality (g = 0.62, 95% CI [0.21, 1.02]), reduced sleepiness (g = 0.81, 95% CI [0.32, 1.30]), and decreased negative affect (g = 0.63, 95% CI [0.27, 0.98]), with no meaningful effects on device-assessed or self-reported sleep and aerobic or anaerobic performance. 7 Napping as a specific sleep strategy is also potentially beneficial for physical and cognitive performance, as well as perceptual and psychological factors (k = 36, N = 3489). 11 Collectively, the available evidence supports athletes' sleep as essential to recovery, training, and performance, making it a cornerstone of holistic intervention approaches for the modern athlete.
The modest outcomes of existing sleep strategies or interventions [6][7][8] suggest that our understanding of optimal approaches to maximizing athlete sleep health is underdeveloped. Among the limitations (e.g., small sample sizes, underrepresentation of females, imprecise reporting of interventions) of past work on athlete sleep, we contend that the prioritization of mean levels for the operationalization of sleep is one area that requires immediate attention, because such data provide limited insight into strategies that might optimize sleep health. The focus on mean levels of sleep duration within scientific research is unsurprising given the prioritization of this metric by the American National Sleep Foundation in their guidelines. 2 Strategies informed by mean levels of sleep indices are often based on the presumption that barriers to optimal sleep (e.g., travel across time zones, unfamiliar sleeping environments, and well-being) are static and enduring, and therefore a "one size fits all" approach to mitigating such problems is ideal. The one size fits all approach erroneously assumes that specific strategies will be effective for all people and all types of barriers; this assumption is inadequate for athletes who experience numerous and diverse stressors across the various ecologies of their occupational context (e.g., training, competition, and organizational). For example, sleep hygiene tactics (e.g., regular sleep-wake cycle, optimal sleeping environment) might be possible when athletes remain in the same geographical location for their training and competition schedule, yet challenging or impossible when regular travel is characteristic of their sport (e.g., altering time zones, unusual sleeping environments, and competition scheduling). An alternative approach to sleep intervention is one that embraces complexity and therefore encompasses a personalized repertoire of tactics that can be activated reactively to unanticipated stressors, or proactively to known challenges. Resolving these gaps in knowledge and inadequacies with past work is important because sleep health interventions represent unrealised potential for optimizing athlete performance and health.

*We acknowledge there exist numerous ways by which to operationalize sleep consistency or variability, including others that focus on day-to-day variations in sleep episodes (awake/sleep) on consecutive days, like the sleep regularity index, 10 rather than within-person fluctuations across multiple days.
Intraindividual variability in sleep metrics provides rich information about the in/stability of person-situation dynamics over time and across contexts that is unavailable from mean levels alone. For example, two athletes might accrue roughly equivalent mean levels of total sleep duration (Figure 1) and other core metrics (e.g., onset), yet differ meaningfully regarding the quantitative deviations from their typical level across a finite period (e.g., low vs. high variability). Sleep variability provides unique information on physical and mental functioning beyond that which is explained by habitual estimates of sleep health. 13 Depending on the context, IIV might suggest resistance or maladaptive responses to barriers to optimal sleep, which might be missed by a myopic focus on mean levels alone. Within the context of elite military forces, for example, reductions in IIV in sleep duration and efficiency across the first 7 days following an intensive 3-week selection course provide insight into emergent resilience. 14 Typically, high sleep variability increases the risk of numerous health and behavioral issues, including physical health conditions, body mass, psychopathology, and stress. 13 Knowledge of sleep IIV and its determinants among athletes represents an untapped source of evidence to inform a new generation of individualized sleep interventions that address dynamic, intraindividual networks of barriers and enablers to optimal sleep health.
Sleep patterns are characterized by several metrics across dimensions of continuity (e.g., latency and efficiency), architecture (e.g., rapid eye movement and slow wave), and naps (e.g., duration and frequency per 24 h). 3 Guidelines and recommendations almost exclusively focus on mean levels of these sleep metrics, 1,2 despite evidence suggesting that IIV is salient for health and functioning. 13 Thus, via a systematic review, we aimed to document the ways by which researchers have defined and operationalized sleep variability among athletes. In so doing, we lay the foundations for an empirical and practical shift toward considering sleep IIV within the conversation on sleep health for athletes.
METHODS
We preregistered the protocol for this systematic review on April 6, 2022, via the Open Science Framework (OSF: https://osf.io/ajgsx) and report the results according to the 2020 version of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. We searched six databases (Web of Science, Embase, Medline, PsycINFO, CINAHL Plus, and ProQuest Dissertations and Theses Global) from inception to April 6, 2022, using a protocol that included keywords for the target outcome (sleep*), population (athlete* OR sport*), and outcome operationalization (variability OR variation OR "standard deviation" OR fluctuate OR fluctuation OR stability OR instability OR reactivity OR IIV OR intraindividual). We also completed a backward (reference lists) and forward (citations) search of articles identified as eligible via the primary database search on July 5, 2022.
Selection Criteria
We considered studies eligible for inclusion if they (i) sampled athletes, defined as individuals who are behaviorally engaged in sport, that is, "involving physical exertion and skill as the primary focus of the activity, with elements of competition where rules and patterns of behaviour governing the activity exist formally through organisations and is generally recognised as a sport" 16 ; and (ii) assessed sleep metrics (e.g., duration, quality, and efficiency) daily for at least three nights utilizing either self-report or wearable devices. We excluded studies when (i) they assessed sleep using polysomnography only, because such studies typically reflect controlled experimental environments rather than the complexities of everyday life; (ii) sleep was assessed repeatedly other than via daily measures (e.g., weekly, monthly); (iii) the article was written in any language other than English; (iv) the full text was unavailable via our university library subscriptions or directly from the corresponding author (i.e., after two email requests/reminders, separated by 2 weeks); (v) results were published as an abstract rather than a full text (e.g., dissertation); or (vi) the article presented no new primary data on sleep variability among athletes (e.g., narrative or systematic review, commentary).
Screening approach
SK and DG collaboratively reviewed eligible articles (titles and abstracts) using a web application (Research Screener [https://researchscreener.com]) that enables assessors to screen all research abstracts from scientific databases using machine learning to optimize the review process. 17 Research Screener ranks the abstracts in order of relevance, seeded with relevant existing articles known to the team to meet the screening criteria, and continuously updates the learning algorithm every 50 abstracts screened based on what is deemed in/eligible by the reviewer. [20] Typically, Research Screener is used to optimize the review process, that is, to review no more than 50% of eligible articles (e.g., 21,22); however, we used this application solely to screen articles because of the user-friendly interface.
Data extraction
SK extracted data items from primary studies using a predetermined form, or requested such information from the corresponding author of eligible studies where that information was unavailable in the full text. DG assessed a random sample of 30% of data extraction forms to check accuracy and consistency. We captured information on the publication details (e.g., publication year, study location), nature of the scientific work (e.g., number of nights assessed), sampled participants (e.g., age, sport/athletic pursuit), including application of the tiered Participant Classification Framework, 23 outcome assessments (e.g., definition of sleep variability, method used to assess sleep), and study quality. Using an 8-item tool, we assessed study quality as good, fair, or poor regarding indicators relevant to examinations of sleep variability, namely aims/hypotheses, sample size justification, sample representativeness, number of days sleep assessed, measures of sleep and its correlates, missing data, and inferences and conclusions. 13 The complete data extraction form is available on the OSF project page (https://osf.io/cyafj/).
Protocol deviations
We deviated from our registered protocol in one way. We considered studies eligible for inclusion when authors explicitly stated, or it could be inferred from their narrative, that they were directly interested in sleep variability.
RESULTS

Operationalizations of Sleep Variability
Authors reported an explicit definition of sleep variability in only four of the 16 eligible studies. The most common features of these definitions included "differences within individuals over time" (n = 2) or variation across days (n = 1) or nights (n = 1). The most common operationalizations of sleep variability included the intraindividual standard deviation (n = 8) and the coefficient of variation (n = 6); the other representation included within-individual z scores, calculated as ([individual player's score − individual player's average]/individual player's SD). One paper included no explicit information on how the authors operationalized sleep variability statistically. Researchers assessed sleep primarily via devices (n = 4; e.g., Actigraph), self-report (n = 5), or a combination of both approaches (n = 7).
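For concreteness, a minimal sketch of these three operationalizations applied to nightly sleep-duration records follows; the two athletes and their values are hypothetical (constructed so that both share the same mean, cf. Figure 1), and the function name iiv_metrics is our own.

    import numpy as np

    def iiv_metrics(nightly_minutes):
        """Intraindividual SD, coefficient of variation, within-person z scores."""
        mean = nightly_minutes.mean()
        isd = nightly_minutes.std(ddof=1)   # intraindividual standard deviation
        cv = isd / mean                     # coefficient of variation
        z = (nightly_minutes - mean) / isd  # within-individual z scores
        return mean, isd, cv, z

    # Two hypothetical athletes with identical means but different variability.
    stable = np.array([430, 440, 435, 445, 438, 432, 440], dtype=float)
    variable = np.array([360, 510, 400, 480, 370, 500, 440], dtype=float)

    for name, nights in (("stable", stable), ("variable", variable)):
        mean, isd, cv, _ = iiv_metrics(nights)
        print(f"{name}: mean = {mean:.0f} min, iSD = {isd:.1f} min, CV = {cv:.3f}")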
Key correlates of sleep variability
Regarding key correlates of sleep variability, researchers have examined demographic (e.g., type of athlete), contextual (e.g., time of competition or training session), biological (e.g., nocturnal cardiac activity), physical (e.g., load), and psychological (e.g., well-being, perceived effort) factors. Among the four studies in which the authors explicitly defined sleep variability, two reported intraindividual variability estimates descriptively (9%-22% for sleep duration and 2%-11% for sleep efficiency), with no consideration of determinants or outcomes of variability. Two other studies examined differences in sleep variability between playing levels and between athletes categorized as ir/regular sleepers. Among rugby league players, elite juniors demonstrated greater variability in sleep onset time, time in bed, and sleep duration than both elite seniors and subelite seniors, as well as greater variability in sleep efficiency and subjective sleep quality than elite seniors. 18 Regarding differences among elite team sport athletes, regular sleepers displayed less variability in total sleep time, sleep efficiency, and sleep onset and offset. 5
DISCUSSION
Via a systematic review, we narratively synthesized the literature on sleep variability among athletes, particularly regarding considerations of definition and operationalization. We found that this body of evidence is small relative to work on mean levels of sleep metrics among athletes, for which expert consensus recommendations exist, 1 with all but one of these outputs generated since 2017. Concept definitions of variability were notably absent from this work; where available, definitions were vague and therefore insufficient for guiding robust operationalizations of sleep variability as well as conceptual development and integration of findings. Quantitative deviations from one's typical level of some sleep metric reflected the essence by which all but one of the research teams operationalized sleep variability; this feature is also characteristic of definitions of intraindividual sleep variability in non-athlete populations (e.g., "quantifies daily variation around the mean" 13 (p. 108)). Finally, the overall quality of empirical work is moderate in nature. Current knowledge on key determinants or outcomes of sleep variability is best described as being in its infancy.
Precise and unambiguous definitions of concepts and their operationalization via methodological procedures are fundamental to evidence quality and accumulation, and to the translation of knowledge into practice and policy. 25,26 The absence of an explicit concept definition of sleep IIV within much of the existing research is a major weakness yet, given the infancy of this body of work, represents an opportunity to set the foundations for conceptually robust work in the future. Quality data depend on quality concept definitions, which in turn are a prerequisite for robust theory, measurement, and application. The reasons for the absence of a high-quality definition of sleep variability in existing scientific work are unclear (e.g., the term is regularly spoken in everyday life; the field is in its scientific infancy). Irrespective of such reasons, the field is in urgent need of a scientific definition that provides a precise, clear, and cohesive understanding of the meaning and defining features of sleep variability. 26 Thus, we propose a working definition of sleep variability as a quantitative approximation of the magnitude of temporally heterogeneous deviations for each 24-h sleep cycle from one's typical level of indicators of sleep across a finite period. This definition encompasses several requirements for high-quality concept definitions. 26 We clarify the nature of the phenomenon (magnitude of temporally heterogeneous deviations for each sleep session) and the event to which this property applies (some indicator of people's sleep, e.g., temporal elements like duration or perceptual elements like quality). We also clarify the conceptual theme that summarizes the nature of the necessary and sufficient conditions for IIV in sleep (deviations from one's typical level across a finite period). In this way, IIV reflects within-person fluctuations in metrics of sleep functioning for each sleep episode (typically each night within a 24-h period, including napping) around some measure of central tendency, across a defined temporal period.
Our working definition incorporates key features, evident among all eligible studies identified via our systematic review, of how sport science researchers have operationalized sleep variability in terms of processes, tests, and measurements. The first consideration is the temporal period across which one assesses sleep functioning and for which one can make inferences regarding IIV of specific metrics (e.g., duration and efficiency). We found no clear consensus regarding the minimum or optimal temporal window across which to assess athlete sleep functioning when interested in intraindividual variability.
Others have recommended that at least 7 days 27 are required to estimate sleep variability reliably, though this recommendation is derived solely from an empirical analysis of 166 older adults aged 60 years and over across a 14-day period. Conversely, others found that guidance for the minimum number of nights depends on the sleep metric (e.g., duration, variability), measurement window of interest (e.g., weekly or monthly), and desired reliability threshold. 28 In the absence of robust empirical evidence (e.g., Monte Carlo simulations), we suggest that researchers justify their selection of temporal window according to the research questions and contextual factors. For example, Gucciardi et al. (2021) sampled the 7-day "recovery" period between a 3-week selection course for entry into elite special forces and the start of a subsequent 15-month training cycle because it permitted inferences regarding emergent resilience via reductions in within-person sleep variability during this window. Alternatively, one may need to assess sleep across several nights to capture sufficient information on "on" (e.g., training, competition) and "off" days when interested in the phenomenon of social jet lag (e.g., 29).
The second consideration concerns the ways by which researchers quantify and statistically model IIV in sleep functioning. We found that researchers relied on quantifications that characterize the amplitude or amount of fluctuation, namely the intraindividual standard deviation or coefficient of variation, which are subsequently employed as an aggregate index of variability in statistical models that either ignore (e.g., general linear models) or incorporate (e.g., mixed-effects models) the dependency inherent within repeated measurements of sleep. Despite the simplicity and practical intuitiveness of the intraindividual standard deviation or coefficient of variation as quantifications of the amplitude or amount of fluctuation, they are characterized by several disadvantages (e.g., sensitivity to systematic within-person change, correlation with the overall mean) that limit their usefulness for operationalizing sleep variability, as reviewed elsewhere. 26 Monte Carlo simulations also indicate that indexes of IIV such as the intraindividual standard deviation or coefficient of variation often have poor reliability. 30 Relatedly, utilizing intraindividual standard deviations or coefficients of variation as an aggregate index of variability in statistical models is fraught with danger because it excludes uncertainty in the variability estimate and therefore inflates Type I error. 31,32 Aggregate indices of variability also prevent analysts from incorporating predictors that vary across time, such as daily indices of physical workload and psychosocial stress, and correlations among random effects (e.g., variability and mean levels) that are likely of substantive interest. 31,32 Thus, the two-step approach commonly employed within the sport science literature limits the congruence between concept and statistical modeling, and the repertoire of substantive questions regarding sleep variability that can be addressed. Mixed-effects location-scale models represent an alternative approach to alleviate these shortcomings and maximize congruence between concept and design and test combinations. 32

Our assessment of study quality identified that the weaknesses (e.g., absence of sample size justification or missing data) outweighed the strengths (e.g., device-assessed sleep, quality of key correlates) evident among existing research on sleep variability among athletes. Given the reliance on inferential statistics to test key research questions among this body of work, the absence of sample size justifications means readers are unable to judge the informativeness of the data, given the design and test combination. Sample size justifications are rarely reported in the sport science literature. 33 Within the eligible work summarized here, the median sample size of 36.5 provides 80% power (α = 0.05) to detect moderate-to-large effects (e.g., ~r = 0.44; between-group differences, ~d = 0.66) for statistical models that assume normal distributions, homogeneity of variances, and independence in the data. Accounting for non-independence and/or relaxing the assumption of homogeneity of variances complicates statistical modeling because it involves several fixed and random effects across multiple levels, which is challenging to estimate in the absence of prior work to guide plausible population effects. 34,35
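As a rough cross-check of the sensitivity figure above, the short sketch below solves for the smallest Pearson correlation detectable with 80% power at a two-tailed α of 0.05 using the Fisher z approximation; this is our own back-of-envelope calculation under that approximation, not a procedure reported in the eligible studies.

    import numpy as np
    from scipy import stats

    def detectable_r(n, alpha=0.05, power=0.80):
        # Fisher z approximation: atanh(r) * sqrt(n - 3) must reach z_crit.
        z_crit = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        return np.tanh(z_crit / np.sqrt(n - 3))

    # ~0.45 for n = 37, consistent with the ~0.44 quoted for n = 36.5.
    print(f"Smallest detectable r at n = 37: {detectable_r(37):.2f}")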
Of course, under certain circumstances (e.g., equal cluster sizes), mixed-effects modeling produces approximately identical results to summary-based statistics such as t-tests and linear regression, 36 which can simplify sample size justifications. We also observed an absence of information regarding missing data (whether present or not and, if so, to what degree) on key variables of interest within the eligible body of work. Particularly in applied settings, longitudinal monitoring of factors that occur and likely vary daily is inherently plagued by missing data (e.g., training load). 37 Handling of missing data is potentially problematic, rather than the presence of it per se, because what we do to deal with incomplete data can introduce bias into statistical models and undermine statistical power. 38 Reporting clear sample size justifications, 39 pre-registering method and data analysis protocols, 40 and maximizing transparency in research reporting represent key opportunities for an area of research in its infancy. 41

Strengths of our study include a preregistered protocol, open data, transparency regarding deviations from our intentions, and a search of peer-reviewed literature and dissertations. Nevertheless, the findings reported here are best interpreted within the context of the limitations, including a traditional search approach only (e.g., no direct contact with individual researchers for unpublished work, such as preprints), capture of articles written in English only, and application of a bespoke methodological quality assessment tool. 13
CONCLUSION
Most research on athlete sleep has prioritized the generation of data and knowledge on mean levels of indicators of sleep, which has largely overshadowed research on IIV across a finite period. We argue that this oversight and oversimplification of the essence of athlete sleep has serious implications for knowledge development (theory) and translation into evidence-based strategies (application). Essentially, differences between athletes in mean-level estimates of sleep alone are informative only when IIV is small or trivial; as deviations from one's typical level of indicators of sleep across a finite period increase, and these deviations represent systematic rather than random error, mean-level estimates alone likely offer flawed data for making inferences about differences between people and the effectiveness of strategies to optimize sleep health. Thus, the next frontier of strategies to optimize athlete sleep health demands knowledge of both mean levels and IIV of metrics of sleep functioning.
PERSPECTIVE
Optimized sleep enables biological and psychological restoration from the costs inflicted by everyday activities. 12,42,43 Acute and chronic inadequacies in sleep quantity and quality can trigger negative effects on bodily functioning. For example, healthy individuals who obtain less than 6 h of sleep per night may be at risk of disturbances to bodily functions such as glucose metabolism, immune processes, and cognitive capacity. 44 Thus, identifying ways to intervene before acute sleep issues develop into chronic concerns is imperative, especially for populations (e.g., elite athletes) and contexts (e.g., shift work) where achieving sleep health might be thwarted. Currently, the development and execution of personalized sleep health interventions relies primarily on mean levels of sleep indicators rather than a complementary view that also considers within-person variability in these estimates. As concept definitions underpin high-quality research and practice, we systematically reviewed the literature to examine how researchers have defined sleep variability to date, and we leverage this knowledge to propose a new working definition that conforms to guidelines for high-quality concept definitions. 26
REVIEW REGISTRATION
We preregistered the protocol for this systematic review on April 6, 2022, via the Open Science Framework (OSF: https://osf.io/ajgsx).
FIGURE 1: Hypothetical example of meaningful differences in intraindividual variability in sleep duration (minutes) across a 7-night period for three individual athletes who share the same mean.
There were both strengths and limitations to other study features, including aims and hypotheses (50% good ++, 50% fair +), sample representativeness (100% fair +), and number of days (nights of sleep) assessed (62.5% fair [≥7 & <14], 25% […]).
FIGURE 2: PRISMA flow diagram.
FIGURE 3: Visual depiction of study quality assessments.
"year": 2023,
"sha1": "b4747708ccc5dbf7d6e219ccf9f2172fe392c0f9",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/sms.14453",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "7bea99104f14137582ec8c47f4f256665d9cde30",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Male circumcision and prostate cancer: A geographical analysis, meta-analysis, and cost analysis
Introduction: Attempts to find an association between male circumcision and prostate cancer risk have produced inconsistent results. Methods: Age-standardized prostate cancer incidence, life expectancy, geographical region, and circumcision prevalence from 188 countries were compared using linear regression analysis. Following a systematic literature review, a meta-analysis was performed on studies meeting inclusion criteria, with evaluations of between-study heterogeneity and publication bias. A cost analysis (discounted at 3% and 5% per annum) was performed using the meta-analysis's summary effect and upper confidence interval (CI). Results: Univariate analysis revealed a trend for a positive association between country-level age-standardized prostate cancer incidence (per 100 000 person-years) and circumcision prevalence (β=0.0887; 95% CI -0.0560, +0.233), while multivariate analysis found a significant positive association (β=0.215; 95% CI 0.114, 0.316). Twelve studies were included in the meta-analysis. The random-effects summary odds ratio was 1.09 (95% CI 0.95, 1.23; between-study heterogeneity χ²(12)=22.92; p=0.029; I²=43.3%). There was no evidence of publication bias. Cost analysis found infant circumcision was prohibitively costly, returning only between 1.4¢ and 12.5¢ for each dollar expended. Conclusions: Circumcision may be a positive risk factor on geographical analysis, but not in case-controlled studies. Circumcision is not economically feasible for preventing prostate cancer.
Introduction
Prostate cancer is a common malignancy in elderly men, the primary cause of which is not clearly known. Dietary habits, behavioral factors, and racial identification have been identified as potential risk factors. 1 The notion that circumcision may reduce the risk of prostate cancer originated in 1942 with Abraham Ravich, who repeated these same "findings" over the next 25 years. [2][3][4] Since then, a handful of studies have evaluated the contribution of circumcision status to the risk of prostate cancer, with mixed results. In their 1989, 1999, and 2012 policy statements on neonatal circumcision, the various circumcision Task Forces of the American Academy of Pediatrics (AAP) fail to mention prostate cancer. [5][6][7] In its 2018 guidelines on neonatal circumcision, the Canadian Urological Association noted "a borderline association on univariate analysis between circumcision and prostate cancer risk." 8 Attention to this issue has generated informal geographical analyses of the incidence of prostate cancer, 9,10 a formal geographical analysis of prostate cancer mortality, 11 two concurrent meta-analyses, 12,13 and an informal cost analysis. 14 Subsequently, more epidemiological data have become available. This report will update and more thoroughly analyze any association between male circumcision and prostate cancer.
Methods

Geographic analysis
Country-level epidemiological data on the age-standardized incidence of prostate cancer were obtained from the International Agency for Research on Cancer (IARC) 2012 GLOBOCAN report. 15 Circumcision prevalence by country was procured from previously published estimates. 16 World Health Organization 2016 estimates of life expectancy by country were used. 17 The population weight assigned to each country was calculated by taking the number of cases reported in the IARC report for that country and dividing it by the age-standardized incidence (per 100 000 person-years). Univariate and multivariate linear regression models were developed using age-standardized incidence (per 100 000 person-years) as the dependent variable and circumcision prevalence, male life expectancy, and the region in which the country resides (Middle East, sub-Saharan Africa, Europe, Asia, North America, South America, Australia/New Zealand, Central America/Caribbean, and Pacific Islands) as independent variables (see Supplemental Table). Analysis of residuals was performed.
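A minimal sketch of this weighted model, assuming a weighted least-squares formulation with statsmodels in Python, is shown below; the country rows, region coding, and all numeric values are hypothetical placeholders rather than the study's dataset, and the variable names are our own.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy country-level rows (all values hypothetical, for illustration only).
    df = pd.DataFrame({
        "asr":       [96.6, 85.0, 10.5, 30.2, 62.0, 25.0],  # incidence per 100 000 p-y
        "circ_prev": [0.02, 0.80, 0.15, 0.95, 0.30, 0.10],  # circumcision prevalence
        "life_exp":  [78.3, 76.1, 68.4, 74.0, 72.0, 70.5],  # male life expectancy
        "region":    ["Europe", "Americas", "Asia", "Europe", "Americas", "Asia"],
        "cases":     [4600, 230000, 47000, 8000, 13000, 20000],
    })
    # Population weight for each country = reported cases / age-standardized rate.
    df["weight"] = df["cases"] / df["asr"]

    model = smf.wls("asr ~ circ_prev + life_exp + C(region)",
                    data=df, weights=df["weight"]).fit()
    print(model.params)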
Meta-analysis
Following the recommendations of Stroup et al. for the meta-analysis of observational studies, 18 we identified studies in a MEDLINE search using the search terms "circumcision" and "prostate cancer" on December 16, 2018. Additional studies were identified using the bibliographies of studies identified in the search and by surveying researchers in the field. Inclusion criteria included randomized clinical trials, cohort studies, cross-sectional studies, and […] 19 indicating the potential for confounding. Consequently, comparisons between Jewish men and other populations were excluded.
When available, the primary analysis was performed using raw data. In one case, the raw data were obtained by contacting the study's lead author. 20 If within a study a clear distinction was evident between strata, each stratum was included separately.
DerSimonian and Laird random-effects summary results and between-study heterogeneity were calculated using the Mantel-Haenszel method. 21 To test for potential outliers, the dataset from each publication was individually excluded from the analysis to measure the impact on measures of between-study heterogeneity. The exclusion of a study would be justified by a statistically significant reduction of the between-study heterogeneity χ². Sensitivity analysis was performed with each of these studies excluded. The number of studies and the percentage of participants excluded to reach I² thresholds of 50% and 25% were estimated. 22 Meta-regression of study characteristics was performed. 23 Publication bias was assessed using funnel graphs and linear regression analysis, 24 funnel plot regression, 25 and an adjusted rank correlation test. 26 Any adjustments for publication bias were performed using the "trim and fill" method. 27 Linear regression analyses, assessment of publication bias, and meta-regression analyses were performed using SAS version 8.02 (SAS Institute, Cary, North Carolina). All reported p-values are two-sided.
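A minimal sketch of DerSimonian-Laird random-effects pooling, with Cochran's Q and I² as heterogeneity measures, follows; it uses a generic inverse-variance implementation (the Mantel-Haenszel weighting the authors cite differs in detail), and the log odds ratios and variances are hypothetical placeholders.

    import numpy as np

    def dersimonian_laird(log_or, var):
        w = 1.0 / var                                   # fixed-effect weights
        theta_f = np.sum(w * log_or) / np.sum(w)
        q = np.sum(w * (log_or - theta_f) ** 2)         # Cochran's Q
        k = len(log_or)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
        w_r = 1.0 / (var + tau2)                        # random-effects weights
        theta = np.sum(w_r * log_or) / np.sum(w_r)
        se = np.sqrt(1.0 / np.sum(w_r))
        i2 = 100 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
        return np.exp([theta, theta - 1.96 * se, theta + 1.96 * se]), q, i2

    log_or = np.log(np.array([1.05, 0.90, 1.30, 1.10, 0.85]))  # hypothetical ORs
    var = np.array([0.02, 0.05, 0.04, 0.01, 0.06])             # hypothetical variances
    (or_, lo, hi), q, i2 = dersimonian_laird(log_or, var)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f}); Q = {q:.2f}, I2 = {i2:.1f}%")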
Cost analysis
A hypothetical model of one million men was constructed using the age-standardized incidence of prostate cancer (96.6 per 100,000 person-years) and the life expectancy (78.3 years) in Finland, going from a circumcision rate of zero to 100%. The cost of an infant circumcision has been estimated to be $285. 28 The average cost of treating prostate cancer was assumed to be $20,000. The average age of detection of prostate cancer and initiation of treatment is 70 years. The attributable proportion was calculated ((summary odds ratio - 1)/summary odds ratio) to determine the number of cases of prostate cancer averted through circumcision. The summary random-effects odds ratio and the upper 95% confidence interval from the meta-analysis were both entered into models that employed 3% and 5% per annum discount rates.
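Because the model reduces to a few lines of arithmetic, it can be sketched directly from the inputs stated above; the snippet below recovers the figures reported in the Results (approximately 75,638 expected cases and returns of roughly 1.4¢ to 12.5¢ per dollar). The variable names are our own.

    # All inputs below are taken from the text of the cost model.
    pop, asr, life_exp = 1_000_000, 96.6 / 100_000, 78.3
    circ_cost, cancer_cost, onset_age = 285.0, 20_000.0, 70

    expected_cases = pop * asr * life_exp               # ~75,638 lifetime cases
    for or_ in (1.09, 1.23):                            # summary OR; upper 95% CI
        ap = (or_ - 1) / or_                            # attributable proportion
        averted = ap * expected_cases                   # cases averted at 100% coverage
        savings = averted * cancer_cost                 # treatment costs avoided
        for rate in (0.03, 0.05):
            outlay = pop * circ_cost * (1 + rate) ** onset_age  # opportunity cost at 70 y
            print(f"OR {or_}, {rate:.0%} discount: {averted:,.0f} cases averted, "
                  f"{100 * savings / outlay:.1f} cents returned per dollar")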
Results

Geographical analysis
The results of the univariate and multivariate linear regressions are presented in Table 1. These results indicate that circumcision prevalence is positively associated with the incidence of prostate cancer. A significant positive correlation was found between life expectancy and prostate cancer incidence (β=2.46; 95% CI 1.71, 3.21; t=6.44; p<0.0001), as well as a negative correlation between life expectancy and circumcision prevalence (β=-0.0512; 95% CI -0.0766, -0.0258; t=-3.98; p=0.0001). This indicates that men in countries with a higher circumcision prevalence did not live as long. This association did not persist when adjusted for the various regions of the world (β=-0.0249; 95% CI -0.0594, +0.00441; t=-1.70; p=0.09). There was no significant interaction (effect modification) between circumcision prevalence and life expectancy (p=0.84). Region of the world was significantly associated with life expectancy, circumcision prevalence, and the incidence of prostate cancer (data not shown). An examination of the residuals found they were normally distributed, with Mexico, the United States, China, and India as significant outliers (rstudent > 4).
Meta-analysis
Fifty-one publications were identified using the MEDLINE search. Of these, only seven met the inclusion criteria. 20,[29][30][31][32][33][34] An additional three studies were identified using bibliographies. [35][36][37] A further two studies were identified through contacting researchers in the field. 19,38 All studies identified were case-control studies whose characteristics are listed in Table 2.
Two studies were identified as potential outliers. 30,36 When either of these studies is removed, the I 2 drops below 25%. When both of these studies were removed from the meta-analysis, the I 2 was zero and the summary effect odds ratio was 1.10 (95%CI: 1.06, 1.16).
For each study, the natural logarithm of the odds ratio was plotted on the x-axis against the inverse of the variance on the y-axis in Figure 1. This funnel plot appears symmetrical. There is no evidence of publication bias in Begg and Mazumdar's adjusted rank correlation test (original: p=0.81, alternate: p=0.58), the linear regression analysis of Egger and associates (unweighted: p=0.82; weighted: p=0.93), or that of Macaskill, Walter, and Irwig (unweighted: p=0.99; weighted: p=0.67). A "trim and fill" evaluation found no evidence of a missing study.
Meta-regression evaluated the impact of publication before versus after the introduction of screening with the prostate-specific antigen (PSA) test and found no statistically significant difference (t=0.42, p=0.68). There was no statistically significant difference between studies that were population-based as opposed to institution-based (t=0.89, p=0.39).
Cost analysis
Based on the meta-analysis, with the summary random-effects odds ratio estimated at 1.09 and the upper 95% confidence limit at 1.23, the attributable proportion would range from 8.26% to 18.70%. In the population of one million over 78.3 years, we would expect 75,638 men to develop prostate cancer. Going from a circumcision rate of zero to 100% would theoretically prevent between 6,245 and 14,141 cases of prostate cancer, with a savings of between $125 million and $283 million. The opportunity costs of expending $285 million on neonatal circumcision would be $2.26 billion (3% discount) to $8.67 billion (5% discount) at 70 years of age. For every dollar spent on circumcision one would expect to save 5.5¢ to 12.5¢ (3% discount) or 1.4¢ to 3.3¢ (5% discount). For circumcision to be cost-saving, either a circumcision would need to cost less than $15.78 to $35.72 (3% discount) or $4.11 to $9.30 (5% discount), or the average cost of treating a case of prostate cancer would need to exceed $159,819 to $361,890 (3% discount) or $613,111 to $1,388,311 (5% discount).
Discussion
The geographical analysis discovered a positive association between circumcision prevalence and the incidence of prostate cancer. The meta-analysis of 12 studies failed to document a significant association between male circumcision and prostate cancer incidence. The cost analysis found infant circumcision to be a prohibitively costly option to avert prostate cancer. This is the fourth published geographical analysis of country-level data on prostate cancer incidence and circumcision prevalence. Morris and colleagues reported an analysis attributed to Waskett of 51 countries that documented a significant negative association between circumcision prevalence and prostate cancer incidence (p=0.02). 9 This analysis was subsequently expanded to 181 countries with similar results (p<0.0001). 10 The source of the circumcision prevalence data and the methods of calculation were not provided in either of these reports. An analysis of the current dataset that does not weight each country's datapoint for population size was able to replicate their results (β=-0.311, 95%CI: -0.205, -0.418, t=-5.75, p<0.0001), but such an unweighted model is not informative, as it attributes to Comoros as much influence on the final estimate as to China.
Wachtel, Yang, and Morris published a geographical analysis looking at the prevalence of male circumcision (stratified into rates less than 20%, 21% to 80%, and more than 80%) on prostate cancer mortality. 11 Although their specific methodology is not stated, it appears that they used a Poisson regression model adjusting for gross per-capita national income, religion, and WHO region. It is unclear from their report how the calculations were made, or what the results mean. No account is given for life-expectancy. More importantly, the results of the analysis do not support the conclusions reached by the authors. 39 The report focused only on mortality, which may be impacted by a number of factors, including life-expectancy.
Two previous meta-analyses have been published. A 2015 meta-analysis of seven studies found a non-significant reduction of prostate cancer risk in circumcised men (OR=0.88, p=0.19, I 2 =65%). 12 A 2016 meta-analysis of six case-control studies found a significantly lower prevalence of circumcision in prostate cancer patients compared with controls (OR=0.90, 95%CI: 0.82, 0.98). 13 While one of these analyses suggested a difference before and after the introduction of prostate-specific antigen (PSA) testing, the meta-regression analysis in the current study failed to
find a difference. The 2015 meta-analysis failed to include one study, 32 and the 2016 analysis failed to include three studies 29,30,32 that should have been identified in even the most cursory PubMed search.
Because prostate cancer has been associated with sexually transmitted infections (with the speculation that these infections ascend to the prostate, resulting in local irritation and malignancy), and because sexually transmitted infections have been purported to be more common in intact men, circumcision advocates have speculated that circumcision reduces the risk of prostate cancer. 9,11,14 This theory, which follows in the footsteps of the theory that smegma causes prostate cancer, 4,40 has several deficiencies. First, prostate cancer usually arises in the posterior lobe, which is the furthest away from the urethra. 19 If the ascending infection theory were true, one would expect the portion of the prostate closest to the urethra to host more carcinomas.
Second, the irritation theory is not well supported by the inconsistent empirical evidence. [41][42][43][44][45] A 2014 meta-analysis found men with a history of any sexually transmitted infection at a significantly greater risk for prostate cancer. When broken down into specific infections, a history of gonorrhea was associated with a significantly increased risk of prostate cancer, but no significant differences were seen for Treponema pallidum, Chlamydia trachomatis, Trichomonas vaginalis, Ureaplasma urealyticum, Mycoplasma hominis, herpes simplex virus type 1 or type 2, human herpes virus 8, or cytomegalovirus. 46 Circumcision has no association with a history of gonorrhea. 47 While one study reported an association between prostate cancer and a positive serology for HPV16 and HPV18 but not other strains of HPV, 45 the same group of researchers subsequently failed to confirm any association. 48 Other studies have been unable to detect HPV DNA in human prostatic malignancies. 49,50 Morris and colleagues 9 emphasized the role of Moloney murine leukemia virus in prostatic cancer in patients with a genetic variant of HPC1 that encodes RNaseL. 51,52 Hohn and colleagues were unable to replicate these findings. 53 Lee and colleagues subsequently found that the virus, which is not a naturally acquired human infection, was a contaminant. 54 The publications purporting this theory 51,52 were in turn retracted. 55,56 It is noteworthy that Wright, Lin, and Stanford "believe there is strong biological plausibility for a relation between circumcision and the risk of [prostate cancer]," yet their own study failed to confirm this (OR=1.05, 95%CI: 0.87, 1.27). 33 Third, the speculative infection theory does not explain the increased prostate cancer mortality among Roman Catholic priests. 41 Fourth, men infected with HIV are at greater risk for other sexually transmitted infections and at greater risk for infection-related cancers, yet large-scale prospective studies of the risk of prostate cancer in HIV-infected men have found that these men have significantly lower
incidence of prostate cancer than the general population. 57 If prostate cancer risk were related to infection, we would expect the incidence to be greater in HIV-infected men. 58 Finally, the assertion that intact men are at greater risk of sexually transmitted infections is debatable. Urethritis is more common in circumcised men, HPV infects intact and circumcised men equally, and the studies on Trichomonas vaginalis are inconsistent. 46 Furthermore, if circumcision were effective in lowering prostate cancer risk, one would have expected a drop in the incidence of prostate cancer as the circumcision rate of men entering their seventh and eighth decades of life increased in the United States. Instead, between 1987 and 1992 the age-standardized incidence rate of prostate cancer increased from 102.9 per 100,000 person-years to 189.4 per 100,000 person-years. 59
Weaknesses in these analyses include reliance on estimates, rather than actual data collection, to determine circumcision prevalence in each country. These estimates were determined by individuals with a strong history of circumcision advocacy. One estimate stands out: the 14% circumcision prevalence in China. Because of China's large population, the linear regression was recalculated using a circumcision prevalence of 2.7%. This change slightly altered the overall finding of the geographical analyses (univariate: β=0.136, 95%CI: 0.000754, 0.274, t=1.96, p=0.0513; multivariate: β=0.236, 95%CI: 0.142, 0.330, t=4.96, p<0.0001).
Weighting country-specific datapoints by population size avoids allowing small countries to have the same influence on the estimate as larger countries. The caveat with weighting each datapoint is that the quality of the data collected in each country may vary. If poor-quality data are collected in a populous country, this weakness may be amplified because of the country's size. Similarly, well-collected data from a smaller country would not receive the weight it may deserve. This concern also applies to studies included in a meta-analysis: the weight given to a study in a meta-analysis is based on variance, which is based on the number of participants in a study, not on the quality of the data collection.
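To make the weighting scheme concrete, a population-weighted least-squares fit of the country-level model can be written in a few lines. The sketch below uses synthetic data, since the actual country dataset is not reproduced here; the regressors mirror those in Table 1 (circumcision prevalence and life-expectancy).

import numpy as np

rng = np.random.default_rng(0)
n = 50
circ = rng.uniform(0, 100, n)                        # circumcision prevalence (%)
life = rng.uniform(55, 85, n)                        # life expectancy (years)
inc = 0.2 * circ + 2.0 * life + rng.normal(0, 10, n) # synthetic incidence
pop = rng.lognormal(16, 1.5, n)                      # country populations -> weights

X = np.column_stack([np.ones(n), circ, life])
w = pop / pop.sum()                                  # population-size weights
XtW = X.T * w                                        # apply weights row-wise
beta = np.linalg.solve(XtW @ X, XtW @ inc)           # weighted normal equations
print("intercept, circumcision slope, life-expectancy slope:", beta)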
Average life-expectancy was included in the regression model because prostate cancer incidence had a significant positive association and circumcision prevalence had a significant negative association with life-expectancy. If a country has a low life-expectancy, fewer men will reach an age at which prostate cancer presents itself. This phenomenon should be captured by using calculated age-standardized rates. The positive association between age-standardized rates and life-expectancy indicates that age may either have a multiplier effect on prostate cancer incidence or be a marker for favorable socio-economic conditions. Life-expectancy remained a significant factor in both bivariate and multivariate analyses.
The studies included in the meta-analysis had a number of methodological weaknesses. Nearly all relied on patient report to determine circumcision status. Some studies used patients with other types of medical conditions as the control group. 19 For the future, studies that determine circumcision status based on physical examination are needed. We also need national estimates of circumcision prevalence that are not based merely on speculation.
The link between circumcision status and the risk of prostate cancer is tenuous at best. A 2016 review of prostate cancer in The Lancet fails to mention circumcision. 60 It was ignored by the American Academy of Pediatrics in 1989, 1999, and 2012, 5-7 but was resurrected by the Centers for Disease Control and Prevention in 2014. 61 Raising the possibility that circumcision may reduce the risk of prostate cancer is attractive to circumcision advocates because prostate cancer is a relatively common cancer in men. Demonstrating even a limited association between circumcision and prostate cancer risk can result in substantial numbers across a population. When asked why he robbed banks, Willie Sutton quipped, "Because that's where the money is." Given the vested interests and the potential financial payout, circumcision proponents are unlikely to back away from using prostate cancer as an excuse to perpetuate the practice. | 2020-02-06T09:07:27.404Z | 2020-02-04T00:00:00.000 | {
"year": 2020,
"sha1": "77e96ecfa5fd88a0c2edab7613e874c997f73b3f",
"oa_license": null,
"oa_url": "https://cuaj.ca/index.php/journal/article/download/6126/4313",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "df4aa96769609258760bf645d8fdf327a2f72f4c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218797079 | pes2o/s2orc | v3-fos-license | Research on New Macro-Moving 3D Mouse Based on 3-UPS Space Parallel Mechanism
This paper describes a new macro-moving 3D mouse with an STC89C52 chip as the control core, combined with an A/D conversion chip, sensors, a USB interface, and a 3-UPS space parallel mechanism. The 3D mouse offers clear motion perception, good real-time performance, high reliability, convenient operation, and low cost, meeting user needs and having broad application prospects.
Introduction
With the rapid development of 3D technology, 3D operating systems have emerged, so it is necessary to develop a multi-dimensional mouse with spatial operation ability to match them. Most existing six-dimensional input devices on the market suffer from complex operation, low control accuracy, and a lack of operational realism [1]. In order to overcome these problems, this paper designs a macro-moving 3D mouse with six-dimensional spatial operation ability, which gives the operator a more realistic sense of movement and rotation within the macro-moving range [2][3]. The new macro-moving 3D mouse can synchronously detect three-dimensional displacements and three-dimensional angles, and can be applied broadly in industrial multi-dimensional virtual assembly, robotics, computer-aided design, 3D games, and future 3D operating systems. It will also greatly promote the transition to market of domestic research achievements with independent intellectual property rights in macro-moving 3D mice. Figure 2 shows the schematic diagram of aircraft simulation and simulated flight. The user can freely use the 3D space mouse: the movement of the flight simulator in the control system is realized using virtual reality technology, producing three-dimensional movement and rotation of the aircraft in a virtual terrain environment, so that the operator truly feels as if controlling the aircraft in flight, thus simulating aircraft movement in the real world [4].
Circuit design
The 3D mouse is mainly composed of three modules, namely the MCU control module, the communication interface, and the computer module. The MCU control module uses the low-power, high-performance 8-bit STC89C52 microcontroller, which has 8K of in-system programmable flash memory that can be programmed directly through the serial port, greatly saving program debugging time. The sensors are potentiometers, which convert a resistance change into a voltage change that is sent to the microcontroller through the A/D conversion chip. Because the angle or displacement of the potentiometer corresponds one-to-one with voltage, the rotation direction of the potentiometer does not need to be sensed, and since the A/D conversion chip provides multi-channel acquisition with high resolution, the circuit design is simple, the cost is low, and the mouse has high resolution. CH375 is chosen as the USB communication chip; on the local end it has an 8-bit data bus, read, write, and chip-select control lines, and an interrupt output, so it can be easily connected to the system bus of an SCM/DSP/MCU/MPU controller. The 12-bit TLC2543 is selected as the A/D chip.
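The one-to-one mapping from potentiometer position to ADC code mentioned above is straightforward to express. The sketch below shows the conversion from a raw 12-bit TLC2543 reading to a voltage and a joint angle; the reference voltage and the electrical travel of the potentiometer are assumed values for illustration.

V_REF = 5.0          # assumed ADC reference voltage (volts)
FULL_SCALE = 4095    # 12-bit converter: codes 0..4095
ANGLE_RANGE = 300.0  # assumed electrical travel of the potentiometer (degrees)

def code_to_voltage(code: int) -> float:
    """Convert a raw 12-bit ADC code to volts."""
    return V_REF * code / FULL_SCALE

def code_to_angle(code: int) -> float:
    """Convert a raw code to a joint angle.

    The mapping is monotonic and one-to-one, which is why no separate
    direction sensing is needed for the rotation of the potentiometer.
    """
    return ANGLE_RANGE * code / FULL_SCALE

print(code_to_voltage(2048), code_to_angle(2048))  # mid-scale reading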
Structural design
2.2.1 Multi-dimensional parallel mechanism design. The mechanical body of the new macro-moving 3D mouse adopts a spatial parallel mechanism. By collecting the relevant displacement signals of each guide rail in real time, the kinematic solution of the parallel mechanism calculates its real spatial six-dimensional input, and the working area is divided into three parts. As shown in figure 3, the parallel mechanism of the six-dimensional position and attitude sensor is composed of a moving platform, a base, and multiple branches connecting the moving platform and the base. The moving platform can move or rotate relative to the base. By changing the number of branch chains and their structural form, the moving platform can be given motion characteristics of different dimensions. Therefore, it is necessary to study the construction method of parallel mechanisms with a given motion dimension (such as 2-6 dimensions). Since the multidimensional space mouse requires six degrees of freedom of movement in space, a 3-UPS parallel mechanism is chosen (3 denotes three identical branch chains connecting the moving platform and base, U the Hooke hinge, P the moving pair, and S the ball joint), with the motion platform operated as a ball head, ensuring that the multidimensional space mouse is compact, small, and easy to operate [5][6]. This paper mainly studies the parallel mechanism with 6D motion based on the 3-UPS parallel mechanism, to meet the needs of different applications, different degrees of freedom, and different dexterity. Therefore, the "six-dimensional position and attitude sensor" is taken as the core mechanism, the kinematics model between the moving platform and the base is established, the dexterity of the mechanism and the minimum overall volume are taken as the comprehensive optimization objectives, and the key parameters of the mechanism are determined through multi-objective optimization [7].
2.2.2 Optimization design of key parameters of the multidimensional parallel structure.
The main structure of the six-dimensional position and attitude sensor shown in figure 4 is a 3-UPS parallel mechanism, which is composed of a moving platform, a static platform and three identical UPS branches. Hooke hinge U is composed of two rotating pairs whose axes are respectively perpendicular to and parallel to the plane of the static platform, and the two axes intersect at a point [8].
Kinematic analysis of the 3-UPS parallel mechanism includes two aspects. On the one hand, the kinematic forward solution of the 3-UPS parallel mechanism solves the position and attitude of the moving platform relative to the reference coordinate system when the geometric parameters and joint variables of the mechanism are known. On the other hand, when the geometric parameters of the mechanism and the position and attitude of the moving platform relative to the reference coordinate system are known, the joint variables needed to reach that position and attitude are solved; this is called the inverse kinematic solution of the 3-UPS parallel mechanism. Solving for the position of the moving platform is relatively simple, while solving for the attitude is more complex. Commonly used descriptions include the rotation matrix, the twist angle, and Euler angles. Describing the attitude of the moving platform with a rotation matrix requires nine parameters, whereas the twist angle and Euler angles require only three; moreover, all Euler angle rotations are described relative to the moving coordinate system, which is very convenient when implementing 3D virtual mouse control in computer programming. Therefore, this design uses Euler angles relative to the moving platform to describe the attitude with respect to the reference frame.
In the mechanism diagram shown in fig.4, A, B, and C are the rotation-axis center points of the Hooke hinges U, and D, E, and F are the centers of the ball joints S; each triple forms an equilateral triangle, with centers O 1 and O 2 respectively. Dx, Ex, and Fx are the projections of D, E, and F onto the plane determined by A, B, and C; h1, h2, and h3 are the distances between the point pairs AD, BE, and CF; α1, α2, and α3 are the acute angles between the projections of AD, BE, and CF on the plane determined by A, B, and C and the lines AO 1 , BO 1 , and CO 1 ; β1, β2, and β3 are the angles between AD, BE, and CF and the plane determined by A, B, and C. To facilitate analysis and calculation, the static coordinate system O 1 X 1 Y 1 Z 1 and the moving coordinate system O 2 X 2 Y 2 Z 2 shown in fig.4 are established, with O 1 and O 2 the origins of the static and moving coordinate systems respectively.
Since the joint variables of the six-dimensional controller are obtained through the sensors installed at the two rotating pairs of the Hooke hinge U and at the moving pair P, the inverse kinematics solution of the 3-UPS parallel mechanism is used in this paper. The coordinates of the center point of the moving platform (x, y, z) and the Euler angles α, β, and γ are known variables, while the mechanism joint variables α1, α2, α3, β1, β2, β3, h1, h2, and h3 are unknown. According to the coordinate transformation formula, the position vectors O 1 D, O 1 E, and O 1 F in the static coordinate system O 1 X 1 Y 1 Z 1 are related to the vectors O 2 D, O 2 E, and O 2 F in the moving coordinate system and the origin offset O 1 O 2 by O 1 P = R·O 2 P + O 1 O 2 (1), where P stands for D, E, or F and R is the rotation matrix formed from the Euler angles (α, β, γ). If the side length of the equilateral triangle ABC is a and the side length of DEF is b, the coordinates of D, E, and F in the moving coordinate system are (-b/2, -√3b/6, 0), (b/2, -√3b/6, 0), and (0, √3b/3, 0). Substituting these coordinates into equation (1) yields the coordinates of D, E, and F in the static coordinate system, from which the angles between ADx, BEx, and CFx and the x 1 axis, together with their projections on the x 1 axis, can be solved. Formulas (2)-(8) then constitute the inverse kinematic solution of the 3-UPS parallel mechanism: when the pose of the moving platform relative to the reference coordinate system is known, the value of each moving joint variable can be determined.
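Since formulas (2)-(8) are not reproduced here, the inverse kinematics can instead be sketched numerically. The following is a minimal illustration, assuming a Z-Y-X Euler convention and centroid-origin equilateral platforms; the side lengths and the exact angle conventions for α i and β i are placeholders rather than the paper's formulas.

import numpy as np

def euler_to_R(alpha, beta, gamma):
    """Rotation matrix from Euler angles (Z-Y-X convention assumed here)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    return Rz @ Ry @ Rx

def triangle(side):
    """Vertices of an equilateral triangle of the given side, centroid at origin."""
    return np.array([[-side / 2, -np.sqrt(3) * side / 6, 0.0],
                     [ side / 2, -np.sqrt(3) * side / 6, 0.0],
                     [ 0.0,       np.sqrt(3) * side / 3, 0.0]])

def inverse_kinematics(pose, a=0.10, b=0.05):
    """Return leg lengths h_i and two hinge angles per leg for a platform pose."""
    x, y, z, al, be, ga = pose
    A = triangle(a)                                  # base points A, B, C
    R = euler_to_R(al, be, ga)
    D = triangle(b) @ R.T + np.array([x, y, z])      # eq. (1): rotate, then translate
    legs = D - A                                     # vectors AD, BE, CF
    h = np.linalg.norm(legs, axis=1)                 # moving-pair variables h1..h3
    beta_i = np.arcsin(legs[:, 2] / h)               # elevation above the base plane
    alpha_i = np.arctan2(legs[:, 1], legs[:, 0])     # in-plane projection angle
    return h, alpha_i, beta_i

h, ai, bi = inverse_kinematics((0.0, 0.0, 0.12, 0.1, -0.05, 0.2))
print(h, np.degrees(ai), np.degrees(bi))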
Software design
The upper (host) computer program adopts an MFC-based interface for display and control, receiving the nine sensor signals collected by the single-chip microcomputer and displaying them to control the movement of space objects. The program flow chart is shown in figure 5. In the specific workflow, CH375 is first initialized; if the upper computer does not send a signal, the workflow terminates. Otherwise, the MCU disables interrupts, calls a function to collect the sensor signals, writes the send buffer to the USB endpoint, and sends the data to the PC through CH375. The PC then receives the MCU data and sends it to the interface for display. Because the 3D mouse and a parallel robot are similar in principle, both having six degrees of freedom, the 3D mouse is suitable for manually controlling a parallel robot: under its control the manipulator can translate in six directions and also rotate in space, with the operator's hand movements correspondingly reflected in the parallel manipulator. The aircraft simulation program interface is shown in figure 7, which is divided into three areas: an aircraft simulation area, a 3D mouse movement comparison area, and a data display area. In the aircraft simulation area the user can observe the flight of the aircraft controlled by the 3D mouse in the virtual environment; the aircraft can perform the various movements a real aircraft can perform in space, such as moving forward, turning left and right, climbing, and diving, giving a more realistic feeling.
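The host-side receive loop can be sketched as follows. This is a minimal illustration only: the device path, the driver interface for the CH375 endpoint, and the 9-word frame layout (one 12-bit value per sensor, two bytes each) are all assumptions, not the paper's actual protocol.

import struct

FRAME_BYTES = 9 * 2  # nine sensors, two bytes per reading (assumed layout)

def read_frame(dev):
    """Read one frame of nine little-endian unsigned words, or None on a short read."""
    raw = dev.read(FRAME_BYTES)
    if len(raw) != FRAME_BYTES:
        return None
    return struct.unpack("<9H", raw)

# Hypothetical device node exposed by a USB driver for the CH375 endpoint
with open("/dev/ch375_mouse", "rb", buffering=0) as dev:
    while True:
        frame = read_frame(dev)
        if frame is not None:
            print(frame)  # forward to the kinematics/display module here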
Conclusions
The new macro-moving 3D mouse designed in this paper uses a spatial parallel mechanism as its mechanical body. By collecting the relevant displacement signals of each guide rail in real time, the kinematic solution of the parallel mechanism calculates the real six-dimensional spatial input, which can greatly simplify the complex operations of an ordinary mouse and keyboard and has strong practicability. | 2020-04-23T09:13:14.374Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "df1a51600b0ea9a2fbe171ac58f2e8b3403638f8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1486/7/072024",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4a8a947d8514bbc69e630d1bc8205b342aa018ad",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211355715 | pes2o/s2orc | v3-fos-license | IDENTIFICATION OF CRITICAL COMPONENTS OF RESILIENCE DURING AND AFTER ECONOMIC CRISES: THE CASE OF WOMEN FOOD OPERATORS IN KUALA LUMPUR
In the past, there has been a preponderance of studies on entrepreneurs or small and medium enterprises (SMEs) in Malaysia; however, very few studies concentrated on women entrepreneurs. Women entrepreneurs are known to be persistent and resilient in running their business. However, it may be interesting to focus on factors or components that contribute towards their resiliency. Hence, this study explores the critical components of entrepreneurial resiliency which play a significant role in the business survival of women entrepreneurs during and after undergoing economic crises. The term resilience comprises three components, namely hardiness, resourcefulness, and optimism. Hardiness refers to adaptive capacity, and not being easily discouraged by failures. Resourcefulness, on the other hand, relates to cash flow, investment, relational networks, material assets, and the ability to adapt to changes, while optimism means the preparedness to make decisions, take action, and the ability to see the humorous side of things. The sample of the study consisted of 100 women entrepreneurs, mainly food operators in Klang Valley, who were selected randomly. Most of the women entrepreneurs were aged above 30 years, and more than half have more than 10 years' experience in running their business. A set of questionnaires with items on entrepreneurial resilience and its components—hardiness, resourcefulness, and optimism—was used to gain information among women entrepreneurs. The findings show that the most critical component for resilience during crises was resourcefulness, while optimism emerged as the most important component after undergoing crises.
INTRODUCTION
Small and medium enterprises (SMEs) play an important role in Malaysian economic growth and rapid industrial development towards achieving the objectives of the Eleventh Malaysia Plan. SMEs act as a player and backbone of industrial development in the economy of Malaysia, whereby 98.5% of establishments and 59% of employment were derived from SMEs in 2015 (Eleventh Malaysia Plan, 2016-2020). The Malaysian government has always given strong support towards the expansion and sustainability of SME activities; nevertheless, it is surprising that the attrition faced by SMEs is still high. Previous studies have been conducted on entrepreneur resilience (Weick & Sutcliffe, 2001; Gunasekaran, Bharatendra & Griffin, 2011; Bhamra, Dani & Burnard, 2011; Ayala & Manzano, 2014); however, research carried out on women entrepreneurs or their SMEs is surprisingly scarce. The Malaysian government has rendered a lot of support to improve the performance of SMEs. Research works conducted in this area have mostly focused on the lack of resources or support towards SMEs (Coyte, Ricerri & Guthrie, 2012; Dassi, Iborra & Safon, 2015; Kundu & Katz, 2003; Muscio, 2007). However, the provision of financial support alone is deemed inadequate to further strengthen the performance of SMEs in Malaysia, and the implementation of the Goods and Services Tax (GST) led a few SME operators to commit suicide due to fines imposed on them for non-compliance as well as other reasons such as reduction of profits (The Malay Mail Online, 11 May 2015; 1 June 2015). Hence, the purpose of this study is to identify the critical components of resilience possessed by women entrepreneurs in order to sustain their business operations during crises and recovery.
The research objectives are as follows: 1. To investigate the critical components of resilience for business sustainability among women food operators in Klang Valley during the economic crises.
2. To investigate the critical components of resilience for business sustainability among women food operators in Klang Valley after the economic crises.
Research questions: 1. What are the critical components of resilience for business sustainability among women food operators in Klang Valley during the economic crises?
2. What are the critical components of resilience for business sustainability among women food operators in Klang Valley after the economic crises?
LITERATURE REVIEW
This section discusses the overview of women entrepreneurs in Malaysia, economic cycle, and also resilience as a factor for business survival.
Overview of Women Entrepreneurs in Malaysia
Despite the increasing number of female graduates, the number of women entrepreneurs is still far lower than that of male entrepreneurs. Each year, the number of women entering institutions of higher learning is high compared to their male counterparts, and this scenario has been ongoing for quite some time (Yusof, Alias & Habil, 2012). According to the Department of Statistics, in the year 2016 the female labour participation rate was 54.3%, which translates to 54 out of every 100 women participating in the labour force. If this percentage is compared to the percentage of female student intake by public institutions of higher learning, which was 68% in 2013/14, there was still about 14% of educated and professional women who did not enter the labour market. Hence, it is vital that women who are educated and classified as professionals be involved in entrepreneurship, and one way to encourage this is via economic empowerment.
In Malaysia, women have been involved in the entrepreneurship field for many decades. More recently, it has been illustrated that women entrepreneurs have become increasingly important in the entrepreneurship segment in Malaysia. Various types of programs and activities have been organised by a number of agencies to promote and improve the rate of participation of women entrepreneurs in the entrepreneurship sector. Subsequently, women entrepreneurs have made a great contribution towards the economic development of Malaysia (Alam, Jani & Omar, 2011). According to Kang (2016), government agencies and programmes have been established to assist women entrepreneurs including those from Secretariat for Advancement of Malaysian Entrepreneurs (SAME)'s women talentship initiative, SME Corporation's Skills Upgrading Programme, and Malaysia External Trade Development Corporation (MATRADE)'s Women Exporters Development Programme. Moreover, more programmes and training have been organised to empower women entrepreneurs especially single mothers, housewives, and women in the category of B40, the lowest income group.
Besides, according to Datuk Seri Rohani Abdul Karim, the former Women, Family and Community Development Minister, about RM200 million has been allocated for Amanah Ikhtiar Malaysia to provide micro-credit facility to women entrepreneurs in Malaysia as announced by the Prime Minister during the 2018 Budget tabling (The Star Online, 29 October 2017). The advantage of becoming a business owner is to help women escape from the poverty trap. Thus, it cannot be denied that the incentives created for women entrepreneurs are huge, and most women entrepreneurs venture into business due to economic reasons.
Theory of Resilience and Entrepreneur
Holling (1973) coined the word "resilience" in the study of ecology, where he defined resilience as the measurement of "persistence" in a system that absorbs change and disturbance but still persists after all the changes and disturbances have occurred. In other words, a system is considered stable when it is able to return to an equilibrium state after a temporary disturbance; the faster it returns and the less it fluctuates, the more stable it is considered.
Thus, the Resilience Theory discusses the impact of challenging events on individuals and how well individuals adapt to the changes or traumatic experiences, and later return to an equilibrium state or recover from any setback (Ayala & Manzano, 2014;Gunderson, 2000;Zautra, Hall & Murray, 2010). Hence, one of the key traits of entrepreneurs is resilience. Entrepreneurial resilience may include forming networks, accepting that changes are normal, and avoiding crises as much as possible.
The definition of resilience by Zautra, Hall and Murray (2010) states that it is a by-product of individuals' interactions with the environment, and of the processes that either promote their well-being or protect them in the face of risky situations. To be a sustainable and successful entrepreneur, resilience capacity is required to overcome critical situations that emerge from failures and crises faced previously (Duchek, 2018; Gunasekaran, Bharatendra & Griffin, 2011).
Entrepreneur resilience includes hardiness, resourcefulness, and optimism (Asgary, Azimi & Anjum, 2013;Bhamra, Dani & Burnard, 2011). Hardiness refers to the adaptive capacity such as the ability to change in response to new pressure. Resourcefulness is described as cash flow, investment finance, relational networks, and material assets (Torstensson & Pal, 2013). Optimism is the preparedness of entrepreneurs to make decisions and take actions to reduce vulnerability and the impact of facing disaster (Weick & Sutcliffe, 2001). All these three components, namely resourcefulness, optimism, and hardiness make up the composite factor of resilience. Thus, this study aims to identify the most critical component(s) during and after economic crises for women food operators. This is crucial because the findings can be used as a guide for future women food operators in sustaining their business operations.
Economic Cycle and Impact on Businesses
The economic cycle is divided into five phases: the recovery phase, the early upswing phase, the late upswing phase, the slow economy or initial recession phase, and finally, the recession phase. The first phase, the recovery phase, is characterised by stimulating economic or fiscal policy, followed by an increase in the confidence level of investors and entrepreneurs, while the inflation rate decreases. The policy is normally expansionary, such as reducing tax, lowering bank interest rates, and encouraging public spending. The second phase is the early upswing phase, whereby the confidence level of investors and businessmen increases and there is healthy economic growth while inflation remains low. The third phase, the late upswing phase, is characterised by a boom mentality whereby investor confidence is very high. Moreover, the inflation rate becomes high due to high employment and the higher spending power of the public; as such, the policy becomes more restrictive to reduce public spending power. The fourth phase, economic slowdown or initial recession, is characterised by a sudden drop in confidence among investors and businessmen due to the high inflation rate, and inventory correction begins. Finally, the recession phase is reflected by weak confidence among investors and businessmen, a very high inflation rate, and a significant reduction in production due to high operational or production costs (Fischer, 2016; Dustmann, Glitz & Vogel, 2010). This study investigates resilience strategies adopted by women food operators during the economic crises (the recession phase of the economic cycle) and after the crises (the recovery phase) to sustain their business operations.
Sampling and Data Collection
A set of questionnaires was distributed to 100 women food operators selected randomly in areas around Kuala Lumpur, including Kampung Baru. The questionnaire was divided into two parts. Part A comprises questions on respondent profiles, and Part B comprises items on each of the components of resilience, namely resourcefulness, optimism, and hardiness. Three columns were provided for responses: the first column records responses during the economic crises (recession phase), the second column lists the items for each component of resilience, and the third column records responses after the economic crises (recovery phase) (Likert scale: 5 = Strongly Agree to 1 = Strongly Disagree). The data collection was done over a period of two months from October to November 2017.
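The during/after comparison of component means reported below can be computed in a few lines once the responses are tabulated. This sketch assumes illustrative column names (e.g. during_hard1) for the Likert items; the actual item wording and data file are not reproduced here.

import pandas as pd

df = pd.read_csv("women_food_operators.csv")  # hypothetical response file

components = {
    "hardiness":       ["hard1", "hard2", "hard3"],
    "resourcefulness": ["res1", "res2", "res3"],
    "optimism":        ["opt1", "opt2", "opt3"],
}

for phase in ("during", "after"):
    for name, items in components.items():
        cols = [f"{phase}_{item}" for item in items]   # e.g. during_res1
        score = df[cols].mean(axis=1)                  # per-respondent mean
        print(f"{phase:6s} {name:16s} mean = {score.mean():.2f}")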
Respondent Profiles
Most of the respondents were business owners aged above 30 years (89%), while the rest were under 30 years of age. About 63% of the business owners have been running their business for more than 10 years, while the remaining owners had less. In terms of education background, about 60% of the proprietors were educated up to the secondary level, 25% held a diploma, 7% a bachelor's degree, and 8% a master's degree or PhD. Most of the respondents were sole proprietors (85%), 10% owned businesses based on partnership, and 5% had private limited businesses. Only 5% of the respondents had at most five employees, 75% had around 6 to 10 employees, and 20% employed about 20 employees. Correlation analysis between loans received and business performance or profits shows that there is no significant relationship. However, there is a significant relationship between profits earned in 2016 and 2017, as shown in Table 1.
Descriptive Analysis
In this study, descriptive analysis was conducted and the results are shown in the following tables. The findings are described in the discussion section.
Comparing Means between Hardiness, Optimism, and Resourcefulness
The findings comparing the means of hardiness, optimism, and resourcefulness are shown in the corresponding table.
Resourcefulness
The findings in Table 7 show the most important items related to resourcefulness during the crises, which include controlling resources (Mean
DISCUSSION
This study highlights that the most important component of resilience during crises for women food operators is resourcefulness. Figure 1 summarises the findings of this research. The findings support previous research on components that are critical during crises, which include the ability to find alternative resources to ensure that the business will be able to face turbulence and bounce back from disruption (Dahles & Susilowati, 2015; Williams & Vorley, 2014). However, the most important components of resilience after crises for women food operators include being optimistic, resourceful, and hardy, as shown in Figure 2. The findings that optimism and resourcefulness are the most critical components for resilience after crises support earlier studies by McInnis-Bowers, Parris and Galperin (2017), and Ayala and Manzano (2014). Hence, it is highly critical for women food operators to be optimistic about their business venture after crises. They are willing to run their business although they may obtain small profits, able to make decisions in a more stable environment, always consider long-term effects in making decisions, willing to serve many customers at one time, able to make quick decisions for long-term solutions, able to take orders anytime, and will operate the business although they are facing losses or are cheated by employees.
IMPLICATIONS AND RECOMMENDATIONS
The study has implications for existing and potential women entrepreneurs to adopt the necessary characteristics or traits, that is, resourcefulness and optimism, for business survival especially during and after crises, instead of giving up altogether when their profits decrease. In this study, the findings show that hardiness is not an important component. However, efforts by the relevant authorities should be made to provide training to new women entrepreneurs, equipping them with the relevant skills and knowledge on how to be resilient by adopting characteristics or traits such as resourcefulness, optimism, and hardiness. By possessing the three characteristics as reflected in the components, the extent to which they may become resilient in times of hardship may increase. Thus, entrepreneurs may have a higher probability of succeeding in their business during and after crises by incorporating new marketing strategies in accordance with the situation faced.
The study is also important as successful women entrepreneurs would contribute effectively towards the gross national income, hence raising the productivity of the nation. It is recommended that further research concentrate on other exemplary successful women entrepreneurs, with comparisons made against the traits possessed by less successful women entrepreneurs to identify the missing traits, so that appropriate training can be designed to help more women entrepreneurs.
CONCLUSION
The findings of the study suggest that during crises, women entrepreneurs claimed that being resourceful was the most critical component that helped them to be resilient to sustain their business. In other words, they were able to control resources well, with ample resources to maintain quality products and services, and also control human resources well in ensuring business survival. These findings are quite similar to the research conducted by Ayala and Manzano (2014) who found that being resourceful was the most critical factor for entrepreneurs.
On the other hand, being optimistic about their business emerged as the most important factor that kept women entrepreneurs confident to strive hard in their business. This means that they had the ability to be prepared mentally to make the right decisions and further take the most appropriate actions in combating the impact of disaster faced during crises. Thus, this finding also supports earlier research conducted by Ayala and Manzano (2014) who found that women or female entrepreneurs were more optimistic than male entrepreneurs.
Nevertheless, being resourceful is still the second most important characteristic that they need to have to ensure success in their business. Hence, being resourceful emerges as an important component during and after crises for women entrepreneurs to be resilient towards business survival. | 2019-10-31T08:59:39.790Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "af8fe9dfa16d5da419d3af1a12b6b376c82af857",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21315/aamj2019.24.s2.8",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d78b1170b1c04038ed97a3effa9dccceb1b253cc",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
119486223 | pes2o/s2orc | v3-fos-license | Self-driven nonlinear dynamics in magneto-optical traps
We present a theoretical model describing recently observed collective effects in large magneto-optically trapped atomic ensembles. Based on a kinetic description we develop an efficient test particle method, which in addition to the single atom light pressure accounts for other relevant effects such as laser attenuation and forces due to multiply scattered light with position dependent absorption cross sections. Our calculations confirm the existence of a dynamical instability and provide deeper insights into the observed system dynamics.
Since its first realization in 1987 [1] the magnetooptical trap (MOT) has become a standard technique for providing a robust source of large numbers of cold atoms. While multiple scattering of the absorbed laser light is known as a major limitation for achieving Bose-Einstein condensation, it also leads to interesting collective effects which have been studied over the last years [2,3,4,5,6,7] and a variety of static structures has been observed and investigated by different theoretical approaches [3,5,8,9].
Only recently, experiments have revealed a so far unexplored dynamical instability in three-dimensional MOTs connected with the appearance of self-excited radial oscillations [11], which constitutes a complex nonlinear dynamics phenomenon. Understanding the observed effect turns out to be of broader interest, as it provides a clean laboratory realization of similar plasma- and astrophysical phenomena, such as pulsating stars [12], which are generally difficult to access.
Here we develop a theoretical model, describing the observed instability and providing a physical picture of the underlying mechanism. As discussed in [7], sub-Doppler cooling mechanisms only affect a very small fraction of large trapped atom clouds. Hence, the overall behavior of large atomic ensembles is well described within a basic Doppler-cooling picture, where the resulting trapping force along each laser beam can be written as [13,14]

F(x,v) = \frac{\hbar k \Gamma}{2\sigma_0}\,[s_+\sigma_+(x,v) - s_-\sigma_-(x,v)], \qquad (1)

where

\sigma_\pm(x,v) = \sigma_0\Big[1 + s_{\rm tot} + \frac{4(\delta \mp kv \mp \mu x)^2}{\Gamma^2}\Big]^{-1}

is the absorption cross section for the two laser beams (including the saturation s_tot by the 3 pairs of laser beams), σ 0 = 3λ 2 /2π the on-resonance absorption cross section, λ the laser wavelength, Γ the transition linewidth, δ the detuning from resonance, µx determines the Zeeman shift of the atomic transition due to the MOT magnetic field and s ± = I ± /I sat denotes the saturation parameter of the respective laser beam of intensity I ± with I sat being the saturation intensity of the atomic transition. For the discussion below it is convenient to split the force according to

F(x,v) = F(x,0) + M\beta(x,v)\,v, \qquad (2)

which defines the velocity-dependent damping coefficient β(x,v). In order to simplify our theoretical considerations we use the following spherically symmetric generalization of eq.(1)

F(r,v) = \frac{\hbar k \Gamma}{2\sigma_0}\,[s_+(r)\sigma_+(r,v) - s_-(r)\sigma_-(r,v)]\,\hat{e}_r, \qquad (3)

obtained by replacing the coordinate x by the radial distance r and v by the radial velocity. While experimental confinement configurations generally do not obey this symmetry, eq.(3) describes the important features of the resulting force in both the linear and nonlinear trapping regions. At higher densities, attenuation of the laser light inside the cloud results in an additional effective confining force experienced by the atoms [15]. To account for this effect within our spherical symmetry assumption, the spatial intensity profile is obtained from

\frac{ds_\pm(r)}{dr} = \pm\,n(r)\,\sigma_\pm(r)\,s_\pm(r), \qquad s_\pm(r\to\infty) = s_0, \qquad (4)

where s 0 is the saturation parameter of the incident beam and n(r) denotes the atomic density. Moreover, multiple scattering of the absorbed laser light inside the cloud leads to an additional outward directed pressure, caused by an effective interaction between the atoms [2]. Neglecting higher order scattering events, which are known to screen the atom-atom interaction [16], a photon scattered off an atom at position r 1 exerts an average force on an absorbing atom at r 2 according to [2,4]

F_{\rm rsc}(r_1,r_2) = \frac{3 I_{\rm sat}}{4\pi c}\,[s_+\sigma_+ + s_-\sigma_-]_{r_1}\,\sigma_{\rm rsc}(r_1,r_2)\,\frac{r_2 - r_1}{|r_2 - r_1|^3}. \qquad (5)

The reabsorption cross section σ (+/−) rsc is obtained by convolving the absorption cross section of the emitted light with the emission spectrum of the atom at r 1 in the presence of either left or right circularly polarized laser light. Note that σ rsc may depend on both coordinates via the space dependence of the local laser intensities as well as of the respective detunings. Previously [3,4,7,8,9,10], such coordinate dependencies have been neglected, which according to eq.(5) results in a Coulomb-like interaction with effective charges, again underlining the close analogy with plasma and gravitational physics problems. In large clouds, however, we find the position dependence of the effective charges to be important for the static and dynamic properties of the trapped atom cloud.
Starting from eqs.(2)-(5) the collective system dynamics is described by the following kinetic equation

\frac{\partial f}{\partial t} + \mathbf{v}\cdot\frac{\partial f}{\partial \mathbf{r}} + \frac{\mathbf{F}(\mathbf{r},\mathbf{v})}{M}\cdot\frac{\partial f}{\partial \mathbf{v}} = 0, \qquad (6)

where f(r,v,t) is the atomic phase-space distribution, the total force

\mathbf{F}(\mathbf{r},\mathbf{v}) = \mathbf{F}_T(\mathbf{r},\mathbf{v}) + \int n(\mathbf{r}')\,\mathbf{F}_{\rm rsc}(\mathbf{r}',\mathbf{r})\,d^3r' \qquad (7)

combines the trapping force of eq.(3) with the rescattering force of eq.(5), and M is the mass of the atoms. Heating by spontaneous emission and photon exchange [16,17] has been neglected, since for the densities considered in this work the corresponding thermal pressure is much smaller than the pressure resulting from the effective atomic repulsion. Note that eq.(7) goes beyond a local-density approximation [16], retaining the complete position dependence of σ rsc and the density dependence of F rsc . In fact, this nonlocal space dependence of all forces in eq.(6), in addition to their local dependence on the atom position, renders a direct numerical solution of eq.(6) very demanding. Alternatively, we apply an efficient numerical procedure based on a test-particle treatment, similar to particle-in-cell methods [18], frequently used for plasma physics problems. More specifically, we represent the atomic density by an ensemble of N t < 10 6 test particles, whose number is typically chosen to be less than the actual particle number to reduce the numerical effort. The respective absorption cross sections and masses of the test particles are adjusted such that the results are independent of the number N t of test particles. By propagating every particle according to the forces of eqs.(3) and (7) we obtain the time dependent density, from which we calculate the local intensities and the resulting forces to advance the next timestep.
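To illustrate the test-particle scheme, the following one-dimensional sketch propagates an ensemble under the single-atom Doppler force of eq.(1), using the Rb parameters quoted below (δ = −1.5Γ, Γ/µ = 4.7 mm, I = 1.0 mW/cm 2 ). The value of I sat and the handling of the total saturation are assumptions, and the attenuation and rescattering forces of eqs.(4) and (5), which would be recomputed from the binned test-particle density each timestep, are omitted for brevity.

import numpy as np

hbar = 1.054e-34
lam = 780e-9                      # Rb D2 line
k = 2 * np.pi / lam
Gamma = 2 * np.pi * 6.07e6        # transition linewidth (rad/s)
M = 1.44e-25                      # mass of 87Rb (kg)
delta = -1.5 * Gamma
mu = Gamma / 4.7e-3               # Zeeman gradient, Gamma/mu = 4.7 mm
s0 = 1.0 / 1.6                    # I = 1.0 mW/cm^2, I_sat ~ 1.6 mW/cm^2 (assumed)

def force(x, v, sp=s0, sm=s0):
    """Two-beam Doppler force of eq.(1); saturation from 3 beam pairs assumed."""
    s_tot = 3 * (sp + sm)
    d_plus = delta - k * v - mu * x     # effective detuning of the +x beam
    d_minus = delta + k * v + mu * x    # effective detuning of the -x beam
    L_plus = sp / (1 + s_tot + 4 * d_plus**2 / Gamma**2)
    L_minus = sm / (1 + s_tot + 4 * d_minus**2 / Gamma**2)
    return hbar * k * Gamma / 2 * (L_plus - L_minus)

# Propagate an ensemble of test particles toward the stationary state
x = np.random.normal(0.0, 1e-3, 100_000)   # initial positions (m)
v = np.zeros_like(x)
dt = 1e-5                                  # timestep (s)
for _ in range(2000):
    v += force(x, v) / M * dt
    x += v * dt
    # here n(r) would be binned to update attenuation and rescattering forces

print("rms radius:", np.sqrt((x**2).mean()))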
To study its stationary properties we evolve the atomic cloud until it relaxes to the self-consistent, stationary solution of eq.(6), which we found to exist only below a critical atom number N c . Fig.1a shows the calculated stationary density profile for N = 1.15 × 10 9 Rubidium atoms and typical MOT-parameters of I = 1.0mW/cm 2 , δ = −1.5Γ and Γ/µ = 4.7mm (corresponding to 9G/cm) [11]. As can be seen, the calculated density is well described by a truncated Gaussian profile. As the atom number is decreased the truncation radius R decreases relative to the rms-width of the corresponding Gaussian, ultimately leading to a transition into a uniform density profile. Similar changes in the density profile have also been reported in MOTs, where the nonlinearity of the potential arises from sub-Doppler trapping mechanisms [7,19]. In the present case the observed transition results from the nonlinearity and the position dependence of the reabsorption cross section and, hence, cannot be found under the assumption of linear trapping forces and pure Coulomb-like interactions [3,4,10].
Let us now turn to the most striking result of our calculations. As we further increase the number N of atoms the cloud becomes unstable at a critical atom number N c , corresponding to a critical radius R c . By varying the various MOT-parameters, we find that the critical radius is uniquely determined by the relation R c = δ/µ (see fig.3a), confirming the conclusion reached in [11]. This fact is illustrated in fig.1b and 1c, where we show the radial dependence of the trapping and interaction force as well as the damping constant β(r, v = 0). The damping constant β(r, 0) reverses its sign at R c = δ/µ. Hence, any small velocity of atoms outside of R c will be enhanced. While inward moving particles will be damped again when entering the negative-β region, outward moving atoms around R c are further accelerated away from the trap center, since the single atom light pressure force is largely balanced by the interaction force around r = R c . Their motion around the fixed point (r = R c , v = 0) will become unstable and limited by the nonlinear terms of the force. At larger distances, however, the total force reverses its sign again, since the interaction force decreases much more rapidly than the trapping force, due to the radially increasing Zeeman shift (see fig. 1b). Hence, if during the expansion, the atoms did not acquire a velocity beyond the capture range of the MOT, a stable limit cycle will be reached. In order to characterize the onset of the instability we analyze the cloud's RMS radius σ = \sqrt{\langle r^2\rangle/3} and study its sensitivity against a small perturbation. More precisely, we start from a stationary density corresponding to some detuning δ 0 which is instantly increased to δ (closer to resonance), leading to damped oscillations of σ towards its new equilibrium value σ ∞ , as shown in fig.2a. From a fit to a damped harmonic oscillation σ(t) = ∆σ e^{−t/τ} sin(ωt + φ) + σ ∞ we obtain the damping time τ and frequency ω, corresponding to the real and imaginary parts of the complex frequency of the relaxation dynamics. The inverse damping time 1/τ changes sign at the critical atom number, where the stationary state loses its stability. On the other hand, the frequency of the cloud oscillation evolves continuously through the instability threshold (see fig.2), indicating that the onset of the instability proceeds via a supercritical Hopf-bifurcation.
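The transient fit described above can be sketched with scipy's least-squares fitter. The data here are synthetic, standing in for the simulated σ(t) after the detuning step δ 0 → δ; the noise level and initial guesses are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def damped(t, dsig, tau, omega, phi, sig_inf):
    """Damped harmonic oscillation of the cloud's RMS radius."""
    return dsig * np.exp(-t / tau) * np.sin(omega * t + phi) + sig_inf

# Synthetic transient standing in for the simulated sigma(t)
t = np.linspace(0.0, 0.2, 400)
sigma = damped(t, 0.4, 0.05, 2 * np.pi * 40, 0.3, 3.0)
sigma += np.random.normal(0.0, 0.01, t.size)

p0 = (0.3, 0.04, 2 * np.pi * 30, 0.0, sigma[-50:].mean())   # rough initial guess
popt, pcov = curve_fit(damped, t, sigma, p0=p0)
print("tau = %.3f s, omega/2pi = %.1f Hz" % (popt[1], popt[2] / (2 * np.pi)))
# A sign change of the fitted 1/tau as N crosses N_c marks the instability threshold.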
In fact, the oscillation is triggered by an outer fraction of atoms, which gain energy as they move in and out of the active region of r > R c , which is indicated by the horizontal white line in fig.4. When bouncing back on the low-energetic atoms, the gained energy is deposited by exciting a density wave just inside the region with β(r, 0) < 0. Subsequently, the formed nonlinear excitation propagates towards the trap center along the diagonal blue line drawn in fig.4, thereby losing energy, mostly due to the damping by the cooling lasers. As can be seen in fig.4 this not only leads to a flattening and broadening of the density wave until it disappears, but also to a deceleration as indicated by the deviation of the moving maximum from the blue line at smaller distances. At the same time, the edge region of the atomic cloud starts to relax, causing some atoms to be again accelerated away from the center and the whole process repeats itself. Although this scenario clearly provides the basic mechanism for the observed oscillations, our calculations reveal a number of finer details (see fig.4) still to be understood. Moreover, additional damping mechanisms, similar to Landau-damping of plasma waves, might also play a role for the system dynamics, raising the interesting question of how the present nonlocal position dependence of the effective charges manifests itself in known plasma kinetic effects.
In conclusion, large clouds of magneto-optically confined atoms have been found to exhibit a very complex nonlinear dynamics. Our theoretical description has revealed the onset of a deterministic instability connected with self-sustained oscillations, in agreement with recent experiments [11]. It has been found that a number of different effects, such as the attenuation of the trap lasers, rescattering of the absorbed laser light as well as the position dependence of the respective absorption cross sections, are all necessary to explain the observed phenomenon. A stability analysis of the MOT size has shown that the transition proceeds via a supercritical Hopf-bifurcation. The obtained density evolution revealed the build-up of complex nonlinear excitations driven by the combined action of the light-pressure force and the effective atomic interaction, which results in an active atomic motion at large distances. Similar types of active or self-driven motion are currently discussed in a broad range of different applications, such as collective swarm dynamics [20], propagation of waves [21] or dissipative solitons [22] in reaction-diffusion systems or grain motion in dusty plasmas [23]. Hence, we believe that large clouds of magneto-optically confined atoms provide an ideal laboratory system for further exploration of the rich spectrum of self-driven motion, including variable system geometries, effects of external driving and possibilities to control the system dynamics.
TP would like to thank the Institut Non Linéaire de Nice for its kind hospitality during a stay in which major parts of this work were performed, and acknowledges support from the ESF through Short Visit Grant 595 and from the NSF through a grant for the Institute of Theoretical Atomic, Molecular and Optical Physics (ITAMP) at Harvard University and the Smithsonian Astrophysical Observatory. | 2018-12-20T20:38:33.524Z | 2006-02-10T00:00:00.000 | {
"year": 2006,
"sha1": "23a30ca7313572c148bf0d278ce01f1d95e2fb29",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0602075",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "23a30ca7313572c148bf0d278ce01f1d95e2fb29",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52876063 | pes2o/s2orc | v3-fos-license | A biomarker feasibility study in the South East Asia Community Observatory health and demographic surveillance system
Background Integration of biomarker data with information on health and lifestyle provides a powerful tool to enhance the scientific value of health research. Existing health and demographic surveillance systems (HDSSs) present an opportunity to create novel biodata resources for this purpose, but data and biological sample collection often presents challenges. We outline some of the challenges in developing these resources and present the outcomes of a biomarker feasibility study embedded within the South East Asia Community Observatory (SEACO) HDSS. Methods We assessed study-related records to determine the pace of data collection, response from potential participants, and feedback following data and sample collection. Overall and stratified measures of data and sample availability were summarised. Crude prevalence of key risk factors was examined. Results Approximately half (49.5%) of invited individuals consented to participate in this study, for a final sample size of 203 (161 adults and 42 children). Women were more likely to consent to participate compared with men, whereas children, young adults and individuals of Malay ethnicity were less likely to consent compared with older individuals or those of any other ethnicity. At least one biological sample (blood from all participants – finger-prick and venous [for serum, plasma and whole blood samples], hair or urine for adults only) was successfully collected from all participants, with blood test data available from over 90% of individuals. Among adults, urine samples were most commonly collected (97.5%), followed by any blood samples (91.9%) and hair samples (83.2%). Cardiometabolic risk factor burden was high (prevalence of elevated HbA1c among adults: 23.8%; of elevated triglycerides among adults: 38.1%; of elevated total cholesterol among children: 19.5%). Conclusions In this study, we show that it is feasible to create biodata resources using existing HDSS frameworks, and identify a potentially high burden of cardiometabolic risk factors that requires further evaluation in this population.
Introduction
There is a need for comprehensive data resources on population health and disease in low- and middle-income countries, where a large proportion of the global burden of morbidity and mortality is located (1,2). Biomarker data form an essential component of such endeavours, allowing objective assessment of a wide range of disease-related indices, facilitating validation of self-reported information, and allowing for greater statistical power of analyses. Integration of biomarker data with information on health and lifestyle provides a powerful tool to enhance the scientific value of health research.
Large-scale surveys in low- and middle-income populations, such as the Demographic and Health Surveys, have previously included biomarker modules (3). However, these have often been restricted to a narrow range of measures from limited samples, with variable capacity for long-term storage and later analysis (3). Importantly, they are unable to follow up individuals over time. Health and demographic surveillance system (HDSS) sites offer a valuable opportunity for efficient, large-scale collection and analysis of biomarker data. They provide pre-existing infrastructure to facilitate biological sample collection, and the potential to link biomarker data longitudinally to historical and future measures. This linkage allows for a detailed view of disease development across the life course (4).
We undertook a biomarker feasibility study embedded within the South East Asia Community Observatory (SEACO) HDSS, which covers approximately 45 000 individuals in Segamat, Malaysia (5). The SEACO HDSS conducts annual enumeration of individuals, and has also undertaken a population-wide health survey collecting questionnaire data and biophysical measurements, in its catchment area (5). Through this study, we explored the feasibility of building upon the previous survey work conducted by SEACO to include biological sample collection. This feasibility study aimed to recruit approximately 200 individuals aged seven years and above to assess the preparedness of individuals and families to participate, and to establish the procedures for the collection, analysis and storage of biological samples within a predominantly rural community setting. Here, we outline the developments in the procedures and examine the outcomes of this study to determine the potential to create a large-scale biodata resource within the full HDSS population.
Methods
A detailed profile of the SEACO HDSS, including the HDSS development, structure, and data collections, is presented in a recent publication (5).
Sampling
Adult (aged 18 years and over) and child (aged 7-17 years) participants for this study were recruited from the SEACO HDSS (5). Stratified random sampling was performed at the household level using data from the most recent enumeration (completed in 2016), aiming to achieve comparable proportions of individuals of Malay, Indian, Chinese and Orang Asli (indigenous) ethnicity. Sampling therefore covered all enumerated households within the SEACO catchment area (approximately 1250 km²). SEACO has established strong community links through its community engagement strategy (6), and additional community awareness activities were undertaken to sensitise potential participants prior to this study.
Data and sample collection
Community-based data and sample collection was undertaken by two field teams between November 2016 and February 2017. Data were recorded on electronic tablets. Informed consent (adults) or informed assent with parental or guardian consent (children) was first obtained; individuals could only participate if they consented to providing all data and samples (Supplementary Methods). Following informed consent, along with questionnaire and biophysical data, capillary blood (via finger prick, for point-of-care glycated haemoglobin [HbA1c] measurement), and venous blood (four tubes from a single blood draw: up to 24 ml from adults, 12 ml from children; for serum, plasma and whole blood samples) were collected from participants. Hair and urine samples were also collected from adult participants. Following data and sample collection, participants were given their body mass index (BMI), blood pressure and point-of-care HbA1c results, and were provided a referral to local clinics if these were above predetermined cut-offs. One session of data and sample collection took approximately 40-50 minutes for adult participants and 30 minutes for children (see Supplementary Methods for further details on sample collection purposes and procedures).
Measures and statistical analysis
Study measures to evaluate scale-up
Literature on suitable measures or assessment frameworks to determine feasibility for population-based observational studies is scarce (7-10). We therefore identified and examined a range of study-related measures to gain a comprehensive picture of the potential for scale-up. This included indicators of efficiency, response from potential participants, feedback from participants, and completeness and quality of collected data and samples.
First, we summarised study operational data to assess operational efficiency and response to the study. This assessment included information on the number of days of data and sample collection; the number and demographic characteristics of households and individuals approached; proportions consenting, declining or absent; reasons for refusal among those declining participation; and post-study feedback among participating individuals. Study pace was calculated as the average number of participants recruited per day. Differences in demographic characteristics between consenting and non-consenting individuals were assessed using Pearson's chi-squared tests or Fisher's exact tests (cell counts less than five).
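A minimal sketch of this test-selection rule (chi-squared unless any cell count is below five), written in Python with scipy; the 2×2 consent-by-sex counts below are illustrative placeholders, not the study's actual figures:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_groups(table):
    """Pearson chi-squared test, falling back to Fisher's exact test
    when any observed cell count is below five (the rule stated in
    the paper). `table` is a 2x2 array of observed counts."""
    table = np.asarray(table)
    if (table < 5).any():
        _, p = fisher_exact(table)  # exact test for sparse tables
        return "Fisher's exact", p
    chi2, p, dof, expected = chi2_contingency(table)
    return "Pearson chi-squared", p

# Hypothetical consent-by-sex table (rows: women, men;
# columns: consented, declined). Counts are illustrative only.
test, p = compare_groups([[87, 40], [74, 75]])
print(f"{test}: p = {p:.4f}")
```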
Following this, we examined measures relating to quality and completeness of data and samples. We were particularly interested in measures relating to blood sample collection, availability of blood test data and availability of blood sample aliquots, as indicators of the success of sample collection, analysis and storage. We extracted relevant information from three datasets generated at the end of the study: (i) data recorded on the electronic questionnaire form, (ii) blood test results, and (iii) records of receipt, processing and aliquoting of biological samples at the central research laboratory. All three datasets were cleaned, merged and checked for consistency. The completeness of questionnaire data for each participant was assessed by examining a set of all questions and measurements collected from all participants. The number of participants with any questionnaire data, blood test data, collected samples and samples for storage (plasma, serum, whole blood and remnant cell aliquots, urine aliquots and hair samples) was examined, and differences by sex, ethnicity and obesity status were assessed. The number of participants with complete data and samples was similarly examined.
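As a sketch of the merge-and-check step described above, assuming hypothetical file names and a shared identifier column (`participant_id`); the real SEACO file and variable names are not published here:

```python
import pandas as pd

# Hypothetical file names and ID column; the actual SEACO datasets
# and variable names are not reproduced in the paper.
questionnaire = pd.read_csv("questionnaire.csv")
blood_tests = pd.read_csv("blood_tests.csv")
lab_records = pd.read_csv("lab_sample_log.csv")

# Merge the three study datasets on the participant identifier,
# keeping everyone with a questionnaire record.
merged = (questionnaire
          .merge(blood_tests, on="participant_id", how="left")
          .merge(lab_records, on="participant_id", how="left"))

# Consistency check: every blood-test record should match a
# questionnaire record.
orphans = set(blood_tests["participant_id"]) - set(questionnaire["participant_id"])
assert not orphans, f"blood results without questionnaire data: {orphans}"

# Per-participant completeness over the core questionnaire items.
core_items = [c for c in questionnaire.columns if c != "participant_id"]
questionnaire["n_missing"] = questionnaire[core_items].isna().sum(axis=1)
print(questionnaire["n_missing"].value_counts().sort_index())
```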
Sociodemographic, lifestyle and risk factor data
Finally, sociodemographic characteristics of study participants and crude prevalence of key lifestyle, biophysical and blood-based risk factors in the population were examined; differences by sex were assessed using Pearson's chi-squared or Fisher's exact tests (see Supplementary Methods for list of variables and corresponding definitions).
All data management and analyses were performed using Stata 14 (Statacorp, Texas).
Ethical approvals
Ethical approval for the study was obtained from the Monash University Human Research Ethics Committee (CF16/471-2016000227), and approval for the receipt and analysis of linked anonymised data at the University of Cambridge was obtained from the University Human Biology Research Ethics Committee (HBREC.2017.04) (Supplementary Methods).
Study measures to evaluate scale-up
Measures of study recruitment and response
Overall, 203 participants (161 adults, 42 children) were recruited into the biomarker feasibility study, close to half (49.5%) of those responding to an invitation to participate (Figure 1, Table 1; Supplementary Table S1). A greater proportion of women (56%) versus men, individuals aged 50-59 years (70.1%) or 60 years and above (64.7%) versus younger individuals, and those of Orang Asli ethnicity (64.9% among adults, 70.5% among children) versus those of any other ethnicity were available during recruitment (Table 1). Of those available and subsequently invited, women (68.5%, P < 0.001) were more likely to consent to participate compared with men, whereas children (30.0%) and young adults (48.2%), and those of Malay ethnicity (adults: 41.3%, P < 0.001, children: 19.0%, P = 0.129) were less likely to consent, compared with older individuals or those of any other ethnicity (Table 1).
Of 170 (83.7%) participants providing post-study feedback, over 95% agreed with comments relating to a favourable experience, including comfort during questionnaire administration (99.4%), interest in the study results (100.0%), and willingness to encourage others to participate in the study (99.4%) (Supplementary Table S2).
Completeness and quality of data and samples
We then examined the availability of data and samples collected from participants. All participants had some available questionnaire information, with most having three or fewer missing variables (Table 2, Supplementary Tables S3-S4). At least one biological sample (capillary blood, venous blood, hair or urine) was collected at the anticipated quantity from all individuals (Table 2, Supplementary Table S5). Over 90% of participants had some blood test data, whilst approximately 70-80% had complete data (Table 2), with no systematic differences in data and sample availability by ethnicity (Supplementary Figures S3-S4).
Given the potential to obtain detailed biomarker information from blood, the availability and quality of blood samples was of particular interest in this study. A capillary (finger-prick) blood sample was successfully collected from all participants, with successful point-of-care HbA1c measurement in almost all (99.0%) participants (Table 2). At least one venous blood sample of any volume was collected from over 90% of both adult and child participants, with 82.6% of adults and 95.2% of children having all four blood samples collected at any volume (Table 3; Supplementary Tables S6-S7). Notably, obese adults were less likely to have blood samples successfully collected (at least one blood sample at any volume: 100% among non-obese adults versus 79.5% among obese adults, P = 0.002) (Supplementary Table S8). Almost all collected blood samples were assessed as being of acceptable quality by the research laboratory for processing, analysis and storage (Table 3; Supplementary Tables S6-S7). At least one storage aliquot was available from all collected and accepted blood samples among children, and from at least 96.2% of samples among adults (Supplementary Tables S9-S10).
Sociodemographic, lifestyle and risk factor data
In addition to a notable prevalence of lifestyle and biophysical risk factors, we found a high burden of blood-based cardiometabolic risk factors in this population. Close to one quarter of adults (23.8%) had elevated HbA1c, while 8.2% had elevated total cholesterol, 15.0% had low HDL cholesterol, and 38.1% had elevated triglycerides (Table 4). Risk factor prevalence was similarly high among children: 19.5% had elevated total cholesterol, 14.6% had low HDL cholesterol and 36.6% had elevated triglycerides (Table 4).
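As an illustration of how such crude prevalence figures are computed, the snippet below applies threshold rules to a small, entirely hypothetical data frame; the cut-off values shown are common clinical conventions, not necessarily those defined in the study's Supplementary Methods:

```python
import pandas as pd

# Illustrative cut-offs; the study's actual definitions are in its
# Supplementary Methods and may differ.
CUTOFFS = {"hba1c_pct": 6.5,           # elevated HbA1c
           "total_chol_mmol_l": 5.2,   # elevated total cholesterol
           "triglycerides_mmol_l": 1.7}

def crude_prevalence(df, column, cutoff):
    """Crude prevalence (%) of values at or above a cut-off,
    ignoring missing observations."""
    observed = df[column].dropna()
    return 100.0 * (observed >= cutoff).mean(), len(observed)

# Hypothetical adult blood results (values are made up):
adults = pd.DataFrame({"hba1c_pct": [5.4, 7.1, 6.0, 8.2, None],
                       "total_chol_mmol_l": [4.8, 5.5, 6.1, 4.2, 5.0],
                       "triglycerides_mmol_l": [1.2, 2.3, 1.9, 1.1, 1.6]})

for col, cut in CUTOFFS.items():
    prev, n = crude_prevalence(adults, col, cut)
    print(f"{col}: {prev:.1f}% elevated (N = {n})")
```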
Discussion
Detailed, objective measures provided by biomarker information are fundamental to comprehensive data resources on population health and disease. In this study, we show the feasibility of biomarker collection within the context of the SEACO HDSS. Approximately half of invited individuals consented to participate in biological sample collection, with favourable participant feedback. Biological samples were collected from all participants. Outcome measures indicated that there was scope to increase study pace, and a need to improve blood sample collection from obese participants, both attainable through appropriate modifications to study design and training. A high prevalence of blood-based cardiometabolic risk factors was observed among both adult and child participants. These results indicate that creation of a large-scale biodata resource is both achievable and valuable in this population, with potential relevance to similar HDSS sites.
We demonstrate here that capitalising on existing HDSS frameworks to undertake biomarker collection is an efficient way to encourage community participation, and to enhance their value as data resources. We undertook biological sample collection by building upon the strong existing infrastructure, data, human and material resources, local knowledge and community and administrative links established by the SEACO HDSS (5). The proportion of consenting versus invited participants observed in this study is comparable to or greater than other large-scale biobank or biomarker collection studies based in high-income countries (11,12). Participants were willing to provide both capillary and venous blood samples, with successful capillary blood collection for all participating individuals. Blood test data and storage aliquots were available for the majority of participants, indicating the successful establishment of procedures from sample collection to analysis and long-term storage. Data and sample collection took under an hour, and participants providing feedback responded favourably to the study. The community engagement strategy previously established by SEACO provided a mechanism through which individuals could raise and address concerns they had with participation in this study (6). Importantly, we have the capacity to link information obtained in this study with measures from both previous and future HDSS data collections, including later clinical outcomes, which will facilitate the creation of richer datasets that may be explored in future analyses.
Compared with the growing focus on feasibility studies for randomised clinical trials (13-24), literature on operational outcomes of observational feasibility studies remains scarce, and restricted to a limited number of measures, such as the overall proportion of invited individuals ultimately participating (7-10). Few studies have directly assessed measures of sample collection feasibility, with none identified here that specifically examined blood sample collection (7,25). Here, we identified useful indicators relating to various aspects of study operation including sample collection, using these in the context of our study to obtain a clearer understanding of the feasibility of scale-up. Systematic assessment of such measures may be useful to researchers planning similar data and sample collections in other low- and middle-income populations.
[Table footnotes: EDTA, ethylene diamine tetra-acetic acid. ¹At least one of: plain serum, EDTA (plasma), EDTA (whole blood 1) or EDTA (whole blood 2). ²All of: plain serum, EDTA (plasma), EDTA (whole blood 1) and EDTA (whole blood 2).]
While most outcomes assessed here indicated successful establishment of study operations, we identified two areas requiring improvement, which may be successfully addressed through simple modifications to study design and training. The first was the slow study pace relative to the number of field teams and the time taken per session of data and sample collection. This survey design-related issue was likely a result of the notable proportion of houses empty upon approach, due to outmigration or unavailability of household members at the time of recruitment. This, along with the predominantly rural setting and large sampling area, increased the travel time between houses with consenting individuals. More suitable methods of recruitment to improve study efficiency could include approaching sampled households in a separate recruitment drive to establish availability and willingness to participate, and to arrange convenient time windows for data and sample collection. We also observed lower blood sample collection success among obese participants, an issue specific to biomarker collection which may be resolved by further directed training of study phlebotomists.
The proportion of participating individuals in this study, along with differential response to participation across demographic subgroups, may suggest implications for generalisability. Although the demographic profile of this study may not be fully representative of the wider population, analyses arising from this study have the capacity to produce internally valid results regarding aetiological relationships, with wider relevance to other populations (11). Nonetheless, our observations indicate an opportunity to further improve recruitment strategies overall and across specific subgroups, in future data and sample collections.
The high burden of cardiometabolic risk factors observed in the current study population is consistent with previous findings from the SEACO HDSS (26,27). Similar trends have been reported in other middle-income countries including those from Asia, and are thought to be a result of epidemiologic transitions occurring in these populations (28-31). These observations reinforce the need for large-scale biomarker data from such populations to comprehensively assess disease risk and associated influences across the life course. We demonstrate here that existing HDSS resources can be successfully augmented to achieve this purpose.
We present a study undertaken within a specific context, with basic infrastructure and resources already in place through the SEACO HDSS and augmented by collaborating institutions. Given our context and particular interests, we made specific choices regarding study design, including biological samples of interest, consent structure, the collection of non-fasting blood samples, and test result feedback and onward referral of participants. Researchers planning biomarker collections in other settings must consider their specific contexts and aims to inform decisions relating to suitable study design. Importantly, the measures presented here may be applicable and useful to understanding the feasibility of such biomarker collections regardless of exact study methodology.
To conclude, we show that biological sample collections to create biodata resources using existing HDSS frameworks are feasible. Using this approach, we identify a potentially high burden of cardiometabolic risk factors that requires further evaluation in this population. Building upon existing HDSS resources in this way would greatly enhance their scientific value, and contribute towards addressing the need for comprehensive biomarker data from low- and middle-income populations.
[Table 4 footnotes: classification of all risk factors is described in the Supplementary Methods. Differences in distributions between men and women or boys and girls were assessed using Pearson's chi-squared or Fisher's exact (cell counts < 5) tests. N was reduced due to missing observations for the following measures: (1) low fruit and vegetable consumption among girls (N = 18); (2) overweight, obesity, central obesity, elevated waist-to-hip ratio and elevated HbA1c among women (N = 101); (3) elevated HbA1c in girls (N = 18); (4) all cholesterol and triglyceride measures among girls (N = 18), men (N = 58) and women (N = 89). ¹Measures for hypertension and elevated cholesterol prevalence included individuals who reported being told they had elevated blood pressure or cholesterol.] | 2018-10-11T13:15:26.027Z | 2018-08-22T00:00:00.000 | {
"year": 2018,
"sha1": "fd0a172a671d91da0fd0b458d277f169e66be470",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/09E5931221BD2B3CE1D2994B0E92D605/S2054420018000131a.pdf/div-class-title-a-biomarker-feasibility-study-in-the-south-east-asia-community-observatory-health-and-demographic-surveillance-system-div.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd0a172a671d91da0fd0b458d277f169e66be470",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
54980131 | pes2o/s2orc | v3-fos-license | FEAST OF FOOLS: THE CARNIVALESQUE IN JOHN KENNEDY TOOLE’S A CONFEDERACY OF DUNCES
Despite the fact that the action in John Kennedy Toole’s novel A Confederacy of Dunces has often been compared to a carnival, there is little that the main character, Ignatius Reilly, has in common with those participating in a true medieval carnival as described by Mikhail Bakhtin in Rabelais and His World. Ignatius tries to assert his superiority over others both with his speech and behavior, violating the principal rule of carnivalesque equality, and is aggressively opposed to sexuality, which was a deeply positive concept in the carnival culture, symbolizing fertility, growth, and new birth. A great source of humor in the novel is the difference between the highly educated speech used by Ignatius and the vernacular spoken by other characters. This difference was successfully transposed into Slovene by translator Nuša Rozman, who managed to capture the differences between social classes by using various degrees of colloquialisms and slang expressions, while opting to nevertheless transcribe the characters’ speech in a way that is grammatically correct; a practice that has long been present in both original and translated Slovene literature, which highlights the fact that despite an increase in the number of works written in the vernacular over the past years, a universal standard on how to transcribe spoken Slovene has yet to be established.
Unknown until 1980, when his comic masterpiece A Confederacy of Dunces was published after waiting for a publisher for over a decade and a half, John Kennedy Toole (1937-1969) attained recognition by being posthumously awarded the Pulitzer Prize the year following the novel's publication, and is now recognized as the author of one of America's "most widely read southern novels" (Haddox 2005: 168). Due to his premature death by suicide at the age of thirty-one, Toole never wrote anything after having completed his most celebrated work. In fact, he wrote his only other novel, The Neon Bible, at the age of sixteen and considered it too juvenile for publication during his lifetime, although the work was eventually published following the success of Confederacy, and even made into a feature film. However, it is the Pulitzer Prize-winning novel, borrowing its title from a quotation by Jonathan Swift in his Thoughts on Various Subjects, Moral and Diverting, which has attracted most attention from both readers and critics, and which, according to Nevils, had 1.5 million copies in print by 2001 (214).
Set in New Orleans, the novel follows the adventures of one Ignatius Reilly, a misfit and slob of colossal proportions, who at the age of thirty is forced by his alcohol-prone mother to leave the sanctuary of his room for the first time. Ignatius is a true anti-hero who would prefer to be scribbling a denouncement of the modern world and praising the orderliness of the Middle Ages to finding a job and assuming an active role in the society he has been so busily condemning-which is exactly what he is forced to do following his mother's car accident and the ensuing law suit. Unequipped as he is with the social skills required to function normally in everyday situations, Ignatius is whisked from one disaster to another and finds himself in situations which are at the same time both hilarious and sad. While taking up a series of low-paid jobs, first as a clerical worker in a disreputable pants factory and finally as a street vendor selling hot dogs, he crosses paths with an impressively wide array of characters of all ages, social positions, and professions, who together form a rich collage of life in New Orleans in the 1960s.
The very fact that the novel is set in New Orleans, a city widely known for its Mardi Gras celebrations, combined with the humorous tone of the book, the bizarre mishaps and misadventures of the main character, and the general tendency of the characters to be clad in costumes, naturally brings to mind the carnival as an appropriate term embodying the spirit of the work. Such descriptions of the plot as "a carnival of modern life" epitomized by "a krewe of Mardi Gras dunces" are therefore quite common when discussing the novel, both in academic and more general circles 1 (Simon 1994: 99). Critics are correct to stress the predominately carnivalesque aspect of this work, but, as we shall see, the world of Ignatius Reilly is only similar to a carnival on the surface; at its core it remains fundamentally different from the true carnival culture of the Middle Ages.
In his seminal work Rabelais and His World, Mikhail Bakhtin discusses the concepts of folk humor, carnival culture, and grotesque realism by drawing heavily on François Rabelais' sixteenth-century tales of two giants, Gargantua and Pantagruel. In fact he goes as far as to call Rabelais' novel "the most festive work in world literature" (275). Rabelais was still very much in touch with the spirit of the carnival as an event implying freedom and festivity, in line with the long-standing carnival tradition in the area of Southern France where the writer lived, and his work therefore represents "perhaps the purest form of carnivalesque literature" (Horton 1999: 56). If the sixteenth century was truly "the summit in the history of laughter", as Bakhtin claims, it becomes necessary to evaluate the carnivalesque in Toole's work against the yardstick of the medieval carnival, complete with its festive activities, the concept of universal laughter, and the language of the marketplace (101).
According to Bakhtin, the carnival "celebrated temporary liberation from the prevailing truth and from the established order" (10). The strict hierarchical organization of society in the Middle Ages meant that people were well aware of the social caste they belonged to and of the strict rules governing it to which they had to conform. Throughout most of the year they lived in fear of authority and were weighed down by constant prohibitions and limitations. In order for such a system to remain sustainable, there had to be certain periods in the year when the barriers of social class were let down and free laughter reigned; these periods were feasts and the carnival. It was almost as if people in the Middle Ages had two separate lives: normal life and carnival life, and as if two aspects of the world existed side by side in their minds, that of seriousness and of laughter (Bakhtin 1984: 96).
In Confederacy, the normal existential mode of the main character is idleness, a state completely devoid of any rules and prohibitions and one in which he feels comfortable. Ironically, Ignatius sees the Middle Ages as a period when western man enjoyed "order, tranquility, unity and oneness with its True God" and blames the moral degradation (as he sees it) of the modern world on the loss of these ideals (Toole 1995: 25). The very order for which Ignatius longs is one in which he is unable to function, as even such a relatively simple task as finding a low-paid job and keeping it proves too much of a challenge. Having been forced by his mother to leave the confines of his room, Ignatius is thrust into the unsuspecting world and left with no other option but to join the carnival of life around him.
Owing to his uncouth appearance, his haughty and often abusive attitude to those around him, and his sophisticated speech, Ignatius is soon labeled a madman. Darlene, an employee at a strip club, the Night of Joy, is quick to point out that Ignatius looks like "a big crazyman" (20). Jones, the black, underpaid porter at the same night club, characterizes Ignatius as "one-hunner-percen freak" and adds that he "sound like a crazy white mother" when talking to one of his friends who works at Levy Pants, where Ignatius is getting ready to spark a revolution (115). A similar reaction is heard from a group of ladies exhibiting their still life paintings, which Ignatius, dressed in his hot dog vendor regalia, mercilessly criticizes: "He's mad. He's so common. So coarse" (210). Like the clowns and fools of the Middle Ages, who were not just actors playing their parts on the stage, but retained their role at all times and wherever they went, Ignatius also seems to trigger the same reaction wherever he goes; but unlike the clowns who provoked laughter, Ignatius only provokes scorn and contempt. Society does not embrace him, but views him as an unwanted element, one which is causing trouble and needs to be eliminated.
The greatest problem for Ignatius and the reason why others find it difficult, if not outright impossible, to accept him, is the fact that he perceives himself to be superior to others, and in doing so violates the fundamental principle of carnivalesque equality. In fact, nothing could be further from the true spirit of the medieval carnival. According to Bakhtin, whoever is addressing the crowd during a carnival "is one with the crowd; he does not present himself as its opponent, nor does he teach, accuse or intimidate it" (167). Ignatius does all of these things. When talking to the ladies exhibiting their paintings, he first accuses them of knowing nothing of art: "You women had better stop giving teas and brunches and settle down to the business of learning how to draw," and then tries to impress on them that they "need a course in botany. And perhaps geometry, too" (210). He frequently takes on a hostile and aggressive attitude, as he does with Myrna Minkoff, his old college acquaintance, whom he characterizes as "a loud, offensive maiden from the Bronx" and in every letter sent by her claims to find "some reference to the sleaziness of [her] personal life" (107, 157). Mockery in Toole's work is not universal, it is derisive and expresses contempt. In medieval folk culture, praise was ironic and ambivalent, it was always on the brink of abuse and vice versa, so much so that it "was impossible to draw the line between them" (Bakhtin 1984: 165). There is nothing ambivalent in Ignatius' abusive remarks; they are meant to establish his superiority and show others how inferior they are.
The humor of the work therefore does not stem from the characters' awareness of their position in the world and from their ability to accept this position and adopt a carefree attitude towards it, as was the case with those participating in a medieval carnival. According to Bakhtin, carnivalesque laughter was "directed at the whole world, at history, at all societies, at ideology" (84). Further on, the culture of folk humor embraced all people and belonged to everybody; laughter was irresistible and could not be confined. In fact, the laughter of the carnival was so powerful that it completely overcame the fear instilled by the authoritarian figures; it was a true "victory of laughter over fear" (Bakhtin 1984: 90). This is far removed from the many embarrassing situations in which Ignatius either insults or criticizes others, situations that do not bring a smile to his face, and much less to the faces of his interlocutors. Indeed, instead of reveling in his adventures, Ignatius tries at every step to establish his authority and superiority. The only party able to laugh is the reader, who is acutely aware of the breach between the image of himself that Ignatius maintains, that of an intellectually superior human being who is perpetually right, and the way in which he is perceived by everyone else: an arrogant and pitiful lunatic.
The many inconsistencies between what Ignatius proclaims to be his ideals and his actual words and actions are quite striking, and also a superb source of humor. McNeil goes so far as to claim that Ignatius "epitomizes the very perversions against which he rages" (35). While he proclaims himself to be "the avenging sword of taste and decency", Ignatius is actually the one who walks around dressed like "a performer of some sort", in his green cap, lumber jacket, and suede boots, and his personal hygiene standards are so low that he receives a complaint from the Board of Health a few days after assuming a job as a hot dog vendor (Toole 1995: 213, 17). Moreover, while raging against the perversions and excesses of the modern age and advocating medieval asceticism, he obviously does not see it unfit to wolf down boxes of wine cakes and guzzle enormous quantities of Dr. Nut, his favorite drink. The same holds true for his moral standards: as Ruppersburg noted, Ignatius is exactly the opposite of the moral superiority he preaches (119). His motives are usually selfish and he can only think about how a certain action is going to affect him without considering other people-on a whim, he writes an offensive letter to a business associate of Levy Pants, which results in a law suit and jeopardizes the existence of the company and the jobs of its employees. In short, Ignatius behaves like a spoiled child.
However, it is difficult to perceive Ignatius in an entirely negative light. Despite often behaving in an arrogant and obnoxious manner, there is something about him which also makes him pitiful. With all his education, including a Master's degree, Ignatius is still living with his mother in a small, run-down house in a suburb of New Orleans at the age of thirty. He obviously does not have any friends, with the exception of Myrna Minkoff, and even she remains absent until the very end of the novel, when she appears in a deus ex machina fashion and whisks her college friend away to New York. Ignatius' superiority is merely a defense mechanism, the only way he knows how to cope with reality and to maintain a relatively respectable self-image. After all, he is convinced that it cannot be his fault that the world fails to recognize his brilliance and that, as a result, he is unable to find suitable employment. It is the fault of everyone else, of the dunces who are in a confederacy against him.
At least part of Ignatius' problem seems to be that he is over-educated, a fact which feeds his feelings of superiority and alienates him from the people around him. This is well noted by George, a truant teenager running shady errands for the proprietress of the Night of Joy, in describing Ignatius: "You could tell by the way that he talked, though, that he had gone to school a long time. That was probably what was wrong with him. George had been wise enough to get out of school as soon as possible. He didn't want to end up like that guy" (243-244). The same idea is also expressed by Mr. Robichaux, the elderly suitor of Mrs. Reilly: "Maybe your boy went to school too long" (175). In the Middle Ages, the representatives of institutions such as the church or the university system embodied authority and absolute truths. They took themselves very seriously and refused to laugh, considering all those who opposed them to be enemies of the eternal truth. Of course, this attitude is completely out of keeping with the carnivalesque spirit of equality and relativity. The main message of the carnival is that there are no eternal and divine truths, that the old order must always die to make way for a new, better order. However, these officials "do not see themselves in the mirror of time, do not perceive their own origin, limitations and end" (Bakhtin 1984: 213). Ignatius is just such a defender of scholastic truths, insisting that everyone should treat him with due reverence simply based on the fact that he personifies the old, established traditions exemplified by the university, and in doing so fails to see his own transience and the transience of the truths he is defending. By not being able to laugh at himself, he actually becomes the dunce he accuses everyone else of being.
Relativity was an especially important aspect of the carnival. For a brief time the differences between superiors and inferiors were eliminated and all hierarchies were cancelled, all classes and ages were equal. Bakhtin further points out that the very essence of the carnival was not "in the subjective awareness but in the collective consciousness of [the people's] eternity, of their earthly, historic immortality as a people" (250). An individual's fate was unimportant, it was the people or the crowd as a whole that mattered. When a person embraces the fact that he is just a minute element in the constant cycle of rebirth and regeneration, his own life is put into perspective and he becomes aware that he can take himself lightly, because ultimately an individual does not matter, it is the people collectively who matter. Ignatius is incapable of perceiving himself in this way. His everyday worries and frustrations occupy him so much that he is unable to see beyond them, unable to comprehend the laughable minuteness of his own existence and the relative unimportance of his life. The only relativity he does grasp is that of the upwards and downwards cycles of his own fate. True to his medievalist background, Ignatius believes "that a blind goddess spins us on a wheel" and that "our luck comes in cycles" (27). Upon finding out that he would have to get a job to pay off the law suit following his mother's car accident, he reflects: "Oh, what low joke was Fortuna playing on him now? Arrest, accident, job. Where would this dreadful cycle ever end?" (42). Despite acknowledging that he is for the present moment caught in a bad cycle which will sooner or later pass and be replaced by a good cycle, Ignatius is incapable of recognizing the relative insignificance of the events which befall him in the spirit of folk culture and humor.
The carnivalesque relativity and ambivalence are also reflected on other levels. Much like praise has always been ambivalent and on the brink of abuse, so a genius has always been on the brink of becoming a fool. Indeed, there is a thin line between genius and insanity. Ignatius is so well educated that he considers himself to be a genius, while other people consider him mad. Bakhtin explains that one of the basic elements of folk culture was the reversal of hierarchic roles-at a carnival the jester was proclaimed king and "a clownish abbot, bishop, or archbishop was elected" (81). The reversal of roles is in fact an example of degradation, but degradation in this case does not mean something negative; on the contrary, it is a chance for rebirth, for a new beginning. The participants at a medieval carnival were aware that in order for something new to be born, something old must die. The decrowning of a king was therefore a joyous event, and even the person who was degraded or dethroned had no other option but to laugh along, embracing the universal spirit of regeneration. Ignatius, of course, fails to see this ambivalence and stubbornly persists in his role of a learned scholar, he refuses to cast his scholarly gown aside and become a part of the crowd. Until the very end, Ignatius takes himself and his role in life seriously.
Degradation also refers to the bodily level and should be taken quite literally, according to Bakhtin. It means coming down to earth; and earth is the element which swallows up and gives birth at the same time. But to degrade also means to deal with the lower stratum of the body, relating to acts of defecation, copulation, conception, pregnancy, and birth. The destructive principle is closely followed by the regenerative one (21). The carnivalesque body is always exaggerated. The same seems to hold true for Ignatius. His body is not only physically exaggerated because of his obesity; there is also a great deal of talk about various bodily functions because of his preoccupation with his body. Ignatius explains his health problems to anyone who will listen, including his employer at Paradise Vendors, Mr. Clyde. "My digestive system has almost ceased functioning altogether. Some tissue has perhaps grown over my pyloric valve, sealing it forever," he tells him (181). Ignatius exaggerates any health problem just to convince his listener of his suffering. When walking down the street he complains to his mother, "Will you please slow down a bit? I think I'm having a heart murmur." (7) However, in this case the body is presented as a single, self-sufficient entity and whatever happens within it concerns it alone; it is not the grotesque body of the carnival, which is "cosmic and universal", constantly being renewed and never finished (Bakhtin 1984: 318). For the grotesque body, disease and death represented a chance for a new birth, but in the case of Ignatius this universal and positive aspect is lost-his health problems only pose a threat and danger.
Ignatius does not think only about his health, he also constantly thinks about food. One of the first scenes of the book features Mrs. Reilly buying cakes for her son at a department store. And when working as a hot dog vendor some years later, he manages to consume most of the products himself, which ultimately leads to his weight increasing even more and, of course, the dissatisfaction of his employer. Bakhtin stresses that feasting was part of every folk carnival and that it was included in all comic scenes. However, folk feasting in the Middle Ages was "a banquet for all the world in which all take part", it was not confined to the house or to private rooms, but instead happened in a public place such as the marketplace (302). Moreover, feasting was a joyful and triumphant event, it was an occasion where man "triumphs over the world, devours it without being devoured himself" (Bakhtin 1984: 281). There is nothing particularly joyful and triumphant in the way Ignatius devours two dozen jelly doughnuts, such that the cake box looks "as if it had been subjected to unusual abuse during someone's attempt to take all of the doughnuts at once" (Toole 1995: 35). Ignatius' bingeing sessions have nothing in common with the universal and merry character of medieval feasts; in fact, there is something infinitely sad about them, as Ignatius gorges on food in a futile attempt to overcome his loneliness, sadness, and sexual frustration.
Excessive eating also causes him to display other bodily actions such as belching and emitting gas. One such incident takes place when Ignatius offers hot dogs to ladies exhibiting their artworks and manages to belch violently during the uncomfortable silence that follows. When it suits him, Ignatius claims in a medieval spirit that the body with all its smells and sounds is something completely natural, responding to his mother when she is appalled by the smell of his room: "Well, what do you expect? The human body, when confined, produces certain odors which we tend to forget in this age of deodorants and other perversions" (41). Even Bakhtin tells us that images of food and drink are closely related to those of the grotesque body and procreation (279). However, this is yet another inconsistency between Ignatius' words and deeds-in reality he is terrified of any physical contact and nothing scares him more than sexuality. This is completely out of keeping with the sexual role of the body in carnival culture, where sexuality represents fertility, growth, and new birth. During the carnival, the sexual aspects of the body must not be hidden and concealed but rather emphasized and honored. Ignatius cannot even stand the thought of touching another person, much less engaging in sex. When his mother informs him that her elbow has to be massaged because of her arthritis, Ignatius replies, "I hope you don't want me to do that. You know how I feel about touching other people" (9). Later Mrs. Reilly suggests that her son should settle down with Myrna and have a baby or two, and Ignatius tells her, "Do I believe that such obscenity and filth is coming from the lips of my own mother?" (46). In failing to embrace sexuality, Ignatius fails to appreciate the very cornerstone of folk culture: regeneration, new birth, and growth.
Finally, we have to discuss perhaps the single greatest source of humor in Confederacy: the language spoken by the different characters. Toole was a master of reproducing local speech with all its colloquialisms and registers, ranging from the lingo of the black porter Jones, to the New Orleans dialect represented by Mrs. Reilly and her friend Santa Battaglia. In sharp contrast to them all is the academic and highly stylized speech of Ignatius. His manner of speaking is so grandiose that others often have a hard time understanding him. When organizing a protest rally at the Levy Pants factory, the black workers do not quite follow Ignatius' address: "Friends! /…/ At last the day is ours. I hope that you have all remembered to bring your engines of war." From the group around the cutting table there issued neither confirmation nor denial. "I mean the sticks and chains and clubs and so forth." Giggling in chorus, the workers waved some fence posts, broomsticks, bicycle chains, and bricks. "My God! You have really assembled a rather formidable and diffuse armory." (118) By maintaining a formal distance with his educated manner of speech, Ignatius once again asserts his superiority and places himself above his interlocutors. Such an attitude goes against the type of communication established during the period of the carnival, which was based on familiarity and permitted two people who had established friendly relations to address each other informally, and use abuses and mockery affectionately (Bakhtin 1984: 16). There is nothing affectionate about the abuses Ignatius unleashes on those around him; on the contrary, he does his best to not become friendly and familiar with others.
The many different variations of New Orleans vernacular as spoken by the characters from different social backgrounds undoubtedly prove to be one of the greatest challenges also for translators. The Slovene translation of Toole's novel was published in 2007, twenty-seven years after Confederacy was first published in the USA, and joins a long list of translations of the novel into other languages 2 . It seems almost impossible to capture all the intricate nuances of the New Orleans dialect spoken by the characters from different ethnic groups (for example the black porter Jones and the Latino waitress selling drinks during the last episode in the Night of Joy) and from different social classes (the speech of upper-middle class Levys differs from the speech of lower-middle class whites, such as Patrolman Mancuso and Santa Battaglia). For historic reasons and due to Slovenia's relatively mono-ethnic situation, it is hard to transpose the dialects spoken by ethnic groups and social classes in New Orleans into Slovene. Of course, different dialects could be used, but it would seem inappropriate to assign the various characters different Slovene dialects, for example the dialect of the Gorenjska, Primorska, and Štajerska regions, not only because Confederacy has a strong local character and all the people in it come from one city, but also because such a decision would leave readers wondering what characters from different regions of Slovenia were doing in New Orleans in the 1960s. There is also the problem of translating Ignatius' academic diction, because in Slovene, despite the many foreignisms that normally permeate academic and scientific papers, the distinction between general written language and the language used by scholars and scientists is not as pronounced as in English, where numerous words of Latin origin can be effectively used to create the effect of scholarly discourse.
Translator Nuša Rozman solved these challenges well and introduced several effective solutions. She chose the neutral, written language as the basis of the translation (the characters do not speak any particular Slovene dialect), and sprinkled it with colloquial and informal expressions to an appropriate degree, depending on the speaker. For example, Mrs. Reilly and her friend Santa Battaglia, representatives of the lower-middle class of whites, speak a relatively neutral language in which indices such as short infinitive forms and spoken words are used every now and then to point to the unofficial character of their speech. On the other hand, the language of Jones, the black porter, includes more slang words, curse words, and expressions that could be characterized as 'low colloquial', emphasizing the fact that he comes from a lower social class. However, in keeping with an established tradition in Slovene translated literature 3 , Jones' speech (for all its slang expressions and colloquial nature) is still written in a way that is grammatically correct-that is, the spelling is correct and the words are not contracted to reflect how characters from a lower social class truly speak, as is the case in Toole's novel. Here is an example of Santa Battaglia's speech in the original and in Slovene translation:
"Don't be ashamed, babe. It ain't your fault you've got a brat on your hands," Santa grunted. "What you need is a man in that house, girl, to set that boy straight. I'm gonna find that nice old man ast about you." (150)
"Nič naj ti ne bo nerodno, mila moja. Saj nisi sama kriva, če imaš razvajenega otročeta na skrbi. Ti rabiš moškega pri bajti, punca, da bo spravil v red tega poba. Našla bom tistega prijetnega starega gospoda, ki je spraševal zate." (231)
Alongside the established contractions used to transcribe spoken English, such as 'ain't' and 'gonna', Toole also uses omissions ("nice old man [who] ast about you") and introduces new contractions ('ast' instead of 'asked') in order to remain as true to the characters' vernacular as possible. While the Slovene translator opted for a perfectly valid and effective solution by using selected jocular and colloquial expressions ('otroče' and 'pob' for 'brat' and 'boy', respectively, and the word 'bajta', a colloquial expression for 'house') to reflect the overall tone of the text, the other option would be to attempt transcribing contracted forms, for example 'nč' for 'nič' and 'maš' for 'imaš', or forms that would otherwise reflect the vernacular used ('sej' for 'saj', 'spravu u red' for 'spravil v red', etc.). This becomes even more critical in Jones' speech:
"Since we cuttin off the orphan chariddy and we not extendin it to the porter help, maybe we oughta give a little to a po, strugglin gal gotta hustle on commission. Hey!" (147)
"Če smo že nehali dajat vbogajme sirotam, ne da bi na ta račun malo pomagali čistilcu, bi mogoče lahko kaj padlo ubogi punci na začetku kariere, ki mora gurat za procente, mater duš!" (226)
As can be seen from the above quotation, Jones' speech is very colloquial indeed. There is a marked difference in register between the original and the Slovene translation, with the former abounding in colloquial expressions, contracted words and omissions (almost every word is transcribed to reflect the actual vernacular of the black porter) and the latter predominantly using standard literary language with some colloquial expressions ('gurat' for 'hustle' and 'mater duš!', an exclamation used to substitute for 'Hey!').
Unlike with Santa Battaglia, where the vernacular is not as pronounced, translation of Jones' speech would benefit from using selected contracted words.
More radical attempts to transcribe real spoken language in original Slovene literature have appeared more frequently during the last decade or so 4 , but translators still seem to be hesitant about transcribing the vernacular in a way that would reflect the actual speech. In order to contrast the speech of such characters as Jones and Battaglia with the highly educated language spoken by Ignatius, Nuša Rozman has used literary expressions and foreignisms, but the difference between the general language of the narration and Ignatius' stylized speech is not as pronounced as it is in the English original, which is mainly due to the previously mentioned characteristics of written Slovene. Nevertheless, the translation captures the different registers, and, most importantly, retains the humor originating from the language used in the original text. | 2018-12-11T14:15:12.633Z | 2010-12-31T00:00:00.000 | {
"year": 2010,
"sha1": "00faf9688c3f73533d437db874b905c8ba8b1f9d",
"oa_license": "CCBYSA",
"oa_url": "https://revije.ff.uni-lj.si/ActaNeophilologica/article/download/2839/2502",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "00faf9688c3f73533d437db874b905c8ba8b1f9d",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
44008124 | pes2o/s2orc | v3-fos-license | Bacterial-Degradation of Pesticides Residue in Vegetables During Fermentation
The potential of isolated lactic acid bacteria to degrade agricultural pesticide residues during fermentation was investigated in the present study. The selected pesticides were malathion and diazinon. The highest populations of bacteria, i.e. Lactobacilli, Streptococci, Leuconostoc and Pediococci, were observed in the first 24 h of fermentation. After this period the cfu/mL decreased remarkably. The pH of the product reached 4.0 after 24 h of fermentation and then decreased slowly, and did not exceed 3.6 after 72 h. The mixture of lactic acid bacteria showed a good capability of decreasing the malathion concentration from 3.5 to 0.5 mg kg⁻¹ after 48 h of fermentation. Adapted microorganisms were able to grow in culture media containing malathion and diazinon. Finally, the isolated lactic acid bacteria were considerably more capable of decomposing the pesticides and reducing the pH when applied in mixed culture.
INTRODUCTION
The application of large quantities of agricultural pesticides in rural areas to increase productivity and yield, to protect agricultural crops from pests, and to prevent product losses due to insect and bacterial contamination is a common practice. Resistance and mutation of some pests to chemicals are the causes of using larger quantities of pesticide in developing countries. According to their rate of degradation, pesticides can be categorized as sensitive or tolerant to decomposition. Their destruction may occur under exposure to normal atmospheric conditions or through the biological activity of soil microorganisms such as Pseudomonas, Flavobacterium, Alcaligenes, Rhodococcus, Gliocladium, Trichoderma and Penicillium. These microorganisms use the pesticides as their carbon and energy sources 1 . Because agricultural pesticides are mostly artificial synthetic compounds with no identical counterparts in nature, they are substantially resistant to degradation under natural conditions. In many cases, the stability of these pesticides against biological destruction arises from their insolubility in water, as the microorganisms are incapable of decomposing such materials. Malathion, carbamate, pyrethroid, diazinon, dichloropicolinic acid and phenylalkanoic pesticides are sensitive to the hydrolytic activity of microbial enzymes. Extracellular enzymes of bacteria are capable of cleaving a broad range of chemical pesticides. Apart from the inherent structure of the pesticides, their volatility and their ability to adsorb to soil compounds are also important factors affecting sensitivity to biological cleavage. These factors themselves depend on temperature, light, soil moisture and pH. The more volatile a pesticide, the more of it is transferred to the atmosphere. Higher soil moisture eases the degradation of water-soluble pesticides by microorganisms while reducing their volatility. Some pesticides, such as diazinon, are very sensitive to low pH, and their degradation in this range occurs rapidly 2-9 . Because organophosphorus compounds are decomposed faster and more easily than organochlorine compounds, their application has been increasing steadily. Consumption of fruits and vegetables containing organochlorine residues causes undesirable health disorders, especially of the nervous system 5,8,10 . This danger is more acute in Iran because of the mistaken belief that more excessive application of pesticides leads to more efficient destruction of pests.
Because of the important role of microorganisms in the degradation of pesticides, numerous studies have been conducted on the qualitative and quantitative aspects of this phenomenon. Navab et al. 11 studied the effects of Pseudomonas spp. isolated from soil on DDT, DDD, DDF and HCH under laboratory conditions; the bacteria were able to partially degrade the pesticides. It has been reported that Flavobacterium and Sphingomonas paucimobilis spp. degraded some types of pesticides during 48 h of fermentation 12 . Comprehensive research by Peric et al. 13 on the biological decomposability of pesticides revealed that DDT (mainly) and HCH (partially) were degraded by species of Debaryomyces, Micrococcus and Lactobacillus, with Lactobacilli showing the lowest effect. Peric et al. 13 also observed that adding the mentioned microbial mix to fermented sausage led to a significant decrease in HCH concentration. Similar degradation of DDT and DDE in Roquefort blue cheese using different species of gram-positive Lactobacilli, Streptococci and yeasts was reported by Ledford and Chen 14 and Mirna and Coretti 15 . Therefore, the isolation, identification and screening of microorganisms capable of degrading pesticide residues in food materials are important issues. No research has been done on the isolation and identification of indigenous pesticide-decomposing microflora in vegetable materials. Therefore, this study investigates the effects of indigenous microflora isolated from Iranian vegetable sources on the degradation of pesticide residues during the fermentation process.
EXPERIMENTAL
In order to produce vegetables with a defined pesticide residue, to broadcast precise quantities of pesticide and to conduct systematic experiments, an experimental farm with a surface area of 1000 m² was prepared. The land preparation stages, namely land excavation, leveling, grading, primary and secondary tillage operations, sowing, irrigation and cultural practices, were carried out.
Types of pesticides used in cultivated vegetables: Diazinon and malathion, the most widely used agricultural pesticides in Iran, were selected for this study (the pesticides were obtained from the Plant Production Department, Ministry of Agriculture, Tehran, Iran). The pesticides were sprayed on the vegetables at a concentration of 0.002 g L⁻¹. The cultivated vegetables were tomato (Super Queen), celery (Tail Utah), green bean (Sun Ray), pea (Green Arrow), cabbage (Space Star) and cauliflower (Globe Master). The original seeds were obtained from the Seed and Plant Production Institute of Iran.
Preparation of vegetable samples: Cultivated vegetables were harvested at the maturation stage and immediately transferred to the laboratory. External waste and damaged leaves were removed; the vegetables were then piled, trimmed, washed and cut to the desired sizes. Cut vegetables were mixed together, filled into glass jars and covered with 2 % (w/v) hot brine (95 °C). The aim of adding hot brine was to destroy heat-labile, anaerobic, non-spore-forming microorganisms such as coliforms, to improve the colour and texture of the final product, to accelerate the acidification rate and to improve the nutritional properties of the final product. Finally, about 4 mL of vinegar was added to the top of the samples in order to prevent the activity of unwanted microflora. The mouths of the jars were covered with nylon film of low permeability to water vapour and oxygen, tied with thread and kept at room temperature. The fermentation process started immediately. In order to isolate and identify the microorganisms involved in the fermentation of the mixed vegetables at different stages, sampling was conducted every 12 h. Fermentation was continued until the pH of the product reached about 4.0 and was stopped by opening the jars. The vegetables were removed from the jars, washed and stored at -4 °C until the pesticide detection experiments were done.
Culture media used for the isolation, screening and enumeration of the microorganisms: MRS broth/agar and LSDM broth/agar media (Merck, Darmstadt, Germany) were used for the isolation, partial identification, screening and enumeration of lactic acid bacteria. The media were compounded, heated in a flask and boiled in a thermostatically controlled heater until clear; the pH of each medium was then adjusted to 6.20. The media solutions were distributed into 250 mL flasks, which were autoclaved at 121 °C for 15 min and kept refrigerated until use. In order to evaluate the fermentation rate of different carbohydrates, MRS broth without glucose and beef extract, containing 0.05 % chlorophenol red, was used as a base medium. The ingredients of the medium were obtained separately from Merck (Darmstadt, Germany) and mixed carefully under controlled conditions. All carbohydrates were sterilized by membrane filtration and added to the basal media to a final concentration of 1 %. After inoculation of the microflora into the media, incubation was carried out at 37 °C for 7 d and colour variations within this period were recorded.
Isolation and identification of microorganisms involved in fermentation of vegetables: Fermentation of the vegetables started a few hours after the jars were sealed. Lactic acid bacteria were isolated from the fermenting vegetables using MRS agar and LSDM agar at 12, 24 and 48 h of fermentation. The contents of the jars were mixed thoroughly and 10 mL of brine was withdrawn under sterile conditions using a syringe. One millilitre of the fermenting brine was serially diluted in saline and plated on MRS agar and LSDM agar. The streaked plates were incubated at 30 °C for 72 h, after which the bacterial colonies were counted. Individual colonies were isolated on the basis of colony morphology, gram reaction, cell morphology, catalase production, presence of spores, and aerobic and anaerobic growth. Isolated cultures were purified by repeated streaking on MRS agar. The microflora at 12, 24 and 48 h were isolated and examined for gram reaction, morphology, catalase production, presence of spores and growth under aerobic and anaerobic conditions. The general key used for the identification of gram-positive bacteria followed the procedure given in Bergey's Manual of Determinative Bacteriology 16 . The general morphological and biochemical characteristics of the lactic acid bacteria were determined according to the procedure of Sharpe 17 .
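As a side note on the enumeration arithmetic behind plate counting, the cfu/mL of the undiluted brine is obtained by dividing the colony count by the product of the dilution factor and the plated volume. The following minimal Python sketch is our illustration, not part of the original protocol; the colony count and the 0.1 mL plating volume are hypothetical, since the paper does not state them.

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float = 0.1) -> float:
    """Convert a plate count to cfu/mL of the undiluted brine.

    colonies         -- colonies counted on the plate (ideally 30-300)
    dilution_factor  -- e.g. 1e-5 for the fifth ten-fold dilution
    plated_volume_ml -- volume spread on the plate (assumed here, not stated in the paper)
    """
    return colonies / (dilution_factor * plated_volume_ml)

# Hypothetical example: 150 colonies on the 10^-5 plate, 0.1 mL plated
print(f"{cfu_per_ml(150, 1e-5):.2e} cfu/mL")  # -> 1.50e+08 cfu/mL
```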
In the course of identifying the microorganisms, in order to find those capable of tolerating the presence of the two pesticides malathion and diazinon in MRS and LSDM media, the amount of carbohydrate in the media was reduced and the same quantity of malathion and diazinon was added. The media were inoculated with the desired purified cultures. Finally, the isolates were washed under aseptic conditions and centrifuged at 3000 g for 10 min to separate the cells from the culture medium. The recovered cells were washed with sterile distilled water several times and kept in a 1 % sterilized brine solution for further studies.
The identification procedure was based on classical biochemical and morphological tests. Biochemical tests included the fermentation patterns of sugars (arabinose, fructose, esculin, glucose, galactose, lactose, mannose, maltose, mannitol, rhamnose, raffinose, ribose, salicin, sucrose, sorbitol and xylose), the capability of hydrolyzing casein and gelatin, indole production, ammonia production from arginine, catalase and pseudocatalase tests, gas formation from glucose consumption, VP and MR tests, homofermentative or heterofermentative character, reaction to molecular oxygen, organic acid production under aerobiosis and anaerobiosis, growth at 15, 30 and 45 °C, survival at 60 °C for 0.5 h, reaction to a 0.1 % MB solution, growth at pH 4.0 and 9.6, growth in 4 and 10 % brine solutions, and cell motility. Morphological tests comprised cell appearance, cell arrangement, spore production and gram staining reaction [17][18][19] .
Detecting pesticide residues in the vegetable samples: A liquid-liquid extraction method with acetone and dichloromethane as solvents was used for the extraction of malathion and diazinon from the vegetable samples. Quantitative analysis of the pesticides was carried out by GC (Shimadzu 2100, Japan) with an NPD detector and a DB5 column 20 .
Replications: Experiments were performed three times in duplicate and the means of the results were taken as the final data.
RESULTS AND DISCUSSION
Enumeration of vegetable microflora during the fermentation process: Extra care must be taken when comparing the results, as in vitro studies do not always reflect the real situation in food products. This is due to the fact that the biodegradation process may be affected by a number of factors, such as the interactions between microorganisms, the microbial concentration of the medium, whether the medium is liquid or solid, and the microbial growth conditions of temperature and pH.
Table-1 shows the total counts of lactic acid bacteria on MRS and LSDM agar, along with the pH drop kinetics, at 12 h intervals of vegetable fermentation. The population of microorganisms increased during the first 24 h of fermentation, by which point the pH had reached 4.2; thereafter a sudden decrease of the population was observed. According to the Table, the population of the Lactobacillus and Streptococcus genera (which were enumerated on LSDM media) was considerably higher than that of the other genera of lactic acid bacteria. This can be attributed to the naturally higher numbers of bacteria belonging to these two genera in the vegetable mix. Table-6 shows the effect of the indigenous lactic acid bacteria in the vegetable mix on the degradation of malathion and diazinon after 48 h of fermentation. According to Table-6, the initial concentrations of malathion and diazinon in the vegetables (unprocessed sample) were 3.5 and 0.6 mg kg⁻¹, respectively, implying greater penetration of the first pesticide into the plant tissues during the spraying stage. After 48 h of fermentation, the concentration of malathion had decreased considerably, reaching 0.5 mg kg⁻¹, whereas the diazinon concentration decreased by only about 0.1 mg kg⁻¹. The remarkable degradation of malathion during the fermentation could be attributed 9 to its instability at low pH, regardless of bacterial decomposition.
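The reported concentration drops can be summarized by an apparent first-order rate constant, k = ln(C0/Ct)/t, with half-life t½ = ln 2 / k. The sketch below applies this standard model to the paper's endpoint values; the first-order assumption is ours, not the authors', and only two time points are used, so the constants are rough indicators.

```python
import math

def first_order_k(c0: float, ct: float, t_h: float) -> float:
    """Apparent first-order rate constant (h^-1) from initial and final concentrations."""
    return math.log(c0 / ct) / t_h

# Endpoint values from the paper: malathion 3.5 -> 0.5 mg/kg,
# diazinon 0.6 -> 0.5 mg/kg, both over 48 h of fermentation
for name, c0, ct in [("malathion", 3.5, 0.5), ("diazinon", 0.6, 0.5)]:
    k = first_order_k(c0, ct, 48.0)
    print(f"{name}: k = {k:.4f} h^-1, apparent half-life = {math.log(2) / k:.1f} h")
# -> malathion: k = 0.0405 h^-1 (half-life ~17 h); diazinon: k = 0.0038 h^-1 (half-life ~183 h)
```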
Conclusion
According to the results obtained from this research, the indigenous microflora of Iranian vegetables, which consists substantially of different species of lactic acid bacteria, was capable of degrading malathion and diazinon, the two pesticides commonly used in Iran. Besides enhancing the hygienic value of the vegetable products, fermentation led to the formation of a novel fermented vegetable with good organoleptic characteristics. Among the isolated lactic acid bacteria, L. plantarum was a probiotic bacterium. Because the mixed microflora was able to grow quickly down to about pH 4.2 (Table-1), special attention should be paid to whether this bacterium tolerates low pH ranges. This point is very important because probiotics are generally sensitive to low pH and highly acidic media. The lactic acid microflora showed synergistic relationships among the species, because single cultures inoculated separately into the vegetables were not able to reduce the medium pH as much as the mixed cultures (data not shown). After three transfers in MRS broth containing malathion and diazinon instead of glucose, the adapted microflora yielded tolerant microorganisms that were highly capable of growing and decreasing the pH of the media. Therefore, producing such a mixed culture in the form of a lyophilized starter culture for the production of fermented vegetable products could be an important objective. Determination of the optimum fermentation time is also an important issue, because as the fermentation time increases, both the amount of pesticides and the viable counts of the bacteria decrease, and the second fact is not favourable. Because the number of viable cells decreases remarkably during the period of 24 to 48 h after the start of fermentation (Table-1), it is recommended that the optimum fermentation time be sought within the 24 to 48 h window.
Complementary research should be done on the degrading effects of fermentation on other commonly used chemical pesticides. Moreover, it would be interesting to determine the contribution of each species to the degradation of the pesticides during the fermentation period. Finally, the compounds produced from the degraded pesticides must be identified and evaluated from the safety as well as the sensory points of view. | 2017-08-27T06:58:45.986Z | 2011-01-21T00:00:00.000 | {
"year": 2011,
"sha1": "6b2f2705e200d4a4a72e4effe7972983287af6a1",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/13030",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "525a98e24a84e3a6cbec80579564152b42bd0889",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
225420006 | pes2o/s2orc | v3-fos-license | Pattern of congenital heart disease among children presenting to the Uganda Heart Institute, Mulago Hospital: a 7-year review
Abstract Background Congenital heart disease (CHD) is the most common congenital anomaly in children. Over half of the deaths due to CHD occur in the neonatal period. Most children with unrepaired complex heart lesions do not live to celebrate their first birthday. We describe the spectrum of congenital heart disease in Uganda. Methods We retrospectively reviewed the data of children with CHD who presented to the Uganda Heart Institute (UHI), Mulago Hospital Complex from 2007 to 2014. Results A total of 4621 children were seen at the UHI during the study period. Of these, 3526 (76.3%) had CHD; 1941 (55%) were females. Isolated ventricular septal defect (VSD) was the most common CHD, seen in 923 (27.2%) children, followed by patent ductus arteriosus (PDA), 760 (22%), and atrial septal defects (ASD), 332 (9.4%). Tetralogy of Fallot (TOF) and truncus arteriosus were the most common cyanotic heart defects (7% and 5% respectively). Dysmorphic features were diagnosed in 185 children, of whom 61 underwent genetic testing (Down syndrome, n=24; 22q11.2 deletion syndrome, n=10). Children with confirmed 22q11.2 deletion had conotruncal abnormalities. Conclusion Isolated VSD and tetralogy of Fallot are the most common acyanotic and cyanotic congenital heart defects. We report an unusually high occurrence of truncus arteriosus.
Introduction
Congenital heart disease (CHD) is the most prevalent congenital abnormality and a leading cause of childhood mortality 1 . The estimated prevalence of CHD is 8-12/1000 live births [2][3][4] . Without appropriate treatment, about one in three children born with significant congenital heart disease will die within the first month of life 5 . Unrepaired congenital heart disease is a major cause of heart failure among children in Africa 6 . We believe that this study is the largest in Africa to report on congenital heart disease amongst children. The aim of this study was to describe the spectrum of congenital heart disease among children who presented to the Uganda Heart Institute.
Methods
We retrospectively reviewed 3526 echocardiography reports of patients with CHD who presented to the Uganda Heart Institute (UHI) between 2007 and 2014. The registry for congenital heart disease was established in 2007 by the pediatric cardiology division as part of the Ministry of Health reporting system and serves as the basis for this study. The Makerere University School of Medicine Ethics and Research Committee approved the genetics component of the study.
Study site
The Institute is a 40-bed facility located within the Mulago Hospital complex and has been in existence for 28 years. The UHI has a fully functional operating theatre in addition to a catheterization laboratory. It performs paediatric and adult open-heart surgeries and diagnostic and interventional catheterization procedures. Outpatient clinics run on a daily basis. On average, 60 to 80 paediatric open-heart surgeries are conducted per year. A paediatric cardiology fellowship program has run since 2010.
Study procedure
Detailed transthoracic echocardiography was performed and interpreted by one of two pediatric cardiologists (PL/SL) using standard guidelines 7 , with a Sonos 5500 (Philips, Best, Netherlands) and a Philips IE 33 (Philips, Best, Netherlands) for the periods 2007 to 2011 and 2012 to 2014, respectively. Difficult cases were discussed and a final diagnosis made by consensus. Digital archiving enabled cases to be reviewed and discussed with colleagues (CS) from other centers, and re-evaluation of cases at follow-up improved diagnostic accuracy. Severe CHD was defined as complex heart abnormalities that were life-threatening, for example heterotaxy syndromes, anomalous origin of the left coronary artery from the pulmonary artery (ALCAPA) and univentricular heart. Patient demographics including age, sex, weight and type of congenital heart defect were entered into an Excel spreadsheet and analyzed using SPSS version 16. Pulse oximetry readings were available for only a small subset of children and were not included in the analysis.
Syndromic children
If a child had an obvious syndromic condition on clinical examination, they were sent for genetics assessment. Genetic testing was done by a highly experienced genetics specialist from Washington DC, United States, together with the Ugandan team and the paediatric cardiologist (CS), during some missions. The genetic diagnosis was based on the clinical presentation of the common genetic syndromes with their associated complications and was matched with the echocardiographic diagnosis. We secured institutional ethical approval from the Makerere School of Medicine as well as from the National Institutes of Health (NIH) laboratory in Washington DC, United States, to run detailed microarray and DNA sequencing for the genetics study, but blood samples were not collected or sent for analysis; the team relied on the clinical diagnosis of a highly skilled genetics specialist. Syndromes such as Down syndrome, 22q11 deletion syndrome, Holt-Oram syndrome and Williams syndrome were diagnosed. Children with Holt-Oram syndrome had upper-limb abnormalities in addition to the CHD. A genetic syndrome was diagnosed on the basis of clinical evaluation; the results were given to the family, which was counselled by our team, a geneticist and a genetic counselor.
Congenital Rubella Syndrome
Congenital rubella syndrome (CRS) was diagnosed on the basis of the clinical features of cataracts, microphthalmia, microcephaly and hearing impairment. Retrospective data were obtained from the World Health Organization Congenital Rubella Syndrome (WHO-CRS) Surveillance conducted at the UHI in 2014. CRS was confirmed by blood samples obtained from the child and the mother. Serum was tested at the Uganda Virus Research Institute (UVRI) for evidence of active rubella virus infection through the identification of rubella-specific IgM antibodies. UVRI is a government parastatal certified by the American College of Pathologists; it conducts research, surveillance and diagnostics linked to viral etiology and provides expert advice 8 .
Results
Overall, 4621 charts were reviewed during the study period. A diagnosis of congenital heart disease was made in 3526 (76.3%) children, the majority of whom, 1941 (55%), were females. Most patients presented during infancy (range 1 day to 18 years). VSD was the most prevalent defect (921 cases), of which 702 children (76%) had perimembranous VSD and 79 (8.5%) had muscular VSD. PDA was the second most commonly occurring defect, seen in 760 cases (22%). ASD was present in 332 children (9.4%), with the ostium secundum type occurring in 293 (88%), followed by the sinus venosus defect in 23 (6.9%). Tetralogy of Fallot and persistent truncus arteriosus were the most common cyanotic heart diseases. Some defects occurred in small percentages and are reported below. Children with syndromes in the study were examined for specific genetic abnormalities. Among the cyanotic defects, tetralogy of Fallot (TOF) was the most common, with 247 cases (7%) and a male preponderance.
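As a side note on the reporting above, the quoted percentages can be accompanied by confidence intervals when re-used in secondary analyses. The sketch below applies the standard Wilson score interval to the counts reported in this study; this is our illustration, not part of the original analysis, which used SPSS.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Counts as reported in the Results above
for label, k, n in [("CHD among all children seen", 3526, 4621),
                    ("VSD among CHD", 921, 3526),
                    ("TOF among CHD", 247, 3526)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```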
Patients with dysmorphic features
One hundred and eighty-five children had dysmorphic features. The majority, 143 (76%), had a phenotypic diagnosis of Trisomy 21 (Down syndrome), with endocardial cushion defects as the most likely cardiac diagnosis. Sixty-one of these children underwent genetic testing. Congenital rubella syndrome was present in 15 (8%) of cases, from data extracted from the WHO-UHI-CRS Surveillance in 2014; of these, 88% had CHD, 68% had ocular defects (cataracts) and 20% had hearing problems. PDA was the most common CHD (77%).
Congenital heart defects and age at diagnosis
Most complex heart defects, such as univentricular heart defects, D-TGA, pulmonary atresia, tricuspid atresia, total anomalous pulmonary venous connections and Ebstein's anomaly, were rarely diagnosed after the first year of life. Notably, no children with right isomerism/heterotaxy were seen during the study period. VSD, ASD, PDA, TOF, pulmonary stenosis and coarctation of the aorta continued to present in children older than 5 years. Defects not commonly diagnosed after 6 months included truncus arteriosus, DORV, tricuspid atresia, TGA, anomalous pulmonary venous return and hypoplastic left heart syndrome.
Discussion
This was a retrospective study with a large number of patients, making it broadly representative of the entire country. Ventricular septal defects (VSD) were the most common congenital heart defects (26%), with the membranous type occurring at high frequency. Our findings are similar to studies reported elsewhere 4,[9][10][11] . Ekure and colleagues reported VSDs in 25% of Nigerian children 12 ; however, the higher prevalences stated in some studies, notably in Cameroon, included adults, attesting to the fact that VSD patients survive into adulthood 6,13 . VSD is one of the defects that were diagnosed up to 18 years of age. Patent ductus arteriosus was the second most common defect. This may be due to an increased number of premature deliveries, genetic syndromes, maternal rubella infection and peripartum hypoxia 12,13 . PDA is highly prevalent in extremely preterm babies with a birth weight of less than 1 kg 14 . Premature deliveries in Uganda stem directly from multiple factors, including low and late antenatal attendance of the recommended visits, so that mothers tend to miss drugs like Fansidar that are prophylactic for malaria. Premature deliveries have been linked in some studies to placental malaria, poor maternal preconception nutrition, adolescent pregnancies and child spacing of less than 24 months 15,16 . Anemia and gestational hypertension are the highest risk factors for preterm deliveries 15 .
PDA is one of the cardiac manifestations of congenital rubella syndrome reported in infants whose mothers suffered rubella infection during pregnancy 17 . Other abnormalities include VSD, peripheral pulmonary branch stenosis, ocular complications and central nervous system problems 18 . Atrial septal defects ranked third. As reported in other studies, there was a female preponderance at 56%, with 88% of the secundum type. ASDs tend to be well tolerated through infancy and childhood and are still diagnosed into adulthood. We postulate that very few children with neonatal coarctation were seen because they are not referred early to our center and could have been missed by the primary health care provider. Critical neonatal coarctation often presents as an emergency with a newborn in shock and is fatal without immediate intervention [21][22][23] . Similarly, aortic coarctation among older children was rare. This may imply a low prevalence of the condition, or show that many of these patients go undetected because blood pressure measurements are not routinely carried out in children 22 . The few cases in our study presented after 5 years of age. The Nigerian Congenital Heart Disease registry also reported coarctation of the aorta as one of the rare CHDs 12 .
Tetralogy of Fallot remains the most common cyanotic heart defect, as has been reported elsewhere 2,24,25 . There is a relatively large population of unrepaired patients alive, which implies greater survival in less severe cases. This trend has improved with more patients accessing corrective surgery, which is now available at the Uganda Heart Institute. By 2014, 80% of the open-heart surgeries were performed by our local team and only 20% of patients, mainly those with complex congenital heart defects, were referred abroad 23 . Five percent of the patients had truncus arteriosus, which is higher than reported in other settings, where an overall prevalence of 2.4% is given 26,27 . This was reflected consistently in the number of cases detected on a yearly basis over the study period. Most cases were diagnosed early (before 6 months), owing to an early presentation with heart failure; no new cases were seen in children above 5 years. Truncus arteriosus is associated with a high prevalence of genetic disorders, and thirty-nine percent of the children who underwent genetic testing had truncus arteriosus. This strongly suggests a genetic etiology in our population. Cases of D-TGA were rare in the study. Transposition of the great arteries has been associated with a high mortality, as reported in some studies 9 . The advent of palliative atrial septostomy at the Uganda Heart Institute, acting as a bridge to surgery, offers hope to critically ill infants who may present with TGA with restrictive interatrial shunts.
Other complex defects were most prevalent in the first year of life and were not diagnosed after the first birthday. They have been associated with a high mortality; two thirds of children with complex heart defects such as hypoplastic left/right heart syndrome do not celebrate their first birthday 5 . Unfortunately, limited treatment options are available in the country for such children.
We noted that some CHDs were rare in our study population; a case in point is TAPVC, whose prevalence (3%) was comparable to that reported in the Nigerian Congenital Heart Registry 12 . Other rare findings included aortopulmonary window, Ebstein's anomaly and bicuspid aortic valve. Genetic studies, though limited, had a high likelihood of a positive result, indicating a need for routine genetic screening in children with congenital heart disease. Deletion 22q11.2, which is associated with immunodeficiency, hypocalcaemia and learning difficulties, was also diagnosed by our team based on clinical findings. Prior knowledge of a genetic syndrome improves surgical outcomes for patients, given that the surgical teams can plan for any complications related to such abnormalities. Doell and colleagues in Switzerland reported no overall difference in outcome between children with and without genetic syndromes who underwent open-heart surgery for CHD, although a genetic syndrome was an independent risk factor for reintubation and kidney injury 28 . Digital archiving enabled cases to be discussed with colleagues from other centers, and the opportunity to re-evaluate cases at follow-up improved diagnostic accuracy. Our major limitation was that this was a retrospective study at a single site, whose results may not be fully representative of the nation; however, two other sites have since been established in the northern and western parts of the country.
Conclusion
Congenital heart disease is common among children. VSD, PDA and ASD were the commonest acyanotic heart defects while Tetralogy of Fallot and Truncus arteriosus topped the cyanotic defects. Genetic studies are called for in our population to further understand this high prevalence of Truncus arteriosus. | 2020-09-10T10:25:14.815Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "48704252a56159f314078fc156094c924ea6eed6",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/ahs/article/download/197855/186601",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "382eafb060008a8da51467634ca615329c91e0a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219105772 | pes2o/s2orc | v3-fos-license | Cognitive Deficits in Myopathies
Myopathies represent a wide spectrum of heterogeneous diseases mainly characterized by the abnormal structure or functioning of skeletal muscle. The current paper provides a comprehensive overview of cognitive deficits observed in various myopathies, compiled by consulting the main literature databases (PubMed, Scopus and Google Scholar). This review focuses on the causal classification of myopathies and the concomitant cognitive deficits. In most studies, cognitive deficits were found on clinical observation, while lesions were also present on brain imaging. Most studies refer to hereditary myopathies, mainly Duchenne muscular dystrophy (DMD) and myotonic dystrophies (MDs); therefore, most of the overview focuses on these subtypes of myopathies. The most recent bibliographical sources have been preferred.
Introduction
Myopathies represent a wide spectrum of heterogeneous diseases mainly characterized by the abnormal structure or functioning of skeletal muscle [1]; they can be hereditary or acquired, and can also manifest during the course of endocrine, autoimmune or metabolic disorders. Usually they worsen with time, and while their symptoms are not specific, muscle weakness, movement restriction (which can affect different muscle groups depending on the myopathy form, and which can also be transient) and fatigue, among others, should raise the suspicion of a myopathy. The proximal muscles, namely those of the shoulder, pelvis and upper thigh, are usually affected earlier in the disease course than the distal muscles, giving rise to symptoms such as postural instability and the inability to raise the hands or stand up from a sitting position. Myopathies eventually lead to muscular atrophy, with several patients being confined to wheelchairs, but they are typically not considered fatal; muscular dystrophy, a hereditary myopathy, is, however, considered to be severe [2].
Cognitive deficit is a general umbrella-term used to describe impairment(s) in an individual's mental processes that underlie the acquisition of non-verbal and verbal information and knowledge, and drive how an individual interacts with the world. Cognitive function subdomains, including memory, attention, inhibition, problem solving and visual perception, have gained popularity in the field of neuropsychology. Importantly, though, cognition has been recently viewed through the lens of language processing, with fluency and narrative measures being widely-used in the field of neuropsychology.
Due to the diverse pathogenesis of myopathies and the involvement of several systems, cognitive deficits may also appear as a result of these diseases. Given the gap in the literature regarding cognitive deficits within the context of myopathies, we have attempted to gather and present the studies on this matter and provide much-needed insight for clinicians handling these patients, who are often in need of multidisciplinary care.
Myopathy Categories and Associated Cognitive Deficits
There are several forms of myopathies, from genetic/hereditary ones to endocrine-related, to name a few. The aim of this review is not to provide an overview of myopathies, but rather to present the available studies on the relationships between some forms of myopathies and cognitive functions. Therefore, we have organized this section based on a basic causal classification of myopathy types, and only present those for which studies exploring cognitive functions have been published, hoping not to overwhelm the reader with information not pertaining to the review's purpose.
Genetic Myopathies
Muscle cells need thousands of proteins in order to remain functional, so a great number of genes are implicated in their protein production. Hereditary or genetic myopathies arise as a result of genetic defects leading to either the absence or the alteration of proteins vital for muscular function, and they differ in their clinical presentation due to their different genetic backgrounds. Many genetic myopathies have been well described, but studies regarding concomitant cognitive deficits are very scarce; in fact, we only found relevant studies focusing on two further types of genetic myopathies, besides the well-known Duchenne muscular dystrophy and myotonic dystrophy.
Duchenne Muscular Dystrophy
Muscular dystrophies are usually described separately from other myopathies, although they can be included in this umbrella-term due to the muscle weakness and wasting that they entail. In general, their main difference from what is typically perceived as a myopathy is that, while myopathies are caused by genetic defects in the contractile apparatus of muscles, muscular dystrophies are diseases of the muscle membrane and structural elements [3]. They are hereditary diseases, with the causative genes more or less identified. Their age of onset is usually in childhood and adolescence, although some forms affect younger or older individuals [2].
Duchenne muscular dystrophy (DMD) follows the X-linked recessive inheritance pattern. The affected gene on the X chromosome encodes the protein dystrophin, which is crucial for muscular structure but is also present in other tissues and in the central nervous system (CNS), mainly in the hippocampus, the cerebellum and the neocortex [4]. Clinically, it is characterized by severe, progressive and irreversible loss of muscular function, presenting as predominantly proximal muscle weakness, eventual loss of ambulation, elevated serum creatine kinase (CK) levels, calf pseudohypertrophy and involvement of the cardiorespiratory system, leading to motor delays or regression [1,5]. Because its cognitive aspects have been extensively studied, we deemed it important to include DMD in our review, although the taxonomy of myopathies/muscular dystrophies is generally not very clear for this particular disease.
The neurodevelopmental sequelae of DMD have long been known, and their prevalence is currently thought to be higher than once perceived [6], although global cognitive deficits are not noted in every patient. To further elucidate this phenomenon, Wingeier et al. (2011) studied a cohort of 25 boys with genetically confirmed DMD. The subjects underwent a detailed neuropsychological assessment and scored very low in a wide array of tests, including arithmetic and verbal fluency. The full-scale intelligence quotient (IQ) was found to be one standard deviation (SD) lower than average, while verbal IQ received a stronger blow than non-verbal IQ [5]. This has been replicated in other studies that showed a delay in language milestone achievement in the mother tongue and poor narrative skills. Several other cognitive aspects also seem to be impaired in DMD, with studies showing deficits in working memory, attention and executive function [7][8][9][10][11][12][13][14][15]. For example, Kreis et al. (2011) reported deficits in verbal short-term memory and fluency, and in visuospatial long-term memory, accompanied by a drop in full-scale IQ [4]. The hypothesis that these impairments may arise from cerebellar dysfunction has also been stated, since this clinical picture in DMD is similar to that of patients with lesions in the cerebellum [14][15][16].
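For orientation, "one SD below average" on a conventionally scaled full-scale IQ test (mean 100, SD 15, as in Wechsler-type instruments; the specific scaling of the cited study is our assumption) corresponds to a score of about 85, roughly the 16th percentile. A small sketch of this standard psychometric conversion, not data from the cited study:

```python
from statistics import NormalDist

IQ_MEAN, IQ_SD = 100, 15  # conventional Wechsler-style scaling (assumed)

def iq_percentile(iq: float) -> float:
    """Percentile rank of an IQ score under the standard normal model."""
    return 100 * NormalDist(IQ_MEAN, IQ_SD).cdf(iq)

for iq in (70, 85, 100):
    z = (iq - IQ_MEAN) / IQ_SD
    print(f"IQ {iq}: z = {z:+.1f}, ~{iq_percentile(iq):.0f}th percentile")
# IQ 85 (one SD below the mean) falls near the 16th percentile
```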
However, as already mentioned, the non-progressive cognitive deficits that have been tied to DMD [17] do not produce a distinct and homogeneous phenotype across all DMD patients. No clear reason for this phenomenon has been pinpointed so far, but the pathogenesis seems to involve different dystrophin isoforms. For example, it has been shown that patients lacking the Dp140 isoform present important cognitive deficits [5]. Additionally, DMD has been linked to several neurobehavioral disorders also attributed to dysfunctional dystrophin isoforms. Banihani et al. (2015) conducted a retrospective cohort study of 59 boys with DMD. A full-scale IQ of <70 was reported in 27% of the patients, learning disability in 44% and intellectual disability in 19%; meanwhile, 32% carried a concomitant ADHD (attention-deficit/hyperactivity disorder) diagnosis, 15% had disorders of the autism spectrum and 27% presented anxiety. Of the children with learning disorders, 60% carried mutations affecting the Dp260 isoform or the 5′ UTR region of Dp140, with the respective percentage being 77% for autism spectrum disorders, 50% for intellectual disability and 94% for anxiety. Furthermore, patients with mutations affecting the middle and 3′ end of the gene were also reported to present higher rates of cognitive impairment and ADHD, while the researchers also associated severe intellectual disability and ADHD with mutations in exon 19, a finding that, per the authors, has been replicated by other researchers [18]. In a different study, ADHD in DMD patients was also linked to Dp140 mutations and to mutations predicted to influence all dystrophin subtypes [19]. Interestingly, in a study of cerebellar and neocortical metabolite disorders in DMD patients, the Dp140 deficit was not associated with brain metabolism, thereby suggesting that the noted cognitive deficits of lacking Dp140 are not mediated through the studied metabolites [4]. All in all, these results highlight the role the specific isoforms affected in DMD play in cognitive functioning, further hinting that detecting the mutated isoform could predict cognitive impairment and that early intervention concerning mental functions is highly needed.
The proposed mechanisms underlying the reported cognitive deficits within the context of DMD are varied. Some attribute them to the different affected dystrophin isoforms and others to the involvement of dystrophin in embryonic development; others still imply an interplay between genes and non-genetic factors; one thing is certain, no single mechanism has so far been identified. The involvement of the cerebellum has been postulated in some studies, given the similarities between the impairments noted in DMD, in particular those pertaining to speech and verbal functions, and those caused by cerebellar lesions. To further examine this theory, Kreis et al. (2011) performed a metabolic analysis of the cerebellum and the temporo-parietal region of 12 and 8 DMD patients respectively, compared to 15 controls. They showed consistent choline deficits in both areas under study, and significant disorders concerning glutamate and N-acetyl compounds in the temporo-parietal region. In their DMD cohort, total N-acetyl compounds in the temporo-parietal region were linked to verbal IQ and verbal short-term memory. On the other hand, choline and the putative general metabolic disorder were not found to be significantly associated with cognitive deficits, although the researchers mentioned that their reported choline deficit contrasted with earlier similar studies that had found an overabundance of choline in the cerebellum of DMD subjects [4].
As the aforementioned studies also suggest, boys with DMD are more susceptible to learning disability, and their verbal IQ seems to be more affected than nonverbal IQ. It has been reported that up to 40% of DMD patients had difficulty in reading and further demonstrated deficits in phonological awareness/processing, and in short-term verbal memory [18,19]. These academic difficulties seem to derive from learning disabilities such as dyslexia, and other cognitive deficits, such as working memory impairments [12]. Patients with DMD have been shown for instance to have learning difficulties qualitatively akin to subjects with developmental dyslexia (specific reading and writing difficulties, reduced automatized naming speed), although slightly less severe, and have also been shown to have difficulty with phonological processing similarly to a subgroup of individuals with dyslexia [20].
Concerning ADHD, it seems to be the neurobehavioral disorder most commonly associated with DMD [18,19]. More specifically, Banihani et al. (2015) reported ADHD in 32% of their 59 DMD patients [18], while Pane et al. (2012) reported the exact same percentage for their 103 DMD patients, which is almost four-fold the percentage for average school-aged children (8-10%). They also reported that deficits in attention, either combined with hyperactivity or not, were more frequent than hyperactivity alone [19]. Battini et al. (2018) have also recently studied 40 DMD boys without intellectual disability, and reported that several cognitive functions were affected, especially those that tapped on to multi-tasking, problem solving, inhibition and working memory, all of them being crucial subcomponents underlying goal-oriented behavior [21]. These deficits could explain the higher prevalence of ADHD in DMD, and to some extent, the higher rates of learning disorders as well.
Moving to autism spectrum disorders (ASD), it has been reported that their incidence in DMD ranges from 4% to 37% [22][23][24][25]. For instance, Banihani et al. (2015) reported a concomitant ASD diagnosis in 15% of their DMD patients [18], and Wu et al. (2005) had earlier expressed the notion that the co-occurrence of DMD and ASD in not coincidental [23]. These percentages are much higher than the latest estimates from CDC, where only 1 in 68 general-population children is thought to suffer from ASD [26,27]. In a similar vein, although most DMD boys seem to cope fairly well with their medical condition, emotional problems, such as anxiety, are also twice as likely to manifest in DMD patients [18].
Myotonic Dystrophies
Myotonic dystrophies (DMs) follow the autosomal dominant mode of inheritance and affect several organs. Clinically, two subtypes are recognized: DM type 1 (Steinert's disease), caused by a trinucleotide repeat expansion in the DMPK gene, and DM type 2, caused by a tetranucleotide repeat expansion in the ZNF9/CNBP gene [28]. Concerning their manifestation, both subtypes include myotonia (i.e., the prolonged muscle contraction that cannot be relaxed upon movement cessation), hence their name; muscular dystrophy; arrhythmias/cardiac conduction disorders; and cataracts. Additionally, involvement of the endocrine, gastrointestinal, respiratory and central nervous systems may also be present [29]. However similar in terms of symptoms, the clinical phenotypes of the two subtypes are distinct; in particular, DM1 is characterized mainly by facial and distal-predominant limb weakness, grip myotonia and no fluctuation, whereas DM2 by progressive proximal and distal limb weakness, and variable mild grip myotonia [28,30,31]. Finally, congenital myotonic dystrophy (CDM) usually manifests in the first month of life [32] and is considered to be the most severe form of DM1 [33].
As already mentioned, myotonic dystrophies 1 and 2 present distinct phenotypic differences, and these differences extend to the patients' neuropsychological profiles as well. For this reason, we will explore the cognitive deficits noted in each of these subtypes separately.
Some studies on children with DM1 have associated the degree of cognitive impairment (lower IQ scores) with the number of trinucleotide repeats and with maternal inheritance, which also affect the age of disease onset, although no such association was revealed for neuromuscular involvement or overall disease severity [34][35][36]. In a recently published retrospective study, 74 DM1 patients, 52 of them affected by CDM, were evaluated. Seventy-four percent of the cases had maternal inheritance, with the number of trinucleotide repeats spanning from 143 to 2300. More than half of the patients presented some degree of cognitive delay, with a higher percentage noted in those with the congenital form, while the vast majority of the patients had some sort of cognitive, developmental or behavioral disorder. The researchers also mentioned that speech/language delay was often observed. Formal IQ testing was only available for a subgroup of the patients and showed that most scored below average. Finally, ADHD and mood disorders also presented higher-than-average rates in the cohort, with particularly high rates in patients with infantile DM1 [32]. Woo et al. (2019) have also recently compared adult-onset and juvenile-onset DM1 in a cohort of 19 DM1 patients who underwent numerous neuropsychological tests. Verbal intelligence and verbal memory were significantly impaired in the juvenile group, while both groups performed equally well in performance intelligence and executive function tasks [37]. The deficits in verbal functions, which were more prominent in the juvenile cohort, might be indicative of a neurodevelopmental disorder in the earlier-onset subtypes.
Congenital DM1 is considered to be the most severe early form of DM1 and is often accompanied by cerebral atrophy and ventricular enlargement since birth. Besides the developmental milestones presenting considerable delay, all patients suffer from mental retardation with global learning difficulties [28]. In childhood-onset DM1, sometimes children first present cognitive symptoms, expressed as learning difficulties hinting towards mental retardation, before showing signs of muscular involvement [30]. In these patients, IQ scores were comparable to those of the general population and their learning problems seemed to stem from executive function deficits, alongside impairments in other cognitive domains, such as visual perception, memory (specifically in visuospatial recall and verbal memory) and constructional ability. Additionally, they tended to present signs of psychopathological disorders, such as ADHD and anxiety [34,[38][39][40][41]. As expected, without a family history of MD, a diagnosis is difficult to be made, and the symptomatology of the children is usually only tied to MD after one of the parents is diagnosed with adult-onset DM1 [30]. Here, we would like to mention the entity of late-onset oligosymptomatic DM1, which is also characterized by mild symptomatology in earlier generations, with a worsened disease course in the generations to follow (especially, the third generation) [30]. These findings suggest that the age of onset of the disease is gradually set at an earlier time point as generations progress, and could be the outcome of more trinucleotide repetitions being gradually added; for example, congenital DM1 has been associated with extremely large repeat numbers [42].
In adult-onset DM1, structural and functional brain abnormalities have been noted. The most typical neuropsychological symptom seems to be reduced perception skills, and hence an avoidance of the disease's other signs and symptoms. This can be accompanied by obsessive, compulsive, schizotypal, passive-aggressive and other emotional-disorder traits [34]. Depressive symptoms, pertaining to both childhood and adulthood DM1, are mostly the sequelae of the disease diagnosis and its psychological impact, as it often leads to a life of low quality [43], while daytime sleepiness is usually the consequence of physical disability and, at times, obstructive apnea. Structure-wise, brain MRI scans of DM1 patients showed diffuse white-matter alterations that were more prominent than atrophy [30,44].
The involvement of the CNS constitutes one of the main differences between DM1 and DM2. Individuals with DM2 may also present cognitive deficits, but these are milder than in DM1, and are considered as unusual occurrences [5]. Meola et al. (2003) conducted a positron emission tomography (PET) study on 21 DM1 and 19 DM2 patients that underwent cognitive assessment. They have noted cognitive deficits pertaining to frontal lobe dysfunction (planning and conceptual reasoning), with one test having significantly worse scores only for the DM1 cohort. They also reported reduced cerebral blood flow in the frontal, parietal and temporal lobes, which was linked to cognitive impairment [45]. The same group, in an earlier study with 20 DM1 and 20 DM2 patients, reported that two thirds of DM2 patients, compared to half of DM1 patients, presented visuospatial recall impairments, while a smaller percentage of DM2 patients had deficits in visuospatial construction. MRI scans were either normal or with non-specific white matter lesions in both cohorts, but the PET scans revealed a more diffuse hypoperfusion of frontal regions in DM1 patients [46], a finding that is in accordance with the general acknowledgement that cognitive deficits best characterize DM1 rather than DM2.
Other Genetic Myopathies
Inclusion body myopathy with early-onset Paget disease and fronto-temporal dementia (FTD) is a rare hereditary disease caused by mutations, mostly in the valosin-containing protein (VCP) gene, with an autosomal dominant pattern of inheritance. As the name suggests, it is characterized by myopathy, first occurring in the proximal muscles and progressing to the extremities and other muscles. Half of the patients develop Paget disease of the bone, with skeletal pains, and one third develop the signs and symptoms of FTD, such as dysnomia, personality changes and attention deficits [47]. It is important to note that the disease has been reported in patients with no family history but with novel VCP mutations [48], so it should be taken into consideration in the differential diagnosis of patients with myopathy who present signs of FTD or Paget bone disease.
Muscle tissues require large amounts of energy to function, so mitochondrial function is of paramount importance. Mitochondrial myopathies arise due to deficient oxidative phosphorylation in the mitochondria, which leads to a deficit in ATP and impaired skeletal muscle function. They are the results of mutations in genes implicated in mitochondrial function, and due to the solely maternal inheritance of mitochondrial DNA, this pattern of inheritance should be taken into consideration when investigating such cases. The symptoms vary depending on the different diseases and tend to affect multiple systems besides muscles [49]. One particularly interesting condition in terms of its cognitive phenotype is the mitochondrial encephalopathy, lactic acidosis and stroke-like episodes (MELAS) syndrome. MELAS usually occurs before the age of 40, but overall presents heterogeneity at its early stages. It is caused by specific mitochondrial DNA mutations and muscle biopsies show ragged red fibers [50]. Kraya et al. (2019) studied 10 patients with MELAS syndrome and found that they had lower scores than controls in the entirety of the neuropsychological tests. Specifically, significant differences were reported for tests assessing visual construction ability, visual and divided attention, and verbal fluency [51]. The cardinal symptoms of the syndrome, mainly the stroke-like episodes and the encephalopathy, can also manifest in altered mental status, and lead to gradual accumulation of a plethora of deficits in neurological functions. Eventually, 40% to 90% of the patients develop dementia, mainly as a result of the cortical lesions caused by the stroke-like episodes. Executive functions are also impaired, although neuroimaging studies have shown that the frontal lobe is not particularly affected, and that a generalized, possibly neurodegenerative process underlies cognitive dysfunction. Finally, the syndrome has also been reported to be accompanied by psychiatric disorders, such as depression, psychosis, anxiety and bipolar disorder [52].
A concise presentation of the main cognitive deficits noted in genetic myopathies can be found in Table 1. [Table 1 is only partially recoverable from the source; its legible fragments list: language milestone achievement delay; working, verbal short-term and visuospatial long-term memory impairment; high rates of ADHD and ASD (apparently for DMD); and visual construction, visual and divided attention, and verbal fluency impairments (apparently for MELAS).]
Endocrine-Related Myopathies
It is widely known that endocrine malfunctions heavily impact muscle functions, with several endocrinopathies producing muscular symptoms; hypercortisolism and thyroid disorders are the entities most commonly associated with myopathy.
Steroid myopathy is the result of hypercortisolism, either via long-term exogenous corticosteroid administration or in Cushing's syndrome, with glucocorticoid-induced myopathy being the commonest drug-induced myopathy [53]. Muscle weakness and atrophy mainly affect the proximal muscles and are accompanied by other typical hypercortisolism symptoms, such as body trunk obesity, virilization and cushingoid ("moon") facies [54].
Hyperthyroid myopathy accompanies an over-function of the thyroid gland and leads to generalized muscle weakness with quickly setting fatigue, occasional mild muscle atrophy in proximal muscle zones and involvement of the ocular muscles [55]. Hypothyroid myopathy manifests in the setting of hypothyroidism and entails muscle rigidity, cramps and general weakness, pronounced in the extremities [2,56]. Both have also been tied to rhabdomyolysis, especially hypothyroidism, with elevated serum CK enzymes [55], while a similar entity has been described in aggressively-treated hyperthyroidism [57].
Due to the multifaceted nature of these diseases, it is hard to support the notion that a reported cognitive deficit is linked to the myopathy, and not to the overall endocrine disorder itself. These disorders almost always affect the CNS to a larger or smaller extent, with some conditions having neurological symptoms as cardinal symptoms.
In greater detail, a 2015 study on a pediatric population with thyroid disorders showed that impaired overall cognitive function may be tied to hypothyroid myopathy, with deficits in attention, memory, arithmetic and verbal skills being mainly reported [57]. Mental retardation has also long been associated with severe hypothyroidism and is one of the main reasons why neonatal screening is standard clinical practice, while the differential diagnosis of acquired cognitive deficits in older individuals typically includes thyroid function tests [58]. Similarly, individuals with hyperthyroidism and its associated myopathy may also present with altered mental status, emotional instability and confusion [57]. These specific symptoms are classically attributed to increased thyroid function.
Finally, hypercortisolism has also long been associated with a variety of neuropsychological disorders; emotional instability, cognitive impairment, depression and anxiety symptoms are frequently encountered. An excess in cortisol has been shown to induce structural changes in CNS regions such as the hippocampi, which can explain the cognitive symptoms associated with Cushing syndrome [59]. Since the muscular symptoms are only one aspect of the wide symptomatology of hypercortisolism, an attempt to examine the effect of this myopathy on cognition has not yet been made.
Conclusions
Cognitive deficits in myopathies do not seem to be that rare an occurrence. Some muscular disorders present impairments in well-described cognitive functions, and others are also frequently tied to neuropsychological disorders, such as ADHD and anxiety. In certain diseases, especially the genetic ones, phenotypical variability often derives from different isoforms being involved, or from the presence of certain proteins in both the muscles and the CNS. In the majority of other myopathies, such as those tied to endocrine disorders, it is difficult to pinpoint the effect of the myopathy on cognition or the association between the two, given that most of the time this impairment is considered a standing part of the disorder's symptom constellation and the direct result of the endocrine dysregulation. Regardless, studies on the matter are still lacking. This is understandable to a degree, given that the diseases at hand are not frequent, and recruiting patient cohorts large enough to yield powerful results is hard. Additionally, the wide array of neurobehavioral assessment tools does not always help in classifying the results, as studies may use diverse IQ tests and neuropsychological batteries to assess cognitive impairment. However, given the impact that cognitive impairment has on the quality of life of a person already burdened by the muscular involvement itself, it is crucial that more research be conducted in the future, so that timely intervention may help preserve the individuals' cognitive functions.

Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-05-31T13:05:14.640Z | 2020-05-27T00:00:00.000 | {
"year": 2020,
"sha1": "3b1404a633acd3a9c1a8d1e650bb37e3b54665fe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/11/3795/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee313380be3a6d768b87477eeb98f35acd4dfcdd",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
134990031 | pes2o/s2orc | v3-fos-license | Geometrical analysis of Palesch family chapel in Kľačno, former Gaidel in Western Slovakia
Oval and elliptic spaces are one group of central plans, used mainly in sacral architecture and palaces. This paper deals with the geometrical analysis of one of the smaller representatives of neoclassical sacral architecture with an oval plan: the Palesch family chapel of the Virgin Mary in the village of Kľačno in Western Slovakia. Oval and elliptic forms are not often used in Slovak historical architecture, and they are almost always connected with foreign influence and knowledge brought from Vienna, Paris, Pest, Eger and other places where the builder or architect was educated or practised. This uncommon oval form used in a small chapel is therefore certainly of interest from the point of view of architecture and geometry.
Introduction
Oval and elliptic spaces in architecture started to appear in the 15th-16th centuries, and the evolution of forms continued with the refinement of geometric constructions and the broadening of mathematical knowledge. Baroque central sacral and hall spaces were the most inventive demonstration of this progress. Oval, circular and elliptic central spaces continued as part of the architectural vocabulary in Slovakia also in the following periods of neoclassicism and historicist styles, but only a few examples of fully developed large spaces with curvilinear forms can be found here, and it is therefore a surprise that an oval was chosen for a small chapel. The fine form and well-thought-out proportions are almost certainly connected with the person of the highly educated Georg Palesch. Further research in the archive of the Spiš Diocese in Spišská Kapitula may confirm the hypotheses about the construction and perhaps even bring original drawings and the name of the author, who is unknown at present.
Village Kľačno, its history and architecture of the chapel
The village of Kľačno, with a population of 1100 inhabitants, is situated in the Prievidza district, Trenčín region, in western Slovakia, surrounded by the Strážovské mountains; most of the dwellings lie in the narrow valley of the Nitra river. The village was founded in the 13th century (the first mention dates to 1413, as Gaydel). It was mostly occupied by German catholic inhabitants; only after WWII did Slovaks become a majority. The name Gaydel (Gaÿdiell ad Baÿmocz [1], Gaydelehota) remained in use, together with the Hungarian name Nyitrafő (1913), up to 1948, when the village received the name Kľačno. The village is rich in historical buildings compared to the size of the settlement - a Roman catholic church with preserved renaissance constructions (built 1464) and two chapels (baroque from 1777 and neoclassical from 1824). The patron of the analysed monument - the Chapel of the Virgin Mary - was Juraj Palesch - Georg Palesch [2], born 1753 in Gaidel (Kľačno), died 1833 in Zipser Kapitel (Spišská Kapitula) [3,4]. He was a Roman catholic priest, teacher and pious writer, known as a deeply educated and enlightened personality who founded schools and foundations for the promotion of education [5]. Palesch also founded a school in Kľačno (built 1829) and decided to build here a chapel of the Virgin Mary, dedicated to the memory of his deceased parents (both born and lived in Kľačno). Around the time of the construction of the Palesch family chapel, in the year 1824 according to [6], the village had 254 houses with 1774 inhabitants. The chapel was built in the neoclassical style in 1824. It is oval in plan, a rather small building decorated with pilasters and a portico with a wooden bell tower on the exterior, and with a disproportionately large built-in organ gallery in the interior (added in 1827 according to the inscribed cartouche "ANNO 1827 JT" on the organ encasement). The building has not been profoundly altered to date, so the geometry analysis could be conducted on the original proportions (original masonry shape and volume). Roofing replacement and minor renovations were realized in 1905 and 1927. A renovation together with waterproofing and humidity control works was realized in 1995 (project of Ing. M. Pichová). [7]
Geometrical analysis of chapel
Module consideration and measurement units
The compact mass of the chapel clearly shows well-thought-out proportion ratios indicating a primary geometric construction. The very first step in the geometric analysis was a search for modularity in the plan of the structure. A square raster was used with various modules. The most striking was the coordination of the most important structural points of the chapel with a module of 600 by 600 mm (fig. 2). Intersections coincide with the regulating main points of the circle sections describing the curvilinear complex plan. The rectangle circumscribing the main oval/elliptic space is constructed in the ratio 6:9 (approx. 3600 by 5400 mm). Other main axes and lines also lie very close to, or on, the modular net - e.g. the centres of the pillars on the main facade, the centres of the concave chamfering between the main mass and the portico entrance, etc. Modularity close to 300 or 600 mm would be logical, because of the most probably used measurement units of the Austrian Empire - the Vienna fathom (Klafter, 6 ft = 1896 mm) and its fraction, the foot (Fuß, 1896/6 = 316 mm). Other systems of units that could have been used are the Roman cubit (römische Elle), the Roman foot (pes), the Vienna cubit (Wiener Elle) or the German elbow (Elle). Any system used would be an approximation of human proportions. Further analysis of historical measurement units depends on whether the original plans can be found in the future. Moreover, the overall mass of the chapel structure is so small, compared to the possible observational error and the deviations caused by the building process, that it is impossible to reliably prove a difference between the possible historical measurement units.
Hypothesis A -Ellipse was used in construction drawing and on site
Elliptic constructions are rather common in technical praxis, mainly because of the ellipse's interesting property - the sum of the distances from any point on the ellipse to each of its focuses is constant. But it is not so easy to draw. Various drawing aids were invented to help the process of drawing elliptic forms in technical design. One of the oldest so-called "ellipsographs" originated from the idea of the ancient Greek trammel of Archimedes (maybe even from Proclus' time). Later on, this idea was transformed into many simpler or more sophisticated variations. One of them, based on the ellipsograph of Guidobaldo del Monte, is described in [1]. The description only deals with the mechanism, and the author took it as already clear that the final drawing is really an ellipse in the geometrical sense. The proof is in fact already implied by the construction of the mechanism. The ellipsograph consists of two long flat boards, perpendicular to each other, with length not shorter than the length of the minor axis of the ellipse. The third part is a movable flat board with two sharp points - two screws, marked as K, L (see fig. 3) - that can be moved on it and fixed at the distances a and a-b (a is equal to the major axis and b is equal to the minor axis of the ellipse) from the drawing end M, which inscribes the ellipse. It is clear that the distance |LM| = b. Points K, L are fixed on the third board, but they have to be fluently movable along the two basic perpendicular boards, so K is movable along the board with points C, D and L along the other. From the rectangular triangle ∆KRM and the lengths of its sides we obtain Equations (1) and (2), which show that x²/a² + y²/b² = 1, so point M indeed lies on an ellipse whose centre is at point S and whose major and minor axes have lengths a and b, because x and y are the coordinates of M.
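To make the trammel principle concrete, the following minimal numerical sketch (ours, not from [1]) slides pivot K on the vertical guide and pivot L on the horizontal guide and checks that the drawing point M traces an exact ellipse; the semi-axis values are illustrative assumptions.

```python
import numpy as np

# Trammel-of-Archimedes check: pivot K slides on the vertical guide, pivot L
# on the horizontal guide, and the drawing point M is fixed on the same rod
# with |MK| = a and |ML| = b (so the two screws sit a and a-b from M).
a, b = 2700.0, 1800.0            # assumed semi-axes in mm, illustrative only

for t in np.linspace(0.0, 2.0 * np.pi, 360):
    u = np.array([np.cos(t), np.sin(t)])      # unit direction of the rod
    K = np.array([0.0, (b - a) * np.sin(t)])  # pivot on the vertical board
    M = K + a * u                             # drawing end of the rod
    L = M - b * u                             # second pivot

    assert abs(L[1]) < 1e-9                   # L stays on the horizontal board
    # M satisfies x^2/a^2 + y^2/b^2 = 1 for every rod angle:
    assert abs(M[0]**2 / a**2 + M[1]**2 / b**2 - 1.0) < 1e-9
print("M traces an exact ellipse with semi-axes a and b")
```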
Hypothesis B -Mandorla and pole-rope method on site.
Mandorla (also called Vesica Piscis) is a geometrical shape, often with a symbolical meaning, used in medieval European art and architecture but with an ancient origin. It is simply constructed as the intersection of two identical circles, each with its centre lying on the other circle (the distance between the centres equals the radius). The intersection is an almond shape with the symbolical meaning of the origin of life. A combination of two mandorlas with their axes oriented perpendicularly is called the "Eye of the God". The mandorla is a simple construction, easily drawn using poles and a rope, and therefore practical for setting out on the building site.
Fig.4. Oval constructions by Serlio. Source [3] redrawn by Grúňová
All of the points of the mandorla-based geometrical construction are derived from two points of origin, A and B, at a distance of approximately 3600 mm, which create the hypotenuse of the first, smaller mandorla. All derived points can be drawn using circles or their sections and divisions of segments into 2 or 3 parts. The construction of the "God's eye" is modified, more "rounded" - the bigger mandorla has the centres of its circles at points at a distance of 1/6 from the endpoints of the smaller mandorla's hypotenuse. The length of the basic segment AB, close to 3600 mm, also means the transfer of a hypothetical module close to 600 mm to all derived distances. The same mandorlas can be attributed to the positions of important points of the exterior architecture of the Chapel.
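As a hedged illustration of how easily the basic figure is set out, the sketch below computes the intersection points of two equal circles whose centres are one radius apart (the classic vesica piscis); the 3600 mm segment follows the text, while the coordinate frame is our assumption.

```python
import numpy as np

# Vesica piscis (mandorla): two equal circles, each centred on the other's
# circumference, so |AB| = r. The two intersection points lie on the
# perpendicular bisector of AB.
r = 3600.0                        # circle radius = distance |AB|, in mm
A = np.array([0.0, 0.0])
B = np.array([r, 0.0])

x = r / 2.0                                   # midpoint of AB
y = np.sqrt(r**2 - (r / 2.0)**2)              # height of the intersection
P_top, P_bot = np.array([x, y]), np.array([x, -y])

# Both points are exactly one radius from each centre (on both circles).
for P in (P_top, P_bot):
    assert np.isclose(np.linalg.norm(P - A), r)
    assert np.isclose(np.linalg.norm(P - B), r)
print(f"mandorla height = {2 * y:.1f} mm")    # equals r * sqrt(3)
```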
Hypothesis C -Oval based construction and pole-rope method on site.
Commonly used approximations of the ellipse are various oval shapes consisting of combinations of arcs - a basic oval and parallel smaller and bigger oval arcs are easier to construct. The most famous geometrical constructions of the oval (used also in the baroque period and later) are Serlio's two-centre ovals (fig. 5).
Fig.5. Oval constructions by Serlio. Source [9] redrawn by Grúňová
All these constructions retain the length of the major axis of the given ellipse. The ellipse is approximated by these constructs, and for constructions (b) to (d) the length of the major axis determines the length of the minor axis. If we denote the height of the triangle ∆UHK as k and half of the side of the triangle ∆UHK as h (see Figure 6), we can express these lengths for each type of Serlio's constructions. We can also express the length of the minor axis as b = a + √(h² + k²) − (h + k).
In the construction in Figure 5a), we can proceed in two ways. The first is determining the length of the minor axis b, thus finding the length of the triangle side 2h. The second is determining the length of the side 2h of the triangle ∆UHK and calculating the length of the minor axis as b = a + h(1 − √3).
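The sketch below checks numerically that the closed form quoted for the construction in Figure 5a) is the equilateral-triangle special case of the general relation above; the dimensions used are illustrative assumptions, not measured chapel values.

```python
import math

# General relation from the text: b = a + sqrt(h^2 + k^2) - (h + k), with k
# the height of triangle UHK and h half of its side.
def serlio_minor_axis(a, h, k):
    return a + math.sqrt(h**2 + k**2) - (h + k)

a = 5400.0                       # major axis from the modular analysis (mm)
h = 900.0                        # assumed half-side of the triangle
k = h * math.sqrt(3.0)           # equilateral triangle of side 2h: height h*sqrt(3)

b_general = serlio_minor_axis(a, h, k)
b_closed = a + h * (1.0 - math.sqrt(3.0))   # closed form quoted for Fig. 5a)
assert math.isclose(b_general, b_closed)
print(f"minor axis b = {b_general:.1f} mm")
```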
Conclusions
Hypothesis A Ellipse - the construction is simple, but not easily applicable on a building site; it may work on a small scale, but on a larger scale it is not applicable at all. If a similar drawing method had been used on the building site of the Palesch family Chapel in Kľačno, the movable board with drawing point M would have had to be almost 6 m long.
Hypothesis B Mandorla based construction - easily executable on site, but a highly complicated construction; it is therefore much less probable that it would be the first choice for the small-scale design of the Chapel (and research so far has found no visible mention of the symbolism, either in the form of an inscription on the Chapel itself or in the chronicle).
Hypothesis C Oval based construction - the comparison in fig. 7 clearly shows that three of Serlio's methods result in ovals too narrow to be applicable to the chapel's plan. The first construction is much more flexible, so it is possible to use reverse engineering to find the most suitable pair of equilateral triangles.
Three main possible construction methods were chosen, and these were divided into methods of construction "on paper" and on the building site. The chapel could have been designed using a "true" ellipse construction, a mandorla-based, heavily symbolic construction, or an oval-based construction method. Comparison of hypotheses A, B and C leads to a logical conclusion: two of the methods, A and B, are too difficult, rather complex, therefore not easily repeatable, and impractical to execute on the building site. Serlio's construction methods were certainly known to the builder and designer of the chapel as common practice. They are easily executed, and the oval of fig. 5a) and fig. 7a) is the one most probably used.
Further analysis of the chapel in Kľačno will concentrate on finding more connections of Georg Palesch to possible architects - probably foreign - or to builders in the region, with the aim of finding, if possible, the original documentation. The second aim will be to make as broad a comparison as possible with similar historical buildings in Slovakia, with the goal of identifying construction methods known in the first half of the 19th century. | 2019-04-27T13:05:48.892Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "3408d26b7336e1294f28627c16bac965657553f4",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/31/matecconf_rsp2017_00060.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c3af100236efa8249db17763e2ccc328e86605a3",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Geography"
]
} |
7617727 | pes2o/s2orc | v3-fos-license | Memory load modulates graded changes in distracter filtering
Our ability to maintain small amounts of information in mind is critical for successful performance on a wide range of tasks. However, it remains unclear exactly how this maintenance is achieved. One possibility is that it is brought about using mechanisms that overlap with those used for attentional control. That is, the same mechanisms that we use to regulate and optimize our sensory processing may be recruited when we maintain information in visual short-term memory (VSTM). We aimed to test this hypothesis by exploring how distracter filtering is modified by concurrent VSTM load. We presented participants with sequences of target items, the order and location of which had to be maintained in VSTM. We also presented distracter items alongside the targets, and these distracters were graded such that they could be either very similar or dissimilar to the targets. We analyzed scalp potentials using a novel multiple regression approach, which enabled us to explore the neural mechanisms by which the participants accommodated these variable distracters on a trial-to-trial basis. Critically, the effect of distracter filtering interacted with VSTM load; the same graded changes in perceptual similarity exerted effects of a different magnitude depending upon how many items participants were already maintaining in VSTM. These data provide compelling evidence that maintaining information in VSTM recruits an overlapping set of attentional control mechanisms that are otherwise used for distracter filtering.
INTRODUCTION
Our ability to select relevant information from our sensory input for further processing is critical for optimizing our performance on many cognitive tasks. This selection can be challenging if the information relevant for the task at hand shares sensory features with task-irrelevant information. Specifically, the challenge emerges because these two sets of input compete for representation (Desimone and Duncan, 1995). Hence, this selection requires attentional control mechanisms to enhance task-relevant inputs and/or suppress task-irrelevant inputs, thereby biasing this competition in favor of relevant sensory material (Corbetta et al., 2000).
Given that our visual short-term memory (VSTM) capacity is highly limited (Cowan, 2001;Todd and Marois, 2004), encoding and maintaining only relevant information is critical for efficient performance. Indeed, the significance of filtering mechanisms for optimum VSTM performance has been well established. A number of studies have demonstrated that attentional filtering acts as a gateway mechanism for VSTM by reducing the memory load one needs to maintain and, importantly, that successful filtering is predictive of VSTM capacity (Vogel et al., 2005;McNab and Klingberg, 2008;Gazzaley, 2011). However, the reverse relationship is less well known, i.e., whether VSTM load can modulate attentional filtering during encoding and maintenance. Recent evidence suggests a more intimate relationship between attentional filtering and VSTM than has been previously shown, by documenting shared mechanisms between competition biasing and VSTM maintenance (e.g., Shimi and Astle, 2013). The behavioral evidence so far suggests that the number of items being held in memory will influence subjects' ability to mitigate the impact of distracting stimuli. In our previous study we demonstrated that targets and distracters needed to be more perceptually distinct in order for subjects to reach asymptotic performance when VSTM was full. This behavioral result mirrors some other recent results, which show that distracter processing is attenuated when subjects are maintaining a large number of items (e.g., Rissman et al., 2009). Other studies have explored the impact of distracting stimuli on processing using a different behavioral design. For example, presenting flanking to-be-ignored stimuli alongside targets, stimuli that can be either congruent or incongruent with the target, is a good way of exploring the impact of this irrelevant information on target processing. When working memory is taxed, subjects are less able to mitigate the interference from the incongruent distracters (Pratt et al., 2011). These findings are all consistent with the view that maintaining information for brief periods of time has a detrimental impact upon subjects' ability to select targets and ignore distracters.
Here, we examined the modulatory effects of VSTM load on the attentional filtering of perceptually competing items. Distracter filtering is an ideal way of manipulating attentional control, with participants having to enhance the processing of relevant targets and supress the processing of to-be-ignored distracters. The process of filtering distracters requires participants to attenuate the activity corresponding to the representation of the distracter, and/or to enhance the activity corresponding to the representation of the target. Experiments in this area usually include trials in which there are no distracters, and average performance (and/or neural activity) is then compared with that resulting from distracter-present trials. The difference between these two trial types is then attributed to the attentional control mechanisms recruited to deal with the distracters. This difference could be in terms of neural activity, or in terms of a relative behavioral cost, such as reduced accuracy (e.g., Vogel et al., 2005). However, in reality attentional control will likely vary from moment to moment, with the control applied fluctuating in response to changing task goals or levels of potentially distracting input. Here, we explore these graded changes in attentional control. Rather than comparing distracter-present and distracter-absent trials, each trial in our task contained a number of distracters which could be variably target-like. On some trials, the distracters were perceptually very distinct from the targets, meaning that they provided little competition for representation; on other trials, distracters were perceptually more similar to the targets, thus requiring greater attentional control to bias the processing of the targets. In short, the target-distracter similarity was varied on a continuum between these two extremes, enabling us to explore the graded changes in the application of attentional control. Therefore, the analytic approach we employed in our study differs from that used in previous studies. Instead of looking at differences (either neural or behavioral) across trials with and without distracters, our strategy focuses on the graded continuous effect of distracter similarity that occurs across trials.
Research has demonstrated that attentional control mechanisms are highly flexible (e.g., Yantis and Johnston, 1990;Lavie and Tsal, 1994;Lavie, 1995) and can act not only upon sensory representations but also at later cognitive stages, upon items already stored in VSTM (e.g., Griffin and Nobre, 2003;Nobre et al., 2007;Astle et al., 2009;Gazzaley and Nobre, 2012). Whilst it is clear, from these studies and from those demonstrating that distracter filtering constrains VSTM, that the spatial attention and VSTM systems can interact (e.g., Griffin and Nobre, 2003;Astle et al., 2012), the extent to which the basic functions of these two systems trade off against one another remains to be explored. In this study, we aimed to examine the extent to which the very process of maintaining information in VSTM would recruit those mechanisms typically used for attentional control. In our paradigm, in addition to manipulating the similarity of targets and distracters, we also varied the number of targets. This enabled us to examine memory load and attentional control effects in a single paradigm, and to track graded changes in attentional control both in terms of behavioral performance and its underlying neural processes. In particular, we sought to test the extent to which these two variables would interact, i.e., whether participants' ability to filter distracters would change depending upon the number of items they were already holding in VSTM.
PARTICIPANTS
Fifteen healthy right-handed adults (10 female, mean age 24 ± 4.78 years SD) with normal or corrected-to-normal vision participated in the study. One participant contributed behavioral data only, because of poor data quality in their EEG recording. The study was approved by the University of Cambridge Psychology Research Ethics Committee and participants provided written informed consent. Participants were recruited from the MRC Cognition and Brain Sciences Research Panel and received monetary compensation (at a rate of £10 per hour).
BEHAVIORAL TASK
Figure 1 illustrates the task. On each trial, participants viewed a sequence of three matrices, each containing a target disc in a particular color (see Section Stimuli below). Participants were instructed to remember the location and order of the target discs in all three matrices. At the end of each trial, participants viewed a final "probe" matrix with one location highlighted; they responded as to whether a target disc had occupied the highlighted location in the preceding sequence and, if so, in which matrix the probed location had been occupied. They responded by pressing keys 1-3 on the numeric keyboard, corresponding to the three matrices respectively, or key 4 if none of the previous targets had occupied the probed location. Participants were instructed to make non-speeded responses (reaction times, RTs, were not emphasized) and instead to attempt to maximize their accuracy.
Targets varied in number depending on the VSTM load condition: for load 3 there was one target disc in each matrix; for load 5 there were two targets in the first and second matrix followed by a single target in the third matrix. This ensured that the third matrix always contained a single target across both load conditions and therefore there was a common phase of the trial that was perceptually the same across the two levels of VSTM load (Shimi and Astle, 2013). In addition to the target disc/s, each matrix contained distracter discs. There were always three distracters per array, and these are described below.
STIMULI
We varied the perceptual similarity between the targets and the distracters parametrically, in order to vary the difficulty of selecting targets relative to distracters (Desimone and Duncan, 1995). Each disc (0.53° in diameter) was defined in RGB space: the targets were made of a red background (R:255, G:0, B:0) with a blue ring (R:0, G:0, B:255). For each distracter we then added green in 1% increments from 1 to 255, with the most dissimilar distracter comprising a yellow background (R:255, G:255, B:0) and a cyan ring (R:0, G:255, B:255). This was counterbalanced across participants: for half of the participants the target comprised the yellow background and cyan ring, with distracters having progressively less green. For each participant we had a target item and a set of 99 distracters, each of which was progressively more dissimilar to the target (the color of the target was consistent throughout the experiment for each participant; examples can be seen in Figure 1B). Each matrix comprised a 4 × 4 set of boxes, each spanning 3.08° × 3.08°.
FIGURE 1 | (A) A trial schematic showing the paradigm. Subjects are presented with three matrices of targets and distracters, which have to be remembered. Following this, a single location is probed and participants must indicate in which (if any) matrix this location was occupied by a target. The top trial sequence shows a VSTM load 3 trial, in which the correct response is "matrix 2". The lower trial sequence shows a VSTM load 5 trial, in which the correct response is "4", meaning "no matrix"; (B) Examples of the target and distracter stimuli.
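For concreteness, a small sketch (ours, not the authors' stimulus code) of how the graded palette described above can be generated from the RGB definitions; the exact rounding of the green increments is an assumption.

```python
# Generate the graded distracter palette: starting from the red/blue target,
# add green in 1% increments to both the background and the ring, ending near
# the fully dissimilar yellow/cyan disc. Rounding is our assumption.
def distracter(step):
    """step in 1..99; returns (background, ring) RGB triples."""
    g = int(round(255 * step / 100))   # green channel added at this step
    return (255, g, 0), (0, g, 255)

target = ((255, 0, 0), (0, 0, 255))    # red background, blue ring
palette = [distracter(s) for s in range(1, 100)]
most_similar, most_dissimilar = palette[0], palette[-1]
```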
EXPERIMENTAL DESIGN
Each matrix appeared for 300 ms and followed the previous one after 700 ms. Finally, after a randomly varied duration of 1100-1480 ms, the fourth (probe) matrix appeared and remained on the screen until subjects selected their response. Following their response they briefly received feedback (the empty matrix flashed red or green for incorrect or correct responses, respectively, for 250 ms); there was then an additional 500 ms gap (with just the presentation of the empty matrix) before the start of the next trial. Participants performed 600 trials in a fully randomized order: 300 for each level of VSTM load (Load 3 and Load 5), with an equal number of 40 different levels of distracter similarity (ranging from 10% to 50% dissimilar to the target). Our previous study had demonstrated that this was a sensitive range to choose (Shimi and Astle, 2013). Participants completed 12 test blocks of 50 trials each, interleaved with self-paced breaks. We imposed the additional constraint that no location could be occupied by either a target or a distracter twice on any trial, as this would introduce the situation in which subsequent items could mask or overwrite previous items. There were an equal number of trials upon which we probed a target from the first matrix (M1 trials), the second matrix (M2 trials), the third matrix (M3 trials), and trials upon which we probed a non-target location (which was always one of the distracter-occupied locations, evenly distributed across the three matrices). That is, 25% of trials were allocated to each of these four trial types. Each participant began the session with a practice block. During this block participants performed only the Load 3 condition.
EEG ACQUISITION
Electroencephalogram activity was recorded continuously using a BrainVision amplifier and actiCAP electrodes mounted on an elastic cap from 64 sites according to the 10-20 system. The montage included 6 midline scalp sites (Fz, Cz, CPz, Pz, POz, Oz) and 29 scalp sites over each hemisphere (FP1/FP2, AF3/AF4, AF7/AF8, F1/F2, F3/F4, F5/F6, F7/F8, FC1/FC2, FC3/FC4, FC5/FC6, FT7/FT8, FT9/FT10, C1/C2, C3/C4, C5/C6, T7/T8, CP1/CP2, CP3/CP4, CP5/CP6, TP7/TP8, TP9/TP10, P1/P2, P3/P4, P5/P6, P7/P8, PO3/PO4, PO7/PO8, PO9/PO10, O1/O2). AFz served as the ground. Blinks and eye movements were monitored with electrodes placed horizontally and vertically around the eyes. Electrode impedances were kept below 20 kΩ. We used a 250 Hz analog-to-digital sampling rate and recorded all frequencies between 0.1 and 124 Hz. The EEG was referenced online to the FCz electrode and then re-referenced off-line to the algebraic average of the left and the right mastoids. Bipolar electro-oculogram (EOG) signals were derived by computing the difference between recordings horizontal to each eye (HEOG) and between recordings vertical (VEOG) to the left eye. Participants were instructed not to move their eyes from central fixation or to blink, and any eye movements and blinks were removed using an independent component analysis (ICA): we applied a 1 Hz highpass filter and submitted the continuous EEG to a temporal ICA (using EEGLAB; Delorme and Makeig, 2004); we correlated the time-course of each IC with our bipolar EOG channels in order to identify the ICs that corresponded to blinks and eye-movements; any that correlated >0.1 were then removed from the continuous data prior to epoching (as in Shimi and Astle, 2013). In almost all cases this resulted in two to three components being removed.
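The ICA itself was run in EEGLAB; as a rough sketch of the rejection criterion described above, the selection step could look like the following (array names are our assumptions).

```python
import numpy as np

# Flag any independent component whose time course correlates with either
# bipolar EOG channel at |r| > 0.1, as described in the text.
# `ic_activations` is (n_components x n_samples) from the ICA decomposition;
# `heog` and `veog` are the bipolar EOG time series.
def flag_ocular_components(ic_activations, heog, veog, threshold=0.1):
    flagged = []
    for i, ic in enumerate(ic_activations):
        r_h = np.corrcoef(ic, heog)[0, 1]
        r_v = np.corrcoef(ic, veog)[0, 1]
        if max(abs(r_h), abs(r_v)) > threshold:
            flagged.append(i)
    return flagged  # component indices removed before epoching
```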
EVENT-RELATED INDUCED RESPONSE ANALYSIS
A main aim of this study was to measure the graded effect of target-distracter similarity on amplitudes and then compare it across the two levels of VSTM load. A standard ERP analysis, with its averaging procedure, is not capable of capturing these graded changes, therefore we used an Event-related Induced Response Analysis. We formed epochs starting 700 ms before and ending 1700 ms after the onset of the third matrix. We chose this period of the trial to form our epochs because it is perceptually equated across the two levels of VSTM load. That is, for both levels of VSTM load the third matrix contains only one target. This is important, because it means that any interaction we might observe between VSTM load and distracter-similarity represents a genuine effect of VSTM load rather than of presenting different numbers of items to be encoded. Phase and power estimates were extracted for these epochs using a continuous wavelet transform (Tallon-Baudry and Bertrand, 1999). This used six full cycles to establish the phase angles and power estimates, for frequencies between 2 and 30 Hz (in steps of 1 Hz). The power estimates were then submitted to the subsequent steps of our analysis. Because we were not interested in average evoked responses per se, but rather in the graded effect of target-distracter similarity, we analyzed the data using a general linear model (GLM), otherwise known as multiple regression. Within this model there were two continuous trial-wise regressors: the first was the target-distracter similarity measure (10-40% dissimilarity, inclusive) and the second was memory load (Load 3 vs. Load 5). This GLM was applied to each sample, at each electrode and for each participant. The result was a data set in which we established the linear effects of both regressors and their interaction, including their topographical distribution and time-course, within each participant. These were then fed into a group-level mixed-effects analysis, such that we identified significant effects of each regressor, or their interaction, at the population level inferred over all 14 participants.
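In outline, the per-sample GLM might look like the following sketch for a single electrode and time-frequency sample; mean-centring of the regressors is our assumption rather than a reported detail.

```python
import numpy as np

# Regress trial-wise power on similarity, load and their interaction for one
# electrode/time-frequency sample. Inputs are 1-D arrays over trials.
def fit_sample_glm(power, similarity, load):
    sim_c = similarity - similarity.mean()
    load_c = load - load.mean()
    X = np.column_stack([np.ones_like(power), sim_c, load_c, sim_c * load_c])
    betas, *_ = np.linalg.lstsq(X, power, rcond=None)
    return betas  # [intercept, similarity, load, similarity x load]

# Repeating this over every electrode, time point and participant yields the
# per-subject parameter maps that enter the group-level mixed-effects stage.
```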
Once the group-level analyses were completed, we identified clusters of consecutive samples (either consecutive in time and/or across neighboring electrodes). To do this, the output of the GLM was first converted into t statistics, using the mean of the interaction parameter across the subjects and the standard deviation of the parameter across subjects. This was repeated across all electrodes and time points to produce a single dataset that expressed our effect as t values. To be included in a cluster, the t statistic of that particular sample had to exceed 2.1. This threshold is essentially arbitrary since it is the subsequent permutation procedure that tests for significance (the same threshold is applied to each permutation to produce the null distribution). This particular clustering threshold was chosen because it approximates a two-tailed p = 0.05 threshold. Once our clusters were identified we recorded the size of these clusters. We then used a sign flipping permutation procedure to produce a null distribution, using 5000 permutations. With each random permutation we identified the size of any clusters where t > 2.1; after many permutations this resulted in a distribution that expressed the size and frequency of clusters of t > 2.1 that we could find by chance under the null hypothesis. We were then able to compare the size of our clusters to this null distribution, thereby identifying their relative alpha level and produce a P value. This approach has a number of advantages relative to more traditional approaches to significance testing with electrophysiological data: firstly, it makes no a priori assumptions about when or where effects are likely to be apparent within the epoch of interest, as is sometimes the case if researchers focus on particular peaks and latencies; secondly, this approach accounts for multiple comparisons over space and time, which can result in reporting spurious effects if not corrected for (Kilner, 2013). We did not enter all the GLM parameter estimates into the multiple-comparisons correction (i.e., we just explored the interaction term). This was because each additional comparison ought to be reflected in the correction; if the analysis across all time points and electrodes is being repeated multiple times to explore the effect of various regressors then these repetitions should be factored into the multiple-comparisons correction. For this reason we chose to focus on the contrast of primary interest here-the interaction between VSTM load and target-distracter similarity.
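A minimal sketch of the sign-flipping step follows; `params` and the simplified 1-D `cluster_sizes` helper are ours (the real analysis clusters over time and neighbouring electrodes, as described above).

```python
import numpy as np

def cluster_sizes(t_map, thresh=2.1):
    """Toy 1-D clustering: lengths of runs of consecutive |t| > thresh samples.
    (The real analysis also clusters across neighbouring electrodes.)"""
    sizes, run = [], 0
    for above in (np.abs(t_map) > thresh):
        if above:
            run += 1
        else:
            if run:
                sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

def max_null_clusters(params, n_perm=5000, seed=0):
    """Null distribution of the largest cluster under random sign flips.
    `params`: (n_subjects x n_samples) per-subject interaction parameters."""
    rng = np.random.default_rng(seed)
    n_sub = params.shape[0]
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))  # flip whole subjects
        flipped = params * signs
        t_map = flipped.mean(0) / (flipped.std(0, ddof=1) / np.sqrt(n_sub))
        sizes = cluster_sizes(t_map)
        null_max[p] = max(sizes) if sizes else 0
    return null_max  # observed cluster sizes are compared to this distribution
```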
To summarize, our EEG analysis enabled us to estimate the linear effect of the continuous variable of target-distracter similarity and to compare this effect over the two different levels of load (Load 3 vs. Load 5). This is conducted over the whole set of electrodes and across all time points, without regions or time-windows of interest. The results are then fully corrected for multiple comparisons over both space and time.
BEHAVIORAL DATA
Increasing the memory load significantly increased mean RT (t (14) = 9.152, p < 0.001) and reduced mean accuracy (t (14) = 7.159, p < 0.001). We also conducted an analysis on the accuracy of trials when the final item was probed as all trials are well matched across the two levels of VSTM load (that is the final matrix is perceptually identical across the two levels of load). We split the trials into those on which the distracters were similar or dissimilar (using a median split along the target-distracter similarity dimension). We then averaged these together, thereby reducing target-distracter similarity to a two level factor, and included it alongside VSTM Load in a 2 × 2 repeated measures ANOVA. There was a significant impact of target-distracter similarity on accuracy (F (1,14) = 8.008, p = 0.013), but no significant impact of VSTM Load on the accuracy of these M3 trials (F (1,14) = 0.096, p = 0.761). The interaction between these two factors approached significance (F (1,14) = 3.521, p = 0.082), because there was no difference between the two levels of VSTM load when the distracters were very similar to the target (t (14) = 0.648, p = 0.527), but there was when the distracters became more dissimilar (t (14) = 3.928, p = 0.002).
However, this more conventional way of testing for the interaction between VSTM load and distracter processing is not well suited to our design, because it does not make full use of the target-distracter similarity continuum. Indeed, dichotomizing our target-distracter similarity variable, something necessary for performing the conventional ANOVA, reduces the overall statistical power of the comparison (Cohen, 1983), and can produce misleading results (MacCallum et al., 2002). For this reason we also analyzed the behavioral data using a regression approach. This allowed us to include the trial-by-trial changes in target-distracter similarity, in a way that mirrored the electrophysiological analysis. We quantified the effect of target-distracter similarity on accuracy, for each trial type, using a logistic regression. The resulting slopes were submitted to a two-way ANOVA, with the within-subject factors of Order (whether the first, second, or third array was probed) and Load (Load 3 vs. Load 5). There was a main effect of Load (F (1,13) = 11.012, p = 0.006), with the effect of target-distracter similarity being greatest for Load 3 relative to Load 5 trials. However, there was no main effect of serial Order (F (2,26) = 0.128, p = 0.880). These two factors interacted significantly (F (2,26) = 3.583, p = 0.042): memory Load had no effect upon slopes when the first item was probed (F (1,13) < 0.001, p = 0.992) or when the second item was probed (F (1,13) = 2.221, p = 0.160), but it did when the final item was probed (F (1,13) = 7.964, p = 0.014).
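For illustration, the per-subject, per-condition slope could be estimated with an unpenalized logistic fit along these lines (a sketch using statsmodels; variable names are ours):

```python
import numpy as np
import statsmodels.api as sm

# Logistic regression of trial accuracy (0/1) on the continuous trial-wise
# target-distracter dissimilarity, fit separately per subject and condition.
def similarity_slope(dissimilarity, correct):
    X = sm.add_constant(np.asarray(dissimilarity, dtype=float))
    fit = sm.Logit(np.asarray(correct, dtype=int), X).fit(disp=0)
    return fit.params[1]  # the slope entered into the Order x Load ANOVA
```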
It is difficult to interpret the behavioral data from this task, but what can be more readily interpreted is the final simple main effect: target-distracter similarity has a greater effect on Load 3 relative to Load 5 trials, and this cannot stem from differential delay from presentation or from any perceptual differences, since the final array was identical across the two levels of memory load. These data can be seen in Figure 2A. The same pattern of results was apparent in the RT data, although there was no significant interaction between memory Load and serial Order (F (2,28) = 0.975, p = 0.390), or significant main effects of either factor (Order: F (1,14) = 0.510, p = 0.606; Load: F (1,14) = 3.381, p = 0.087). These data can be seen in Figure 2B.
Interaction between target-distracter similarity and VSTM load
The design of our task enabled us to explore the extent to which memory load modulated the effect exerted by target-distracter similarity on amplitudes. Over the frontal and fronto-central electrodes, and from 140 to 440 ms, memory load attenuated the effect of target-distracter similarity; i.e., when memory load was high, target-distracter similarity had less of a suppressive effect on power estimates (P corrected = 0.0474). A topographical plot of this interaction can be seen in Figure 3B, with the effect of target-distracter similarity also being plotted separately for the two levels of memory load in Figure 3C and Figure 3D (Load 3 and Load 5, respectively). In Figure 3A we use the frontal (Fz, F1, F2, F3 and F4) and fronto-central (FC1, FC2, FC3 and FC4) electrodes to show the time-course of this interaction. We plotted the effect in terms of the "parameter estimate", which corresponds directly to the relative effect of target-distracter similarity on power estimates (i.e., the steepness of the slope from our GLM). We reasoned that this reduced effect of target-distracter similarity may stem from a reduced ability to suppress distracters when memory becomes full. If this were the case then we might expect that those participants who are worst at the load 5 trials would show the greatest attenuation of the target-distracter similarity effect with load.
To test this we extracted the size of the interaction for each individual and correlated this with performance on load 5 trials. The relationship between these two factors was negative (r = −0.519, p = 0.057): the worse a participant performed on load 5 trials, the greater the attenuation of the distracter effect with load tended to be, although this relationship failed to reach significance. We were also interested in the frequencies that drove this suppression effect in our result. For this reason, we reanalyzed the data looking for the interaction between target-distracter similarity and VSTM load separately across different time-frequency bands (separately from 2-30 Hz in 0.5 Hz steps). The results of this can be seen in Figure 4, for Load 3 and Load 5 trials separately, and for the difference between them. From this we can see that our effect is primarily driven by a suppression of beta band activity, which is greater for Load 3 trials.
DISCUSSION
This study aimed to provide further insight into the relationship between the neural mechanisms of attentional selection and VSTM maintenance. To do so, we generated a set of stimuli in which we manipulated parametrically the degree of similarity between targets and distracters (from 90% similar to only 50% similar). This allowed us to vary, in a continuous trial-wise manner, the ease with which participants could select a sequence of targets amongst distracters. The behavioral data from this task are necessarily difficult to interpret, because on each trial there is only one opportunity to obtain a response, despite items being presented in sequence. This makes our interaction between serial order, VSTM load and our target-distracter similarity factor difficult to interpret, although we can think of a number of possible explanations. One possibility is that the impact of target-distracter similarity is swamped by a recency effect (Waugh and Norman, 1965;Shimi and Astle, 2013): that is, performance is overall worse when the first two sets of items are probed, relative to when the final item is probed, and the impact of target-distracter similarity may only be apparent on the most accurate trials (i.e., those that come towards the end of the sequence). A second possibility is that once items are encoded into VSTM, subsequent processes, such as consolidation or rehearsal, alter the attentional control effect. This could explain why the target-distracter manipulation only has a significant effect upon the final item in the sequence, presumably before these processes can take effect. A third possibility is that the target-distracter manipulation may only take effect when VSTM becomes full, towards the end of the sequence. With these data alone we do not think that we can tease apart these explanations. To do this, a design is required wherein the experimenter can separate these processes, by varying sequence length and probing performance at different points in the sequence. In addition, incorporating a system of retrieval cues may enable the experimenter to separate the impact of target-distracter similarity on processing during different phases of the trial: at encoding, during maintenance, or at the point of retrieval. Nonetheless, the critical behavioral result from the current design that can unambiguously be interpreted was that the degree of similarity between targets and distracters had the greatest effect on low VSTM load trials, and that this was apparent when the final item in the sequence was probed. This greater effect of target-distracter similarity on Load 3, relative to Load 5, trials cannot stem from differential delay from presentation or from any perceptual differences, since the final array was identical across the two levels of memory load. In order to understand the electrophysiological basis of this attention-VSTM interaction, we used a GLM in which target-distracter similarity was a continuous trial-wise regressor. As in our behavioral data, this factor interacted with VSTM load.
GRADED CHANGES IN DISTRACTER FILTERING ARE MODULATED BY VSTM LOAD
Our data demonstrate that distinguishing targets from distracters requires active cognitive control. The same graded changes in perceptual similarity between targets and distracters exerted effects of a different magnitude depending upon how many items participants maintained in VSTM. Following the onset of the final array of items, the more similar the targets and distracters the greater the power suppression over the frontal electrodes, primarily in the beta band. This frontal power suppression was further modulated by VSTM load; the more items being actively maintained in VSTM prior to the onset of this final item, the lesser the effect of target-distracter similarity.
A breakdown of our results shows that this suppression is most prominent in the beta band. In general, oscillations are thought to play a critical role in coordinating activity of distinct regions of cortex and in regulating neuronal excitability (e.g., Haegens et al., 2011). However, the specific role of beta band activity is not well understood. A view growing in popularity is that whilst rapid neural rhythms indicate the integration of information over small spatial scales, slower rhythms, such as those in the beta band, correspond to the integration of information over larger spatial scales (e.g., Engel and Fries, 2010). Owing to its prominence at rest, the beta rhythm has been termed an "idling rhythm" (Pfurtscheller et al., 1996).
The suppression of beta band activity has been shown to correspond closely with the implementation of voluntary top-down control processes, over both motoric and cognitive processes (see Engel and Fries, 2010, for a review). For example, coherence between frontal and parietal regions predominantly occurs within the beta band during an endogenous top-down attentional search, but more predominantly in the gamma band during attentional pop-out searches (Buschman and Miller, 2007).
One possible interpretation of our result is that the power suppression that we observed reflects the actual processing of the distracters themselves. When VSTM load is high, it is possible that there are no more resources available for processing distracters, hence their reduced effect on power (Lavie, 1995). However, we think it is more likely that this frontal power suppression reflects participants' top-down control of the target-distracter competition; i.e., when VSTM load was high, participants were not able to exert the control necessary to mitigate the influence of the distracters, and this is why the neural effect of the distracters is attenuated in the high VSTM load condition. We believe that our results are more readily explained by this latter interpretation. Firstly, this explanation fits well with the behavioral data. Secondly, this is in line with findings from the functional magnetic resonance imaging (fMRI) literature, which have demonstrated that when memory load is high, participants are less able to attenuate the sensory processing of task-irrelevant distracters (Rissman et al., 2009;Kelley and Lavie, 2011). These studies have shown that the active suppression of salient incongruent distracters is impaired by a concurrent maintenance task. For example, Rissman et al. (2009) showed that participants were less able to use attentional control to attenuate the processing of irrelevant visual distracters when maintaining a high load of auditory memory items. Similarly, Kelley and Lavie (2011) demonstrated that the early visual processing of distracters was modulated by short-term memory load. When memory load was high, the distracters exerted a larger effect on early sensory processing. A final example that supports this interpretation is that when memory load is high, functional connectivity between areas in frontal cortex and areas in occipital cortex is reduced (Soto et al., 2012), which may provide the underlying mechanism by which top-down control can be exerted on visual sensory processing. These results collectively suggest that resources expended in maintaining information are also used for attentional control: that is, when resources are already tied up with maintenance, attentional control functions are impaired. The experimental and analytic approach we employed here allowed us to demonstrate that graded changes in target-distracter discrimination are modulated by VSTM load. In a previous study we used behavioral measures to demonstrate this effect (Shimi and Astle, 2013); in behavioral terms, larger differences between targets and distracters were needed for successful target selection when VSTM load was high. Here, we extend this finding to demonstrate its neural basis.
An alternative explanation for our results could be that the event-related induced response effect simply reflects generic difficulty per se; that is, when difficulty is also high due to VSTM load, the relative effect of sensory discrimination difficulty is reduced. Although such a possibility exists, we believe that if this were true then we should also expect to see an impact of VSTM load on those trials when the final item is probed. However, whilst the relative effect of target-distracter similarity is altered, there is no main effect of VSTM load on these trials. A further possible interpretation could be that as distracters become more target-like, participants mistake them for targets and thus they exert a VSTM load effect. Of course this is possible and it is difficult to rule out; nonetheless, we do not think that it can account for, or undermine, the particular interaction that we report here. The same number of distracters was present across the two levels of VSTM load, so if participants began to erroneously store distracters as targets, then this should have increased the number of stored items equally for each level of VSTM load.
In the current design we defined our targets and distracters in RGB space, and did so universally for all subjects. However, in reality there is unlikely to be a linear relationship between RGB space and subjects' perception of color. Future studies could explore the relationship between VSTM load and perceptual competition effects more sensitively by titrating these discrimination values individually for each subject. Furthermore, this psychometric process would be better implemented using a color space that better reflects the color-opponent processes of human vision, such as the LAB system (with L corresponding to lightness, and the other values corresponding to the two color-opponent channels).
In conclusion, the findings here suggest that the memory load maintained in VSTM modulates graded changes in perceptual processing. These findings provide further insight into the close coupling between attentional filtering and VSTM maintenance and demonstrate that these two cognitive processes may share some underlying neurophysiological mechanisms. In short, when items are maintained in VSTM, our ability to use attentional control mechanisms to distinguish targets and distracters is modulated. | 2016-05-17T06:56:58.319Z | 2015-01-06T00:00:00.000 | {
"year": 2014,
"sha1": "71bf42ea8a03954f398c93220f22b611a8f3e20c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2014.01025/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71bf42ea8a03954f398c93220f22b611a8f3e20c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
259295849 | pes2o/s2orc | v3-fos-license | CLARIFY: cell–cell interaction and gene regulatory network refinement from spatially resolved transcriptomics
Abstract Motivation Gene regulatory networks (GRNs) in a cell provide the tight feedback needed to synchronize cell actions. However, genes in a cell also take input from, and provide signals to other neighboring cells. These cell–cell interactions (CCIs) and the GRNs deeply influence each other. Many computational methods have been developed for GRN inference in cells. More recently, methods were proposed to infer CCIs using single cell gene expression data with or without cell spatial location information. However, in reality, the two processes do not exist in isolation and are subject to spatial constraints. Despite this rationale, no methods currently exist to infer GRNs and CCIs using the same model. Results We propose CLARIFY, a tool that takes GRNs as input, uses them and spatially resolved gene expression data to infer CCIs, while simultaneously outputting refined cell-specific GRNs. CLARIFY uses a novel multi-level graph autoencoder, which mimics cellular networks at a higher level and cell-specific GRNs at a deeper level. We applied CLARIFY to two real spatial transcriptomic datasets, one using seqFISH and the other using MERFISH, and also tested on simulated datasets from scMultiSim. We compared the quality of predicted GRNs and CCIs with state-of-the-art baseline methods that inferred either only GRNs or only CCIs. The results show that CLARIFY consistently outperforms the baseline in terms of commonly used evaluation metrics. Our results point to the importance of co-inference of CCIs and GRNs and to the use of layered graph neural networks as an inference tool for biological networks. Availability and implementation The source code and data is available at https://github.com/MihirBafna/CLARIFY.
Introduction
In the complex human body system, cells continually interact with one another through a series of biochemical signals. This communication helps the encompassing tissue-an ordered collection of multiple cell types-maintain its shape and function. These extracellular signaling interactions (CCIs) often occur when ligands secreted from one cell bind to receptors on another cell. Identifying these interactions is critical to understanding the role of individual cells in maintaining tissue homeostasis, while responding to their microenvironment (Rouault and Hakim 2012;Zhou et al. 2018). Thus, methods have been developed to elucidate these cell-cell interactions (Almet et al. 2021;Armingol et al. 2021;Dimitrov et al. 2022).
These methods are largely based on single-cell (sc)RNA-seq data and, unfortunately, result in the positive labeling of many false interactions. For example, a cell expressing a ligand may be deemed to interact with another cell expressing the receptor, regardless of their spatial location. In reality, the interaction can happen only if the pair is proximal, as the ligand can only diffuse so far through a tissue. With the rise of spatial transcriptomics, we are now able not only to understand gene expression in a single cell, but also to identify the spatial location of the cell expressing the gene (Ståhl et al. 2016;Wang et al. 2018;Eng et al. 2019;Rodriques et al. 2019). These methods have now introduced post-processing steps to cut down on false-positive interactions by eliminating distant predicted interactions (Efremova et al. 2020;Garcia-Alonso et al. 2021). Those that use spatial transcriptomics data from the start mainly predict cell-type-level interactions (Cang and Nie 2020;Efremova et al. 2020;Shao et al. 2022).
Note that these extracellular interactions are not standalone, but occur alongside intracellular molecular interactions. Gene expressions are known to be regulated by transcription factors (TFs), which are also encoded by genes. Together they form networks called gene regulatory networks (GRNs) (Levine and Davidson 2005). Many methods have also been developed for GRN inference using gene expression data, mostly for bulk cells (Pratapa et al. 2020), while some infer cell-type specific GRNs (Chasman and Roy 2017;Wang et al. 2021). A few known methods have been created for single cell-specific GRN inference (Zhang et al. 2022b;Zhang and Stumpf 2023). However, to our knowledge, there are no published methods for inferring GRNs using spatial transcriptomic data.
To summarize, both CCI inference and GRN inference have been extensively researched in the last few years even at the single cell level. However, current methods view the two tasks as being essentially separate. In reality, however, intracellular signaling (through GRNs) affects extracellular signaling (CCIs) and vice versa. Extending from our previous example, when a ligand from one cell binds to a receptor on another, it will activate or repress a signal transduction pathway in the second cell, thus significantly impacting the GRN of the second cell. Similarly, the extracellular signals generated from cell 2 may, in turn, further activate or repress the communication from cell 1. Therefore, while many methods have been published for CCI inference that incorporate spatial constraints, they are still plagued with a high number of false positives, as downstream gene regulatory information is not incorporated. Similarly, with GRN inference, there is a need to infer spatial context aware and cell-specific GRNs. Here, we make the reasonable assumption that the closer two cells are in spatial proximity, not only are they more likely to engage in a CCI, but also their GRNs should be more similar as they will engage in similar regulatory actions. The cells that are spatially close AND of the same type shall have the most similar GRNs, for the aforementioned reasons. Using this idea, we propose the first method for a joint refinement of spatially-aware CCI and GRNs.
While it is logical to motivate the need for joint inference of extracellular and intracellular interactions, developing computational methods for simulating and inferring these complex signaling pathways remains a challenging task. Our method relies on first viewing this entire network of interactions as a multi-level knowledge graph incorporating information from the cell-level and the gene-level. Our method then utilizes graph neural networks (GNNs) to embed both the cell-level and the gene-level information together into a robust latent representation. GNN based methods have become largely ubiquitous in the computational biology domain (Yuan and Bar-Joseph 2020;Li and Yang 2022) and in biomedicine/drug discovery as well (Li et al. 2022b;Zeng et al. 2022) largely because of their ability to take advantage of contextual information (Tie and Pe 2022). They have been used in myriad situations where spatial context was important and have recently made breakthroughs in biological findings (Wu et al. 2022;Zhang et al. 2022a). This motivates GNNs as a fitting candidate for our task to learn our multi-level knowledge graph.
We propose CLARIFY, a multi-level graph autoencoder (GAE) that refines intracellular and extracellular interaction networks by utilizing the spatial organization of single cells given by spatial transcriptomics data. CLARIFY takes spatial transcriptomics data as input and produces cell-level, gene-level, and combined embeddings that encapsulate the single-cell gene expression, spatial context, and gene regulatory information to aid in the refinement of extracellular/intracellular interactions. We test CLARIFY on two real datasets and one simulated dataset. For the task of CCI reconstruction, we compare the performance of CLARIFY with the only other existing semi-supervised learning method for this task, DeepLinc. Additionally, on simulated data, where ground truth GRNs and cell-type CCIs are available, we compare the CCI inference with SpaOTsc (Cang and Nie 2020) and the GRN inference with Genie3 (Huynh-Thu et al. 2010). We show that CLARIFY outperforms existing methods in both cell-level and gene-level tasks, while, unlike the baselines, tackling the problem jointly. This, along with our multiple spatial enrichment experiments, confirms that CLARIFY is able to refine both the cell-level and the gene-level regulatory interaction networks, clarifying the true spatially constrained dynamics of the tissue.
Materials and methods
Here, we describe our multi-level graph autoencoder (GAE) approach, starting with the input knowledge graph construction, then graph neural network inference, and finally the training objective.
Multi-level graph construction
To address the shortcomings of current methods in extracellular/intracellular interaction prediction, our multi-level construction can be broken into two main views: cell-level and gene-level. The goal of the cell-level graph is to encode the notion of spatial constraints, while the gene-level graph provides the downstream gene regulatory information. For simplicity, we denote every cell-level element with subscript 'c' and every gene-level element with subscript 'g'. For this section, refer to Fig. 1.
Cell-level graph
At the cell level, we view each single cell as a vertex in our graph. To utilize the spatial component of our data, we connect edges between cell vertices based on spatial proximity. If no ground truth interactions are available, we use a k-NN algorithm on the spatial coordinates to determine edges. We denote the adjacency matrix describing the vertices and edges as $A_c \in \mathbb{R}^{n_c \times n_c}$, where $n_c$ is the number of cells; $[A_c]_{i,j} = 1$ if there exists an edge connecting cell $i$ and cell $j$, and 0 otherwise. Finally, each cell (vertex) in our graph has an attributed feature vector based on the single-cell expression values (each row of the ST data). This can be organized into a feature matrix $X_c \in \mathbb{R}^{n_c \times f_c}$, where $f_c$ is the number of features (genes) per cell.
Together, the adjacency matrix $A_c$ and feature matrix $X_c$ make up our cell-level proximity graph $G_c$, which will be used as one part of the training input to our model. In essence, the purpose of this cell-level graph construction is to introduce to our model the notion of which cells have the capacity to interact, based on their spatial location in the tissue.
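A minimal Python sketch of this construction, assuming hypothetical `coords` and `expr` arrays standing in for the ST coordinates and expression matrix (k = 3 is an illustrative choice):

```python
# Sketch: build the cell-level proximity graph G_c from spatial coordinates.
import numpy as np
from sklearn.neighbors import kneighbors_graph

coords = np.random.rand(100, 2)   # placeholder spatial coordinates (n_c x 2)
expr = np.random.rand(100, 50)    # placeholder expression matrix X_c (n_c x f_c)

# Connect each cell to its k nearest spatial neighbors (binary adjacency A_c).
A_c = kneighbors_graph(coords, n_neighbors=3, mode="connectivity")
A_c = A_c.maximum(A_c.T)          # symmetrize so edges are undirected
X_c = expr                        # per-cell feature vectors (rows of the ST data)
```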
Gene-level graph
At the gene level, we essentially take the cell-level graph one step further, viewing each single cell as a subgraph given by its underlying cell-specific gene regulatory network (GRN). To do this, we first infer baseline cell-specific gene regulatory networks with the CeSpGRN method (Zhang et al. 2022b). Note that cell-type-level GRN inferences can also be utilized, but cell-specific methods capture more cell-cell variability. As the first part of the gene-level preprocessing, we take in the input cell-level feature matrix $X_c \in \mathbb{R}^{n_c \times f_c}$ defined in the previous subsection. CeSpGRN then infers and outputs a gene regulatory network for each single cell, as a list of adjacency matrices in which each vertex represents a gene (of the same name across cells) belonging to a specific cell. The gene adjacency matrix is first constructed by stacking the cell-specific GRN adjacency matrices diagonally, resulting in a block-diagonal matrix $A_g \in \mathbb{R}^{n_g \times n_g}$, where $n_g$ is the total number of genes, i.e. each gene in each cell corresponds to one row or column. Note that for each cell $i$ in the cell-level graph $G_c$, there exists a corresponding GRN component in the gene-level graph $G_g$, represented by a block along the diagonal in $A_g$ and denoted by the pink dotted line in Fig. 1d between the cell/GRN pair across the two graphs.
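A short sketch of the block-diagonal stacking, with a randomly generated `grns` list standing in for the CeSpGRN output:

```python
# Sketch: stack per-cell GRN adjacencies into the block-diagonal matrix A_g.
import scipy.sparse as sp

g = 50                                        # genes per cell (hypothetical)
grns = [sp.random(g, g, density=0.05) for _ in range(100)]  # placeholder GRNs
A_g = sp.block_diag(grns, format="csr")       # intracellular edges on the diagonal
```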
We then augment the gene-level graph with inter-cellular edges by translating the proximity edges of $G_c$ to the GRN components of $G_g$. To do this, we first must understand which genes of a cell have the capacity to interact with genes of neighboring cells. These cell-cell interactions (CCIs) are primarily mediated by the genes corresponding to ligands and receptors.
Using a standard ligand-receptor (LR) database (Shao et al. 2021), we identify LR genes in every GRN. The LR edges are then constructed in the following manner: given cell $i$ and cell $j$, if they share an edge in $G_c$ (meaning they are spatially proximal), we construct an edge between every LR gene pair in GRN $i$ and GRN $j$ in $G_g$. That is, $[A_g]_{u,v} = 1$ if $u$ is in cell $i$ and $v$ is in cell $j$, and $(u, v)$ is a gene pair present in the LR database. The adjacency matrix $A_g$ thus has intracellular (GRN) edges on the block diagonal and extracellular (CCI-LR) edges off the block diagonal. For proper graph autoencoding, we establish an initial feature vector in $\mathbb{R}^{f_g}$ for each vertex (gene) in our graph using the Node2Vec method (Grover and Leskovec 2016), where each vector represents an embedding of the corresponding vertex's local network neighborhood. The feature vectors can be grouped into a matrix $X_g \in \mathbb{R}^{n_g \times f_g}$, analogous to the cell features but differing in dimension.
The adjacency and feature matrix complete our gene-level graph construction, which can essentially be thought of as a graph of GRN subgraphs. With this gene-level graph, we effectively provide our model the knowledge of each cell's underlying gene regulatory network, which models the downstream effect of extracellular interactions.
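As an illustration of the LR-edge augmentation described above, a hedged sketch in which `lr_pairs` and `gene_names` are hypothetical stand-ins for the LR database and the per-cell gene list:

```python
# Sketch: add extracellular LR edges between spatially proximal cells.
def add_lr_edges(A_g, A_c, gene_names, lr_pairs, g):
    A_g = A_g.tolil()                          # easier incremental edits
    idx = {name: i for i, name in enumerate(gene_names)}
    for i, j in zip(*A_c.nonzero()):           # spatially proximal cell pairs
        for lig, rec in lr_pairs:
            if lig in idx and rec in idx:
                # offset gene indices into cell i's and cell j's blocks
                A_g[i * g + idx[lig], j * g + idx[rec]] = 1
    return A_g.tocsr()
```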
Multi-level graph autoencoder framework
Overview

CLARIFY has four inputs: the features and binary adjacency matrices from both the cell-level and gene-level graphs ($X_c$, $A_c$, $X_g$, $A_g$). CLARIFY makes use of two parallel graph neural network encoders (see Fig. 2), $E_c(\cdot)$ and $E_g(\cdot)$, for the cell-level and gene-level graphs respectively. Each encoder embeds the respective cell or gene features into latent representations. These separate latent representations are then aggregated (by either concatenation or averaging) to integrate learned information from both levels. This combined latent variable is then decoded (via inner product) into a reconstructed cell-level adjacency matrix. The model is optimized on its ability to reconstruct the cell-level adjacency, but also penalized for harsh changes in intracellular gene interactions.
GCN layer
For the encoding layers of CLARIFY, we utilize Graph Convolutional Networks (GCNs), a widely used GNN architecture that has become omnipresent in the computational biology world.
Built upon message-passing neural networks, a GCN can be deconstructed into a series of message passing and aggregation steps. This can be thought of as a function $Z = f(X, A)$ that takes a graph's vertex features $X$ and adjacency $A$ and uses the edges to pass messages between neighboring vertices, embedding the vertex features into a more effective representation $Z$. In this way, the development of novel GCN layers is essentially a tweaking of the function $f(\cdot)$, i.e. of the steps taken in message passing and aggregation. Note that we can stack these layers analogously to standard convolutional neural networks. For our model, we use stacked graph convolutional layers with the message-passing rule proposed by Kipf and Welling (2016):

$$Z^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\, Z^{(l)}\, W^{(l)}\right) \quad (1)$$

At GCN layer 0, $Z^{(0)}$ is the initial input node features $X$. The graph's input adjacency matrix is symmetrically normalized, as shown by the normalization step in (1). Note that $\tilde{A} = A + I_n$ and $\tilde{D}$ is the degree matrix of $\tilde{A}$. At each layer $l$, there is a learnable weight parameter $W^{(l)}$.
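A compact numpy sketch of this propagation rule (the ReLU nonlinearity is a common but here assumed choice):

```python
# Minimal numpy sketch of the Kipf-Welling GCN propagation rule.
import numpy as np

def gcn_layer(Z, A, W):
    A_tilde = A + np.eye(A.shape[0])               # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # D^{-1/2}
    A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ Z @ W, 0.0)         # ReLU(A_hat Z W)
```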
Cell/gene level encoders
To adapt the standard GAE to our task on a graph with multiple levels, we utilize two parallel graph encoders, one for each level. Both graph encoders use GCN layers to embed the vertex features of their respective level: $Z_c = E_c(X_c, A_c)$ and $Z_g = E_g(X_g, A_g)$.
Note that $Z_c \in \mathbb{R}^{n_c \times d}$ and $Z_g \in \mathbb{R}^{n_g \times d}$, where $d$ is the dimension of the latent embedding space. Each row in $Z_c$ is the latent representation of a cell (vertex) in $G_c$. Each row in $Z_g$ is the latent representation of a gene belonging to a single cell's GRN. We aggregate each GRN's gene representations into one gene-level cell embedding, such that the updated matrix is of the form $Z^*_g \in \mathbb{R}^{n_c \times d}$. Formally, for the $k$ genes in cell $i$, either pooling (averaging) or concatenation (written in direct-sum notation) can be used for this step of aggregation. Essentially, this step aggregates the gene-level embeddings by the cells to which they belong, effectively creating a GRN-based cell-level embedding. We then integrate the information learned in the original cell-level embeddings and the new (GRN) cell-level embeddings by concatenating the two matrices: $Z = Z_c \oplus Z^*_g$. The resulting embedding encapsulates the single-cell gene expression, spatial context, and downstream gene regulatory information.
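A small sketch of the aggregation and concatenation step, assuming the genes in $Z_g$ are ordered cell by cell with g genes per cell:

```python
# Sketch: mean-pool gene embeddings per cell, then concatenate with Z_c.
import numpy as np

def combine_embeddings(Z_c, Z_g, g):
    n_c, d = Z_c.shape
    Z_g_star = Z_g.reshape(n_c, g, d).mean(axis=1)   # pool each cell's GRN genes
    return np.concatenate([Z_c, Z_g_star], axis=1)   # combined embedding Z
```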
Cell/gene level decoders
For both the cell-level and the gene-level tasks, graph reconstruction is done via inner-product decoders. The inner-product decoder for the cell level makes use of the combined embedding $Z$:

$$A'_c = \sigma(Z Z^\top)$$

The gene-level decoder, on the other hand, carries out the gene-level graph reconstruction using only the gene-level embeddings:

$$A'_g = \sigma(Z_g Z_g^\top)$$

The inner-product decoders compute the inner product (a similarity score) between each pair of embeddings. Each similarity score is an entry in the resulting matrix, representing how likely an edge is to exist between the two candidate vertices. The sigmoid function $\sigma$ is then applied to transform the similarity matrix into probabilities that represent the existence likelihood of an edge. These, in essence, are the reconstructed values of the adjacency matrix.
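The decoders can be sketched as follows; this mirrors the standard GAE inner-product decoder rather than reproducing CLARIFY's exact implementation:

```python
# Sketch: inner-product decoder with a sigmoid over pairwise scores.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(Z):
    return sigmoid(Z @ Z.T)       # entry (i, j): probability of an edge i-j

# A_c_rec = decode(Z)             # cell-level reconstruction from combined Z
# A_g_rec = decode(Z_g)           # gene-level reconstruction from Z_g alone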
Training objective
CLARIFY is optimized on two tasks, the first of which is its ability to reconstruct the spatial proximity edges defined by the cell-level adjacency matrix $A_c$. For this, we utilize a binary cross-entropy (BCE) reconstruction loss. Note that each $(i, j)$ entry of $A_c$ represents the ground truth label for the existence of a proximity edge between cell $i$ and cell $j$, and each $(i, j)$ entry of $A'_c$ represents CLARIFY's predicted probability score for that same edge. Thus, the BCE loss is

$$L_c = -\frac{1}{n_c^2}\sum_{i,j}\left[[A_c]_{i,j}\log [A'_c]_{i,j} + \left(1 - [A_c]_{i,j}\right)\log\left(1 - [A'_c]_{i,j}\right)\right]$$

As the model trains, the updated weights will drastically change each of the gene feature vectors in the gene-level graph. To reduce the effect of this message propagation on the cell-specific GRN information, we include a secondary loss term that ensures the edges in each cell-specific GRN are not changed too drastically, but rather just enough to be spatially refined. Recall that the intracellular (GRN) edges are located on the block diagonal of $A_g$. Thus, for $L_g$, we use a mean-squared-error loss between the block-diagonal entries of $A_g$ and the reconstructed $A'_g$. Each block is of dimension $\mathbb{R}^{g \times g}$, where $g$ is the number of genes per cell. Formally, we define a mask $M$ over the block-diagonal entries of $A_g$: in other words, a matrix with 1s in the $g \times g$ blocks along the diagonal. The entries of $A_g$ are then masked by element-wise multiplication $\odot$, and the same is done for $A'_g$. The loss over the block-diagonal entries is

$$L_g = \left\| M \odot A_g - M \odot A'_g \right\|_F^2$$

We combine these losses in a weighted sum, where the $\lambda_i$ are hyperparameters defined by the user depending on whether the preservation of GRN information or spatial refinement is more important (both are set to 1 by default). The total loss is

$$L = \lambda_1 L_c + \lambda_2 L_g$$
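A sketch of the combined objective under these definitions; the dense mask and default loss weights are illustrative simplifications:

```python
# Sketch: BCE on the cell adjacency plus masked MSE on GRN (block-diagonal) edges.
import numpy as np

def total_loss(A_c, A_c_rec, A_g, A_g_rec, g, lam1=1.0, lam2=1.0, eps=1e-9):
    bce = -np.mean(A_c * np.log(A_c_rec + eps)
                   + (1 - A_c) * np.log(1 - A_c_rec + eps))
    n_c = A_g.shape[0] // g
    mask = np.kron(np.eye(n_c), np.ones((g, g)))    # 1s on g x g diagonal blocks
    mse = np.mean(((A_g - A_g_rec) * mask) ** 2)
    return lam1 * bce + lam2 * mse
```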
Results
We evaluated CLARIFY in a series of experiments, broken up into two main components: cell-level and gene-level.
Recall that CLARIFY jointly refines both cell-level (CCI) and gene-level (GRN) interactions, and it is the only known method to do so. Typically, these two problems have been viewed as distinct, and independent methods were devised to solve each one. Therefore, we evaluate CLARIFY's performance separately against existing methods in each domain.
Datasets
Due to the lack of spatial transcriptomics data at single-cell resolution, only a handful of datasets are available, and most of them have not been extensively studied, so no ground truth interactions are known for the real datasets. For each task, we evaluated CLARIFY and existing methods on two real spatial transcriptomics datasets and one simulated dataset. We considered two published datasets on mice. The first dataset was acquired from the mouse visual cortex using seqFISH technology (Lubeck et al. 2014); it captures transcript expression from 125 genes in 1597 single cells, along with the spatial location of the expressed transcripts. The second dataset was a slice from the mouse hypothalamus using MERFISH technology (Moffitt et al. 2018), which sampled 160 genes in 2000 single cells. Data from both sets were preprocessed using a standard approach (log transform over counts), as also used by other tools such as DeepLinc.
We also generated simulated data with scMultiSim (Li et al. 2022a). scMultiSim generates single cell gene expression data from multiple cell types as well as cell locations. The gene expression data is driven by the ground truth GRNs, CCIs, and cell-type structures.
Evaluation metrics
To evaluate CLARIFY, we use two commonly applied metrics. The first is a precision-recall based framework, specifically the Average Precision (AP) score, which calculates the weighted mean of precisions achieved at each threshold, with the weights defined by the increase in recall from the previous threshold. Note that the AP score is robust to highly skewed datasets, as it does not use linear interpolation. Secondly, we utilize the area under the receiver operating characteristic (AUROC), where the ROC curve measures the True Positive Rate (TPR) versus the False Positive Rate (FPR) at different decision thresholds. We used the scikit-learn implementations of these metrics (https://scikit-learn.org/stable/).
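Both metrics are available directly in scikit-learn; the label and score vectors below are placeholders:

```python
# The two evaluation metrics as computed with scikit-learn.
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = [1, 0, 1, 1, 0]              # placeholder edge labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.1]   # placeholder predicted probabilities

ap = average_precision_score(y_true, y_score)
auroc = roc_auc_score(y_true, y_score)
```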
Each of the experiments was designed to assess the main capabilities of CLARIFY on these datasets: reconstruction of the cell/gene interaction networks, and spatial refinement of said networks.
Cell-level experiments
To the best of our knowledge, we have identified only one method (DeepLinc) that is aimed at cell-interaction-landscape reconstruction. There are other CCI methods, but most of them operate at the cell-type level and do not seek to reconstruct and impute spatially refined edges as DeepLinc and CLARIFY do. Thus, our cell-level evaluations are mainly compared to DeepLinc. DeepLinc is similar in that it is a variational graph autoencoder for CCI reconstruction, but it does not incorporate downstream gene regulatory information, nor does it consider the joint problem of CCI and GRN refinement. Therefore, we evaluated CLARIFY against it for cell-cell interactions only, not gene-gene interactions. For the CCI reconstruction, we used DeepLinc's evaluation methodology to provide a fair comparison.
CLARIFY outperforms related methods for cell-cell interaction network reconstruction
For the task of CCI reconstruction, we first need to define a set of ground truth interactions, as the real datasets do not have any. Following the same procedure described for DeepLinc and our cell-level graph construction, we constructed cell-cell adjacency matrices for each of the real datasets by using the k-nearest-neighbor (kNN) algorithm to find, for each cell, the k closest neighbors in Euclidean distance (using the spatial coordinates). This follows the same assumption made in DeepLinc: that in a 2D tissue, each cell could be locally interacting with $k \geq 3$ other cells. As noted in the Methods section, this cell-level adjacency matrix $A_c$ was used as the set of ground truth interactions for CLARIFY to reconstruct.
To construct the training and testing split, we randomly selected 70% of the edges for CLARIFY to train on; the remaining 30% were masked out and used for testing/evaluation. These edges are denoted as the positive set. In each training and test set, we also add randomly sampled negative edges in a 1:1 ratio with the positive edges. To assess reconstruction performance, we measured the AP and AUROC in reconstructing the test-set edges over training epochs and compared them to DeepLinc's performance (see Fig. 3b and Supplementary Fig. S4b). CLARIFY significantly outperformed DeepLinc on the seqFISH and scMultiSim datasets, while the two methods achieved comparable results on the MERFISH dataset. These results strongly suggest that CLARIFY was able to properly incorporate not only spatial information and single-cell gene expression, but also the downstream network of regulating genes as part of the cell-level embeddings, and that this directly influenced its performance in reconstructing cell-cell interactions.
Next, to assess CLARIFY's robustness to different edge partitions, we also evaluated the model across all datasets while varying the number of test edges. DeepLinc noted that their model was mainly trained on a split of 10% test edges, leaving 90% for training; such a small test size may not be enough for a reconstructability task. Thus, across all datasets, we measured the AP and AUROC of test-edge reconstruction over splits ranging from 10% to 90% test edges. This was repeated 5 times for epochs 100, 110, and 120 (15 runs in total per split) to generate the boxplots. Once again, CLARIFY outperformed DeepLinc across all splits for the seqFISH and scMultiSim datasets while achieving comparable performance on the MERFISH dataset (Fig. 3c and Supplementary Table S1), indicating robustness in maintaining performance even when training on less data. Note that for the scMultiSim simulated data, the ground truth cell interaction graph is very sparse (Fig. 3a). This contributes to the unorthodox training curves: due to the low number of edges, each split of test edges may contain high variability, leading to slightly skewed performance for both models.
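A sketch of the edge split with 1:1 negative sampling used in these experiments (the sampling details here are an assumption consistent with the description):

```python
# Sketch: 70/30 positive-edge split with 1:1 uniformly sampled negative edges.
import random

def split_edges(edges, n_c, test_frac=0.3, seed=0):
    rng = random.Random(seed)
    edges = edges[:]
    rng.shuffle(edges)
    cut = int(len(edges) * (1 - test_frac))
    train_pos, test_pos = edges[:cut], edges[cut:]
    pos = set(edges)
    negatives = []
    while len(negatives) < len(edges):          # 1:1 negative ratio overall
        i, j = rng.randrange(n_c), rng.randrange(n_c)
        if i != j and (i, j) not in pos:
            negatives.append((i, j))
    return train_pos, test_pos, negatives[:cut], negatives[cut:]
```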
To evaluate CLARIFY's tolerance to noisy data, we perturbed the input training graph with false-positive and false-negative edges. For false-positive edges, we add fake edges to the input training graph of known ligand-receptor edges at rates from 0.1 to 0.5 times the original number of edges. Similarly, for false-negative edges, we remove edges from the training set at rates from 0.1 to 0.5. We then train CLARIFY on these noisy inputs, evaluate its Average Precision score on the test set of edges for each case, and compare the results to DeepLinc (Supplementary Fig. S2).
Lastly, for the scMultiSim simulated dataset, we obtained a cell-type CCI ground truth. As a baseline, we utilized a representative tool for cell-type-level interaction prediction from spatial transcriptomics, SpaOTsc (Cang and Nie 2020). We maintained a set of the cell-type pairs that SpaOTsc deemed significant, and then constructed a SpaOTsc cell-level adjacency matrix in $\mathbb{R}^{n_c \times n_c}$, where every (i, j) entry was set to 1 if the pair (cell i's type, cell j's type) belonged to the aforementioned set. We followed the same procedure to construct the ground truth adjacency matrix for scMultiSim and then compared CLARIFY's reconstructed adjacency matrix with SpaOTsc's adjacency matrix by measuring the AP and AUROC scores. We also provide a baseline based on randomly permuting the scMultiSim ground truth matrix (maintaining the number of ones) 100 times and calculating the average AP and AUROC scores against the unpermuted ground truth; this random baseline gives a reference point for the performance of the other methods. The final results are presented in Table 1.
It is worth mentioning that SpaOTsc does not require any labeled data for training, while DeepLinc and CLARIFY both split the interactions into training and testing sets. The large improvement of CLARIFY over SpaOTsc, and the fact that SpaOTsc's performance is close to random, indicate that supervision can significantly improve accuracy on this task.
CLARIFY latent cell embeddings indicate valid spatial refinement and preserve spatial domains
After establishing CLARIFY's reconstruction performance, we then assessed its ability to embed the input cell features (normalized counts) to latent representations that better contextualize the spatial distribution of cells in the tissue. These experiments help validate the claim that CLARIFY's cell embeddings are spatially refined.
To provide context, we first visualize pairwise Euclidean distances between cells in Fig. 4a. In this $n_c \times n_c$ matrix, entry (i, j) represents the distance between cell i and cell j using the ST data coordinates, so the matrix represents the distribution of spatially located cells. We generate a representation of cell-cell similarity using both the cells' initial features (Fig. 4b) and the cells' latent representations produced by CLARIFY (Fig. 4c). In both cases, the entry at (i, j) represents the Euclidean distance between cell i's and cell j's initial feature vectors or latent representations, respectively. We can see that the heatmap of the CLARIFY latent representations is visually more similar to the location distribution. For example, in Fig. 4c, the block-diagonal entries (cell-cell neighborhoods) are darker (closer), similar to Fig. 4a. In contrast, the initial feature distribution appears nearly uniform, with every pairwise comparison given a similarly high Euclidean distance (indicating that the features are equally distant and diverse). In comparison, we note that the CLARIFY latent representations have an underlying structure, but they are not completely identical to the location distribution, which is important, as spatial location is not the only information that the embeddings encapsulate. Rather, the embeddings represent spatial location combined with gene expression, gene regulatory network information, and cell-cell interaction information.
To quantify this result, we computed the Spearman correlation between the location heatmap and the cell-embedding heatmap and, as a baseline, between the location heatmap and the initial-features heatmap (see Table 2). Since the entire matrix is quite large and represents sparse distal interactions, we also provide the Spearman correlation between the block-diagonal entries of both matrices. These entries represent the cell-cell neighborhoods (cells that are spatially close, as shown in the location heatmap) and thus are more likely to be spatially refined. We compute this statistic for both real datasets in two scenarios: using the entire matrix and using the block-diagonal entries. The results are shown in Table 2. We note that the P-value of the Spearman correlation was highly significant in every single case (P-value < 2e-308) because of the large number of data points.
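A sketch of this correlation computation with SciPy, where the optional `mask` restricts the comparison to block-diagonal (neighborhood) entries:

```python
# Sketch: Spearman correlation between location and embedding heatmaps.
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def heatmap_correlation(coords, embeddings, mask=None):
    D_loc = squareform(pdist(coords))        # cell-cell spatial distances
    D_emb = squareform(pdist(embeddings))    # cell-cell embedding distances
    if mask is not None:                     # e.g. boolean block-diagonal mask
        D_loc, D_emb = D_loc[mask], D_emb[mask]
    rho, pval = spearmanr(D_loc.ravel(), D_emb.ravel())
    return rho, pval
```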
Across both datasets, we saw a significant improvement in the correlation when comparing the CLARIFY latent representation distribution to the location distribution, with a 2-4× increase in Spearman correlation. When using the entire matrix for comparison, there was a moderately positive correlation (0.22, 0.33), which is still interesting because the matrices represent both sparse and distal interactions. However, when using the block-diagonal entries of the matrix, representing the cell-cell neighborhoods in the tissue, there was a strong positive correlation (0.696, 0.625) compared to the initial features (0.25, 0.2).
As a final proof of concept, for both datasets, we clustered the cell latent representations using the k-means algorithm (k = 6), similar to the analysis in DeepLinc. Each of the six clusters was defined as a spatial domain (0 through 5) and then mapped back to each single cell and plotted (Fig. 4d). This provides another visual confirmation that, even with unsupervised clustering of the embeddings, the CLARIFY latent representations are clearly spatially organized into separate domains in the tissue.
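A sketch of this clustering step with scikit-learn, using placeholder embeddings:

```python
# Sketch: cluster latent cell embeddings into k = 6 spatial domains.
import numpy as np
from sklearn.cluster import KMeans

Z = np.random.rand(1597, 32)          # placeholder latent cell embeddings
domains = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(Z)
# Map `domains` back onto tissue coordinates for plotting, e.g.
# plt.scatter(coords[:, 0], coords[:, 1], c=domains)
```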
All of these results strongly indicate that CLARIFY representations are spatially correlated, thus validating CLARIFY's ability to spatially refine the single-cell features.
Gene-level experiments
CLARIFY cell-specific GRNs outperform existing cell-type inference methods

Currently, there are few methods that infer cell-specific GRNs (a main one is CeSpGRN, which is used for our initial graph construction). However, there are a number of cell-type GRN inference methods. The most notably benchmarked is Genie3 (Huynh-Thu et al. 2010), which utilizes a regression-tree-based method to infer GRNs from expression data (and is thus cell-type-specific). Although, as in the SpaOTsc case, CLARIFY is a semi-supervised cell-specific method, we still compare it to a representative cell-type method to gauge baseline performance. We use the scMultiSim dataset, which has ground truth GRNs. To obtain the Genie3 GRNs, we isolate cells from each cell type (5 in total) from the scMultiSim expression data and infer a cell-type GRN for each; any cell of type i is assigned GRN i. To obtain the CLARIFY GRNs, we take the block diagonal of the gene-level adjacency matrix $A_g$. We compare both CLARIFY and Genie3 to the simulated ground truth using the AUPRC ratio, which quantifies how many fold better the candidate model performs than a random classifier, and has been used in previous work (Pratapa et al. 2020). CLARIFY performs best, with an AUPRC ratio of 1.48 compared to Genie3's 1.40 and CeSpGRN's 1.33, a good result considering CLARIFY's multiple other functions.
CLARIFY latent gene embeddings indicate valid spatial refinement through global structure while also maintaining local structure information

To assess the spatial refinement of the CLARIFY gene embeddings, we used unsupervised clustering. We projected all genes belonging to the GRNs of the first 10 cells, across all datasets. Each point in Fig. 5 represents the lower-dimensional projection of a gene.
First, to assess the global structure, we compared the projections on the first two principal components of the input gene features and the CLARIFY embeddings (Fig. 5a and b, respectively). The input gene features showed virtually no clustering. This was expected, because the gene features were constructed on the GRN connected components with Node2Vec: the initial graph consisted of disjoint GRN components, so no genes from different GRNs were able to share information via the Node2Vec random walks. Hence the scattered projections across datasets.
However, after embedding the gene features with CLARIFY, we observed a tight clustering of genes belonging to the same cell (Fig. 5c; each cell has a distinct color). Moreover, because PCA preserves global structure (inter-cluster distance), we also observed that genes of neighboring cells are clustered together. For example, the proximal cells Cell0, Cell1, and Cell2 are clustered on the far right of the seqFISH plot (b). We also investigated the local structure of the CLARIFY gene embeddings using Uniform Manifold Approximation and Projection (UMAP), which tightly clusters the genes belonging to each cell, far apart from other genes, showing that local structure is preserved. Both the PCA and the UMAP plots confirm that CLARIFY gene representations are spatially refined (indicated by the global structure) and cell-specific (shown by the UMAP local structure).
Lastly, in order to test whether the CLARIFY-refined GRNs are spatially correlated, we again used the Spearman correlation (see Table 3). The baseline was the same as in the cell-level heatmap experiment, where each entry represented the Euclidean distance between a pair of cell locations, which encapsulates the spatial distribution of the cells. Since each cell is now associated with the adjacency matrix of its corresponding GRN, we tested whether the adjacency matrices of the GRNs were spatially refined. First, we construct another heatmap/correlation matrix with the same dimensions as the cell-by-cell analog, in which each (i, j) entry represents a distance between the adjacency matrices of GRN i and GRN j. Matrix distance was measured using the Frobenius norm, $\|A\|_F = \sqrt{\sum_{i,j} |a_{i,j}|^2}$, or alternatively the Euclidean distance on the flattened matrix. We calculated each of these pairwise matrix comparisons and organized them into a heatmap correlation matrix. This was done for both the initial gene adjacencies (inferred by CeSpGRN) and the CLARIFY-refined gene adjacencies. Finally, analogous to the cell-level experiment, we computed the Spearman correlation in two cases: the initial adjacency versus the location distribution, and the CLARIFY adjacency versus the location distribution; in each case we computed the scores either over the entire heatmap matrix or over the block diagonal only. Understandably, there was considerable sparsity in the entire matrix, and the block-diagonal entries better represented the cell-cell communities. Across all correlation comparisons along the block diagonal (and for both Euclidean and Frobenius distances), there was an increase in correlation with the spatial distribution when using the CLARIFY-refined adjacency (block-diagonal correlation coefficient of −0.0069 for CeSpGRN versus 0.2079 for the CLARIFY-refined GRNs). For comparisons using the entire matrix, the increase was smaller, which can be explained by the sparsity of the data (correlation coefficient of −0.0055 for CeSpGRN versus 0.0766 for the CLARIFY-refined GRNs).
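A sketch of the pairwise GRN distance computation (Frobenius norm via numpy):

```python
# Sketch: pairwise Frobenius-norm distances between per-cell GRN adjacencies.
import numpy as np

def grn_distance_heatmap(grns):               # grns: list of g x g arrays
    n = len(grns)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.linalg.norm(grns[i] - grns[j], "fro")
    return D
```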
In summary, these results, including (i) unsupervised clustering experiments that indicated both global spatial patterns while maintaining local structure and (ii) the Spearman correlation experiments that quantified increase in spatial correlation after CLARIFY refinement, support our claim that CLARIFY is able to spatially refine gene regulatory networks.
Conclusion
We present CLARIFY, a graph autoencoder based method that jointly refines both CCIs and cell-specific GRNs. It is the first method that outputs CCIs and GRNs from the same model. The improvements achieved by our tool point to the importance of joint model inference in the future. Our future work will focus on using these regulatory inference tools for problems like the characterization of the tumor microenvironment, or the interplay between tumor cells and immune cells. Since the study of CCIs is still in its infancy, much remains unknown, and some common assumptions need to be made while designing computational models. Here, we made the assumption that the GRNs of cells which are spatially close are similar. As more knowledge is gained on the spatial landscape of GRNs, the CLARIFY model can be modified to accommodate new information. | 2023-07-01T06:16:10.010Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "0622b7f2b4261c2b0ffbf4c0b2c77a72ec86db69",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "16515802a0f6c09e77d134d82fdb1cc94bd57e52",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
225703854 | pes2o/s2orc | v3-fos-license | Original Article Effect of Yoga on Memory in Elderly Women
Methods: This experimental study was a randomized, controlled clinical trial conducted in 2018. Two elderly day care centers in Yazd city, in central Iran, were selected and randomly assigned to control and intervention groups. Then, according to the inclusion criteria, eligible people were selected from the centers and enrolled in the study. Participants were 58 elderly women who were assigned to control (n = 29) and intervention (n = 29) groups. Yoga exercises were held for 2 months (three 1-h sessions a week) for the intervention group. The Wechsler Memory Scale was completed for both groups before and after the intervention. Data were analyzed with SPSS using descriptive and inferential statistics. Results: The mean memory score in the intervention group was 77.7 ± 17.8 before the intervention and reached 86.4 ± 17.3 after the intervention, a statistically significant difference (p < 0.05); in the control group, no significant difference was observed. In the intervention group, the mental control, logical memory and visual memory subscales increased significantly (p < 0.05), but there was no significant difference in the other subscales. There was no significant difference in any of the subscales in the control group (p > 0.05). Conclusion: To improve the memory of the elderly, physical activity such as yoga exercise can be helpful. The elderly can routinely practice these exercises in elderly care centers.
and synthesis of neurotransmitter receptors (2) and the volume of white matter in the brain decreases (3). In fact, in many countries, decline in cognitive function in elderly people has become an important health care issue. If we cannot design effective preventive measures to prevent it from starting or progressing among the elderly, the number of people with mild cognitive impairment or even dementia will increase due to the rising population of the elderly (4). Based on evidence on pharmacological interventions, in addition to harmful side effects, drugs do not reduce cognitive impairment or progression towards dementia (5). However, various therapeutic approaches, such as psychotherapy interventions (6), cognitive exercises (7) and physical activity and exercise (8), are available to slow down age-related decline in cognitive function. Physical activity and exercise is one of the interventions that have recently drawn more attention.
As the aging population grows, the incidence of age-related cognitive disorders and interest in the role of physical activity and exercise in improving the cognitive function of older people have increased (9). Yoga refers to a set of physical exercises (Asanas), controlled breathing exercises (Pranayama), and relaxation and meditation exercises (Savasana) (10). According to the philosophy of yoga, mankind is rich in energy and happiness; due to the constraints we create in our minds, we also create obstacles to these energies and cause illness and disorder in the body and mind. Yoga is a way to reach this source of internal energy, and therefore this practice is one of the most important ways of preventing many diseases at this age (11). The close relationship between mind and body has long been clear, and yoga contributes to quality of life and general health by adjusting the cognitive, nervous, immune and psychological systems, modulating the body's autonomic nervous system, increasing bodily resistance and physical stability, and modulating the immune system. Physical and breathing yoga exercises lead to muscle flexibility and strength and improve the function of the hormonal system, circulation and oxygen uptake. In addition, the meditation and relaxation components of yoga stabilize the autonomic nervous system and, by helping to control emotions, cause the person to feel healthy (12). Eyre et al. reported improved memory function when examining the effect of yoga on memory in the elderly (13), and a study by Kasai et al. showed that Tai Chi improves the cognition of elderly women with mild cognitive impairment (14). Sharma et al. also reported an effect of yoga on cognitive improvement in depressed people (15). Irandoust likewise noted that aerobic exercise and yoga had a positive effect on the overall memory and dynamic balance of elderly men (16). Given that very few studies have examined the short-term effects of exercise on the memory of elderly people (17), that most studies conducted abroad involve people with cognitive impairment and dementia, and that in Iran much of the research has addressed the impact of yoga on the memory of age groups other than the elderly (with the studies on the elderly investigating the effect of Pilates on memory), the purpose of this study was to investigate the effect of yoga exercise on the memory of elderly women.
Study design and participants
The present experimental study was a randomized, controlled clinical trial conducted in 2018. The required sample size was calculated as 58 (29 in the intervention group, 29 in the control group) based on the study of Marandi et al. (18), a 95% confidence interval, 80% power, a minimum difference of 2 units, and 20% attrition. Two elderly day care centers in Yazd were selected and randomly assigned to the control and intervention groups. Then, according to the inclusion criteria, 29 eligible elderly women were selected from each center and enrolled in the study. Participants in the control group were matched to the intervention group by BMI, education level and marital status.
Inclusion criteria were age over 60 years; no physical problems such as pelvic replacement, Parkinson's disease, or vertigo severe enough to prevent performing the exercises; not having taken any hypnotic drug during the past month; and the approval of a trusted physician for performing yoga exercises. Exclusion criteria were cognitive problems (Alzheimer's disease), development of progressive physical and mental problems during the exercise program, sensory problems (deafness), mental problems (mental retardation), absence from more than three exercise sessions, and transfer to another day care center. All participants in both groups completed the exercise program without any attrition.
Instrument
The measurement instrument in this study was the Wechsler Memory Scale, designed for assessing memory and learning abilities in people aged 16-89 years. The scale consists of seven subscales: 1) personal awareness of personal and daily issues; 2) knowledge of time and location (orientation); 3) mental control; 4) logical memory; 5) repeating figures forward and backward; 6) visual memory; and 7) associative learning. The raw score of the respondent is obtained by adding the sum of the seven subscales to the correction factor for age and memory. The reliability of this scale has been reported to be 0.81 (16).
Intervention program
The intervention group participated in an 8-week exercise program (three 1-h sessions a week) under the supervision of an experienced yoga instructor (yoga exercises: warm-up, tensile and rotational movements, physical and breathing movements, and relaxation). The program was based on the protocol of Janizadeh, Badami and Torkan and included the following movements: bending the toes and stretching the ankles, rotating the ankles, bending the elbows, bending the knees, rotating the knees, lifting the legs, hand punching, rotating the shin, bending the wrists, rotating the wrists, rotating the shoulders, moving the neck, twisting the abdomen, Savasana, reverse Savasana, shin lock position, simple screw position, twist of spinal propulsion, hand grinder, canoe, cat position, hand lifting position, palm position, palm movement position, back rotation position, and back stretch position.
Ethical considerations
The present study's protocol was approved by the Ethics Committee of Shahid Sadoughi University of Medical Sciences (IR.SSU.SPH.REC.1396.45) and registered as IRCT20190103042225N1 in the Iranian Registry of Clinical Trials. Necessary coordination with the Welfare Organization was also made. Potential participants were selected in the elderly day care centers. A meeting with the participants was held and, before the intervention, the purpose and procedure of the study were explained to them by the researcher. The participants were assured that all information obtained from this study would be kept confidential, and they provided informed written consent for voluntary participation in the study.
Data analysis
Data were entered into SPSS. Descriptive data were analyzed using measures of central tendency and dispersion, and inferential analyses used the paired t-test and the Wilcoxon test. The significance level (p) was set at < 0.05.
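For illustration only (the authors used SPSS), the same two tests can be run in SciPy on placeholder scores:

```python
# Illustrative sketch: paired t-test and Wilcoxon test on hypothetical
# before/after memory scores (not the study's actual data).
from scipy.stats import ttest_rel, wilcoxon

before = [70, 82, 65, 90, 77]     # placeholder pre-intervention scores
after = [78, 88, 70, 95, 80]      # placeholder post-intervention scores

t_stat, p_t = ttest_rel(after, before)
w_stat, p_w = wilcoxon(after, before)
print(p_t < 0.05, p_w < 0.05)     # significance at the 0.05 level
```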
Results
In this study, 58 elderly women from Yazd's elderly day care centers participated and were assigned to control (n = 29) and intervention (n = 29) groups, with a minimum age of 60 and a maximum age of 89 (mean: 68.76 ± 7.24) years. In the intervention group, the mean memory score before the intervention was 77.7 ± 17.8 and after the intervention reached 86.4 ± 17.3, a statistically significant difference (p < 0.05); in the control group, no significant difference was observed. In the intervention group, the mental control, logical memory and visual memory subscales increased significantly (p ≤ 0.05), but there was no significant difference in the other subscales. There was no significant difference in any of the subscales in the control group (Table 2).
Discussion
The purpose of this study was to investigate the effect of yoga exercises on the memory of elderly women. As our study showed, the intervention group experienced significant improvements in memory, mental control, and logical and visual memory after the intervention compared to before it. However, there were no significant changes in general memory, orientation, figure repetition, or associative learning.
There are many open questions about how physical activity affects memory, and the underlying mechanisms have not yet been adequately elucidated, but it is assumed that these effects occur alongside certain changes in the body. Human research is limited for ethical reasons, but many animal studies have been done on the underlying physical and mental changes and have demonstrated increased brain volume, angiogenesis, neurogenesis, and synaptogenesis. Colcombe et al. investigated the effect of synaptogenesis (increased synapses and neurotransmitters) on the human brain and observed that aerobic exercise affected the volume of gray matter in the brain (20). Kramer et al. argued that even relatively short exercise programs can increase gray matter in the frontal and temporal lobes and prevent the age-related reduction in brain size (21). Other studies on angiogenesis showed that blood capillaries and blood flow increase in the brain, especially in the hippocampus, as a result of physical activity. Investigations of light and heavy exercise in adults show that more active people have denser and more numerous blood capillaries in the brain, as confirmed by Bullitt et al. (22).
Besides, in people aged 65 and over, as little as two days of leisure-time physical activity showed a significant positive correlation with delayed development of Alzheimer's disease (23). According to researchers, physical activity delays age-related memory loss (24). Evidence suggests that physical activity can improve mental and cognitive functions and also play a pronounced preventive role against decline in cognitive function (25, 26). In fact, it can be argued that the yoga exercises in the present study enhanced brain formation, prevented deterioration of memory, and improved memory in our participants due to the physiological and neurological effects of physical activity on the nervous system of the brain, neurotransmitters and the cerebral circulation. In addition, because exercise and physical activity, especially yoga, which involves the mind and mental relaxation, can reduce distress and worry, it can also improve memory and enhance learning.
The results of this study are comparable to those of the study by Brenes et al., which reported a positive effect of yoga exercise on the memory of patients with mild cognitive impairment and dementia (27). A study by Eyre et al. examined the role of yoga in improving the memory of people over the age of 55, concluding that yoga interventions are useful for improving cognitive function in the elderly (13). The study of Joolaei et al. also suggested a positive effect of Pilates on the memory of the elderly (28). Brooks et al., in their study of ten minutes of aerobic exercise (cycling) on students' long-term and short-term memory, observed no significant difference between the control and test groups. Differences in the type of participants, the type of aerobic activity, its duration, and the measurement tools may explain the inconsistency between the findings of the present study and those of other studies (29).
Conclusion
Elderly people are more predisposed to psychological problems due to the difficulties of this period of life, and attention should be directed to this age group. As observed in most research on exercise, physical activity and the memory of elderly people, and in the current study, yoga had a positive effect on improving the memory of the elderly. Therefore, the elderly, as a group at day care centers or even individually at home, could perform these exercises as a non-pharmacological psychological approach, along with other methods, to reduce mental health problems.
Study limitations
Limitations of this study include the lack of access to men, because all the participants were recruited from women's elderly day care centers, and the fact that the study was conducted in only two elderly day care centers in Yazd, so it was not possible to randomize individual participants. These limitations should be considered when applying the results.
Conflict of interest
The authors of this article declare no conflicts of interest. The authors also thank all those, and especially the elderly, who assisted us in carrying out this study. | 2020-07-02T10:38:50.401Z | 2020-06-10T00:00:00.000 | {
"year": 2020,
"sha1": "fb6257de1702ab35b13edcfd82882c3b76014f05",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/ehj.v6i1.3409",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d4fa99e9453304a39abe18f29653639e37fa2e4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17620972 | pes2o/s2orc | v3-fos-license | A note on the runtime of a faulty Hamiltonian oracle
In these notes we show that it is impossible to obtain a quantum speedup for a faulty Hamiltonian oracle. The effect of dephasing noise on this continuous-time oracle model was first investigated in [1]. The authors consider a faulty oracle described by a continuous-time master equation that acts as dephasing noise in the basis determined by the marked item. Their analysis focuses on the implementation with a particular driving Hamiltonian. A universal lower bound for this oracle model, which rules out a better performance with a different driving Hamiltonian, has so far been lacking. In this note, we derive an adversary-type lower bound which shows that the evolution time T has to be at least of the order of N, i.e. the size of the search space, when the error rate of the oracle is constant. For the standard quantum oracle model this result was first proven in [2]. This note can be seen as an extension of their result to the continuous-time setting.
I. INTRODUCTION
The Hamiltonian oracle model [1] can be seen as a continuous-time analogue of the unstructured search problem, also known as Grover's problem. In unstructured search, the task is to find one marked item, commonly labeled w, out of N possible items. It is known that on a classical computer, on average, at least O(N) queries to the oracle are needed to find the marked item. One of the major breakthroughs in the search for quantum algorithms was that this bound could be beaten on a quantum computer: Lov Grover showed that there exists a quantum algorithm which only queries the oracle O(√N) times [2]. It can be shown that this quadratic speed-up is optimal [3]. Hence, no algorithm can outperform Grover's search for this problem.
However, the investigation of Grover's algorithm in the presence of noise [4-7] has shown that this quadratic speedup is very fragile. In quantum query algorithms, two classes of noise models have been considered. One class considers coherent errors [8, 9], whereas the other models errors in terms of either dephasing or bit-flip errors [4-7, 10]. For the latter class, the quadratic speed-up of Grover's algorithm vanishes and the runtime assumes a linear scaling [4]. Regev and Schiff have proven that no other query algorithm with access to a dephasing oracle can outperform this scaling [11].
The effect of dephasing noise on the continuous-time analogue of Grover's algorithm in the Hamiltonian oracle setting has also been investigated by Shenvi et al. [4]. Similar to the discrete case, the authors found that for a constant error rate the quadratic speed-up over the best classical solution vanishes. This was shown by direct analysis of a specific quantum algorithm subject to an appropriately chosen noise model. The question that remained open was whether this performance of the continuous-time (Hamiltonian oracle) algorithm found by the authors is in fact optimal, i.e. whether no other algorithm could perform better. We show, as could be expected, that this is indeed the case by extending the proof of Regev and Schiff to the Hamiltonian oracle setting.
In [4], Shenvi et al. considered the effect of phase fluctuations on the query term of the Hamiltonian model. The authors showed that such a fluctuating term leads to dephasing of the density matrix. Such an effect is best described by a continuous-time master equation of Lindblad form, whose most general form is given by [12]:

$$\dot{\rho} = -i[H, \rho] + \sum_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{L_i^\dagger L_i, \rho\right\} \right) \quad (1)$$

Here, the Hamiltonian H drives the coherent evolution, whereas the Lindblad operators $L_i$ can lead to loss of coherence and damping.
II. FAULTY HAMILTONIAN ORACLE MODEL
The general Hamiltonian oracle model can be described as follows: rather than applying a sequence of unitaries, as is done in the circuit model of quantum computation, one considers the evolution of a quantum state subject to the Schrödinger equation

$$i\,\frac{d}{dt}\,|\psi(t)\rangle = H(t)\,|\psi(t)\rangle.$$

The computation is encoded in the Hamiltonian H(t), which is allowed to vary in time. The search problem is encoded in terms of a Hamiltonian oracle, a term present in the Hamiltonian. It is important to note that even though we are allowed to choose particular Hamiltonians H(t) that realize the quantum algorithm, we do not have control over the term which corresponds to the Hamiltonian oracle.
For the unstructured search problem we consider the N-dimensional Hilbert space, where the basis states $\{|k\rangle\}_{k=1,\ldots,N}$ label the items in the search space of size N. The task in the unstructured search problem is to find a single marked item, which we label $|w\rangle$ and also refer to as the winner. The general goal is to construct a Hamiltonian that drives the evolution towards the state $|w\rangle$. In general such a Hamiltonian is of the form

$$H(t) = H_w + H_D(t).$$

Here the projector on the winner, $H_w = E\,|w\rangle\langle w|$, encodes the Hamiltonian oracle. The actual computation, over which we have control, is encoded in the driving Hamiltonian $H_D(t)$.
A particular driver that solves the unstructured search problem is given by the projector onto the superposition of all basis states in the search space. This corresponds to the choice $H_D = E\,|s\rangle\langle s|$, where $|s\rangle$ is the coherent 'mixture' of all basis states,

$$|s\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |k\rangle.$$

The overlap between the winner and the mixture is $\langle w|s\rangle = N^{-1/2}$. For such a driving Hamiltonian it was shown [1] that the time needed to generate constant overlap with the winner, starting from the coherent mixture, scales as $t = O(\sqrt{N}\,E^{-1})$. Moreover, it was shown that no other choice of driver $H_D(t)$ can outperform this scaling.
The analysis of this problem assumes a perfect implementation of the oracle Hamiltonian $|w\rangle\langle w|$. However, in a realistic application one would expect the oracle to be subject to some form of noise. Let us assume that the magnitude of the oracle Hamiltonian is subject to small fluctuations, that is, the oracle is of the form

$$H_w(t) = \big(E + \xi(t)\big)\,|w\rangle\langle w|,$$

where $\xi(t)$ is a stochastic variable for which the Markov assumption holds. This variable satisfies $\int_0^{\pi} \xi(t)\,dt = \epsilon$, where $\epsilon$ is distributed according to a Gaussian distribution with variance s. As was shown in [4], such a fluctuating term in the oracle model leads to dephasing in the basis determined by the oracle with rate $\Gamma = \frac{s^2}{2\pi}$. We therefore state the noisy Hamiltonian oracle model in terms of a dephasing master equation of the form (1) with a single dephasing Lindblad operator $L_w = \sqrt{\Gamma}\,|w\rangle\langle w|$. The full noisy oracle model describes the evolution of a density matrix $\rho^w_t$ according to

$$\dot{\rho}^w_t = -i\left[H(t), \rho^w_t\right] + \Gamma\left( |w\rangle\langle w|\,\rho^w_t\,|w\rangle\langle w| - \frac{1}{2}\left\{|w\rangle\langle w|, \rho^w_t\right\} \right),$$

where the coherent evolution is again given by the error-free Hamiltonian $H(t) = H_w + H_D(t)$.
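For intuition (not part of the derivation), this master equation can be integrated numerically with QuTiP; N, E = 1, and Γ below are arbitrary illustrative values, and the Lindblad operator follows the dephasing form assumed above:

```python
# Sketch: simulate the dephasing Hamiltonian oracle model with QuTiP.
import numpy as np
from qutip import basis, mesolve, Qobj

N, Gamma = 16, 0.1                            # illustrative size and rate
ket_w = basis(N, 0)                           # marked state |w>
ket_s = Qobj(np.ones((N, 1)) / np.sqrt(N))    # coherent mixture |s>

H = ket_w * ket_w.dag() + ket_s * ket_s.dag() # H = H_w + H_D with E = 1
L = np.sqrt(Gamma) * ket_w * ket_w.dag()      # dephasing Lindblad operator

tlist = np.linspace(0.0, 50.0, 200)
result = mesolve(H, ket_s * ket_s.dag(), tlist,
                 c_ops=[L], e_ops=[ket_w * ket_w.dag()])
# result.expect[0]: overlap with the marked state |w> over time
```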
We will have to compare the evolution of the system where the oracle is present with the evolution in the absence of the oracle, to see how much progress is made towards achieving the goal, i.e. generating sufficient overlap with the target state $|w\rangle\langle w|$.
To this end we also state the evolution in the absence of the oracle, which is given by

$$\dot{\rho}^0_t = -i\left[H_D(t), \rho^0_t\right].$$

Note that since no oracle term $H_w$ is present, we assume that this evolution is not subject to noise, and hence the system evolves unitarily under the driver $H_D(t)$ alone. Unlike the evolution subject to the noisy oracle, the evolution in the absence of an oracle retains the purity of a pure initial state.
III. RUNTIME LOWER BOUND
We now proceed to derive the lower bound on the runtime to find the marked state when we can make use of the noisy oracle.
We find that the noisy oracle with a constant dephasing rate Γ cannot yield a quantum speed-up over the classical bound. We can state our main result as follows.

Main result: Every driver Hamiltonian that finds the marked state $|w\rangle$ with probability $p > 2^{-1/2}$ has to evolve on average for a time T at least of the order stated in (8).

The strategy for showing this is the following: first we construct a progress measure which has to be larger than O(N) after the evolution time T of the algorithm. We then derive an upper bound on the growth rate of the progress measure. From this we can infer the bound on the runtime of the algorithm.
We compare the evolution of the state that evolves according to the Hamiltonian oracle with the evolution of the system where no oracle is present. In order to distinguish between the two cases, we need to define a progress measure. A suitable progress measure can be defined from the Frobenius norm difference between the two states; we write

$$F^w_t = \left\| \rho^w_t - \rho^0_t \right\|_F^2.$$

Recall that the Frobenius norm is defined as $\|A\|_F = \sqrt{\mathrm{tr}(A^\dagger A)}$. Since we are interested in the performance of the algorithm for an arbitrary marked item $|w\rangle$, we need to consider the (unnormalized) average over all marked items, and we define the progress measure as $F_t = \sum_w F^w_t$.

a. Lower bound to the progress measure: The lower bound to the progress measure after time T is obtained from the following argument. For the algorithm to be successful, we want to find the state $|w\rangle$ with at least a fixed probability p after the algorithm has completed. To this end, the state $\rho^w_T$, which has evolved for time T subject to the oracle (5), has to differ by a constant in trace norm from the state $\rho^0_T$, which evolved in the absence of the oracle. The trace norm of an operator A is defined as $\|A\|_{\mathrm{tr}} = \mathrm{tr}\sqrt{A^\dagger A}$. This distance has an operational interpretation and indicates the best statistical distinguishability of the two states by quantum measurements [13]. Recall that the evolution without the oracle preserves purity; we therefore know that $\rho^0_T = |\varphi_T\rangle\langle\varphi_T|$ if we started, without loss of generality, in some pure state $|\varphi_0\rangle\langle\varphi_0|$. A well-known bound [13] relates the trace distance between two quantum states to their fidelity, which allows us to bound $F^w_T$ from below. Since $\mathrm{tr}[(\rho^0_T)^2] = 1$ and $\mathrm{tr}[(\rho^w_T)^2] \geq 0$, we know that after time T the value of $F^w_T$ has to be bounded below by a constant; the final bound is obtained from (12). After summing over all marked items, the average progress measure is bounded by $F_T \geq \mathrm{const} \cdot N$, which is inequality (14).

b. The growth rate of the progress measure: We now have to determine how long it takes for the progress measure to reach this value, and we compute a bound on the rate at which it increases. So we compute $\frac{d}{dt} F^w_t$ and note that the dependence on the driver Hamiltonian $H_D(t)$ vanishes. This is because the evolution generated by the driver in the oracle model cancels with the evolution of $\rho^0_t$. The evolution equation for the density matrix $\rho^w_t$ then depends only on the winner $|w\rangle\langle w|$. The other relevant projector is given by the pure state $\rho^0_t = |\varphi_t\rangle\langle\varphi_t|$, which has evolved in the absence of the oracle.
Let us for convenience first consider the two-dimensional subspace spanned by the non-orthogonal vectors |w⟩, |ϕ_t⟩. We can introduce two orthogonal basis vectors |w⟩, |w⊥⟩ that span the same space, so that we can write |ϕ_t⟩ = f |w⟩ + √(1 − |f|²) |w⊥⟩ with f = ⟨w|ϕ_t⟩. We proceed to complete these states with an orthonormal basis supported only on the complement of this two-dimensional space.
The resulting basis is {|w⟩, |w⊥⟩, |3⟩, . . ., |N⟩}. To simplify the notation, we define the matrix element x ≡ [ρ^w_t]_{w,w⊥} in this basis. We evaluate the contributions to the derivative of the progress measure F^w_t in Eq. (16) in this basis. In the two terms that depend on both ρ^0_t and ρ^w_t, the only contribution from ρ^w_t comes from the subspace spanned by |w⟩, |w⊥⟩. If we consider the remaining summand that depends only on ρ^w_t, we obtain a contribution supported on the complement of this subspace. Recall that we want to find an upper bound on the evolution of the progress measure. Therefore, we can only increase the bound on the progress measure by assuming that the state ρ^w_t is supported only on the two-dimensional subspace, and we therefore set

$$\sum_{k=3}^{N} \left| [\rho^w_t]_{w,k} \right|^2 = 0 . \qquad (21)$$

Now the derivative of F^w_t depends only on the single matrix element x, and with the variables x and f defined above, Eq. (16) can be rewritten accordingly. It is easy to see that the resulting right-hand side becomes maximal for an optimal choice of x. Note that |ϕ_t⟩ is a normalized state. We therefore have that the sum over all winners is bounded, as stated in inequality (23). Integrating inequality (23) with the initial condition F_0 = 0, we obtain (24). Together with inequality (14), this leads to the bound on the minimal evolution time T as stated in the main result (8).
When considering a fixed error rate Γ, we observe that the previous square-root scaling in the database size is reduced to a linear scaling, which is also what happens for the standard oracle model of quantum computation. The authors of [4] also considered what happens when one allows for an error rate that decreases with the size of the database, i.e., Γ = αN^{−2δ}, where both α and δ are positive constants. With this error rate the runtime of the noisy Grover algorithm scales as T = O(N^{1−2δ}), as long as δ ≤ 1/4. The bound in the main result reproduces exactly this scaling of the runtime. However, for δ > 1/4 the actual bound for the coherent evolution, T = O(N^{1/2}), has to be considered, since the bound given for the noisy oracle ceases to be tight.
IV. CONCLUSIONS
In conclusion, we have recovered the bound on the runtime of unstructured search, which holds for the standard noisy oracle model, also in the noisy Hamiltonian oracle model framework. With a constant dephasing error rate the quantum speed-up breaks down, and the runtime reduces to the known classical result of unstructured search. The techniques used here are very much in the spirit of the original proof [1] of the noise-free Hamiltonian oracle model. The major difference is the noisy evolution described by the dephasing master equation and a new progress function, which uses the Hilbert-Schmidt norm between two density matrices, as opposed to the standard L²-norm between two pure states.
| 2014-12-09T05:12:43.000Z | 2014-04-08T00:00:00.000 | {
"year": 2014,
"sha1": "85f3d86f8a5422edb5234023de44b334050d17f8",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/89459/1/PhysRevA.90.022310.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0229a04cc000999fa41d1701477ad58456e1f9f5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
121130822 | pes2o/s2orc | v3-fos-license | The lubricity of mucin solutions is robust against changes in physiological conditions
Solutions of manually purified gastric mucins have been shown to be promising lubricants for biomedical purposes, where they can efficiently reduce friction and wear. However, so far, such mucin solutions have been mostly tested in specific settings, and variations in the composition of the lubricating fluid have not been systematically explored. We here fill this gap and determine the viscosity, adsorption behavior, and lubricity of porcine gastric mucin solutions on hydrophobic surfaces at different pH levels, mucin and salt concentrations, and in the presence of other proteins. We demonstrate that mucin solutions provide excellent lubricity even at very low concentrations of 0.01% (w/v), over a broad range of pH levels, and even at elevated ionic strength. Furthermore, we provide mechanistic insights into mucin lubricity, which help explain how certain variations in physiologically relevant parameters can limit the lubricating potential of mucin solutions. Our results suggest that solutions of manually purified mucins can be powerful biomedical lubricants, e.g. serving as eye drops, mouth sprays, or as a personal lubricant for intercourse.
Introduction
Body fluids, a class of aqueous lubricants, play an essential role in assisting the rubbing contacts in the human body. Insufficient biolubrication can result in severe clinical problems, such as dry, irritated eyes, vaginal dryness, dryness of the mouth impeding proper speech and mastication, and excessive friction and wear of articulating cartilage surfaces, especially for the elderly and patients with Sjogren's syndrome 1 . Moreover, prolonged contact between an artificial, hard material and soft tissue surfaces in the human body may cause inflammation and tissue damage and thus significant patient discomfort.
To prevent these issues, the use of artificial biolubricants to reduce friction and wear is a promising strategy 2,3 . Mucus is an adhesive substance that is widely secreted in living organisms and protects the lung airways, eyes, gastrointestinal (GI) tract, vagina, and other mucosal surfaces 4 . It is a hydrogel containing water (> 90 wt.%), inorganic salts, mucins, and minor components depending on the source 5 . The main functional component of mucus is mucin, a group of high molecular weight (0.5-20 MDa) and densely glycosylated biopolymers (glycoproteins). More specifically, mucins can be roughly classified into three groups: membrane-bound epithelial mucins, secreted non-gel-forming mucins, and secreted gel-forming mucins, with the latter being the major component of mucus 6 . One of the key characteristics of mucins is their ability to form viscoelastic solutions or gels 7 . These mucin gels act as a chemical and biological barrier towards pathogens, dust particles, and toxins, and provide lubrication and hydration to protect tissues from dehydration, shear stress, and wear damage.
Due to their outstanding performance as biolubricants, the tribological properties of mucin solutions have been widely studied during the past years. For instance, Pult et al. 8 investigated friction between the cornea and the eyelid as well as between the contact lens surface and the eyelid, and the authors indicated that the tribological properties of the eyes were significantly correlated with the quantity and quality of the mucins in the tear film. Winkeljann et al. 9 suggested that purified gastric mucins can reduce the formation of tissue damage on porcine cornea as it can be induced by contact lenses and can serve as a powerful tool in fighting ocular dryness. Mucin-based lubricants typically promote boundary lubrication on both artificial and biological surfaces, i.e. they provide low friction coefficients (μ) ranging from 0.2 to less than 0.01 with different material pairings 10-13 . This is attributed to the mucins and mucinous glycoproteins, i.e. highly surface-adhesive macromolecules, which can adsorb to a wide variety of surfaces. Owing to the adsorption of mucins, hydrophobic (artificial and biological) surfaces are rendered hydrophilic, which facilitates the formation of a lubricating water film on those surfaces; this, in turn, leads to the separation of the load-bearing surfaces under shear force. This hydrated boundary layer further improves lubrication via the hydration lubrication mechanism. In addition, shearing off the adsorbed mucin macromolecules from the substrates can further reduce friction 14 .
Although the studies mentioned above are good examples of how recent research has provided new insights into the lubricating potential of mucin solutions, some of them were conducted using commercially available mucin specimens. However, solutions containing such industrial mucins are poor lubricants, and they also lack the gel-forming abilities and antiviral properties observed for manually purified mucin glycoproteins 9 . Hence, to develop highly functional mucin-based solutions, it is crucial to use manually purified mucins which have maintained their native properties. Meanwhile, the human body is a complex system that exhibits different pH values, proteins, and salts in different organs and body fluids. It is therefore worthwhile to study the tribological performance of the purified mucins under physiologically relevant conditions to extend their potential use in biomedical applications, such as artificial joint fluid, eye drops, personal lubricants, and mouth sprays (Fig. 1). This work is motivated by such considerations. The lubricity of manually purified gastric mucin on hydrophobic surfaces at different pH values, with different proteins, and at different mucin and salt concentrations is investigated, and the potential use of mucin-based solutions in biomedical applications is evaluated.
In this way, a better understanding of the regulating effects of pH, protein content, and mucin and salt concentrations is achieved.
Mucin purification
Mucin purification was conducted as described earlier in detail 15,16 . In brief, fresh porcine stomachs were opened, and food debris was washed off with tap water. Then, crude mucus was collected by scraping the mucosal surface of the tissue. The obtained mucus was pooled and diluted in PBS buffer (10 mM, pH = 7.4) for overnight solubilization. Cellular debris and lipid contaminants were removed from this solubilized mucus via two centrifugation steps (first at 8300 x g at 4 °C for 30 min, then at 15000 x g at 4 °C for 45 min) and a final ultracentrifugation step (150000 x g at 4 °C for 1 h). Afterwards, the mucins were separated from other macromolecules by size exclusion chromatography using an Äkta purifier system (GE Healthcare, Chicago, IL, USA) and an XK50/100 column packed with Sepharose 6FF. Then, the mucin fractions were pooled, dialyzed against ultrapure water, and concentrated by cross-flow filtration. Finally, the concentrated mucins were lyophilized and stored at -80 °C. All purified mucins were exposed to UV light for 1 h for sterilization before use.
Buffer solutions
To avoid introducing artefacts associated with buffering substances, four kinds of buffer solutions with different components were tested. Phosphate buffer was prepared by dissolving
Viscosity measurements
The viscosities of the mucin-based lubricant solutions used in this study were determined on a commercial shear rheometer (MCR 302, Anton Paar, Graz, Austria) using a cone-plate geometry (CP50-1, Anton Paar). For each measurement, 570 µL of the test solution were pipetted onto the stationary plate to fully fill the space between the measuring head and the sample plate.
Measurements were conducted at 21 °C, and the shear rate was varied from 10 to 1000 s⁻¹. The viscosity values shown in Table 1 for each solution represent averaged measurement results acquired at a shear rate of 100 s⁻¹ from three independent samples.
Tribological tests
A steel-on-PDMS tribo-pairing was chosen to evaluate the lubricity of mucin solutions. Steel spheres with a diameter of 12.7 mm (Kugel Pompel, Vienna, Austria) were used as received. PDMS pins were prepared as cylinders with a diameter of 6.1 mm. In detail, PDMS prepolymer and crosslinker (Sylgard 184, Dow Corning, Wiesbaden, Germany) were mixed in a ratio of 10:1. Then, the mixture was placed into a vacuum chamber for 1 hour to remove air bubbles. Afterwards, the solution was poured into a steel mold and cured at 80 °C for 4 h. Both the steel spheres and the PDMS pins were used without further polishing, as they showed low roughness (Sq,steel < 200 nm, Sq,PDMS < 50 nm) when investigated with a laser scanning microscope (VK-X1100, Keyence, Osaka, Japan).
The tribological experiments were performed at 21 °C using the tribology unit (T-PTD 200, Anton Paar) of a commercial shear rheometer (MCR 302, Anton Paar) as described before 22 . In brief, three PDMS pins were mounted into a pin holder and washed with ethanol and ultrapure water.
Then, 600 µL of a lubricant solution were applied to ensure full coverage of the pins. The normal load was chosen to be 6 N, resulting in an average contact pressure of ~0.3 MPa. The sliding velocity was varied from 10⁻⁵ to 10⁰ m/s to probe as many lubrication regimes as possible. For each condition, three independent experiments were carried out using a fresh set of PDMS pins for each measurement.
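As an illustrative sketch of the sweep defined above, the sliding velocities can be log-spaced over the stated range and the friction coefficient obtained as the ratio of friction force to the 6 N normal load; the friction-force values below are placeholders, not data from this study.

```python
import numpy as np

F_N = 6.0                                     # normal load (N), as stated above
speeds = np.logspace(-5, 0, 26)               # sliding velocities (m/s)
friction_force = np.linspace(0.30, 0.06, 26)  # hypothetical friction forces (N)
mu = friction_force / F_N                     # friction coefficient per speed
for v, m in list(zip(speeds, mu))[:3]:
    print(f"v = {v:.0e} m/s -> mu = {m:.3f}")
```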
Adsorption measurements
The adsorption properties of mucins on hydrophobic PDMS surfaces were studied by quartz crystal microbalance with dissipation monitoring (QCM-D). At the beginning of each adsorption test, a pure buffer solution (without any proteins, mucins or salts) was injected at a flow rate of 100 µL/min until a stable baseline was obtained. Afterwards, a mucin-based solution was injected at 100 µL/min for ~30 min to obtain an adsorption curve. The resulting frequency shift (Δf, Hz) and dissipation shift (ΔD) were automatically calculated by the software "qGraph" (3T-Analytik, Tuttlingen, Germany).
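The frequency shift reported by the QCM-D is read throughout this work as a qualitative measure of adsorbed mass. For reference, under the rigid-film Sauerbrey approximation (not invoked explicitly in the original text), the areal mass uptake would follow

$$\Delta m = -C \, \frac{\Delta f}{n},$$

where n is the overtone number and C ≈ 17.7 ng cm⁻² Hz⁻¹ for a typical 5 MHz crystal; for soft, dissipative mucin layers (large ΔD) this relation gives only a rough estimate, which is presumably why the dissipation shift is reported alongside Δf.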
Results and discussion
To evaluate the potential of mucin solutions as biolubricants for human use, several physiological parameters that might affect mucin lubricity need to be considered. We here focus on variations in pH and ionic strength, as those two parameters can be quite different at different body sites. Moreover, we aim at identifying the minimal mucin concentration that still conveys good lubricity.
Choosing the right buffer system
When working with solutions of biological molecules such as mucins, using buffers to control pH and ionic strength is critical, especially if the influence of either parameter on the properties of the solution is investigated. Although phosphate-based buffers are frequently used in mucin tribology [25][26][27] , one needs to be aware that, depending on the material pairing studied, buffer substances may have a significant influence on the experimental outcome. For a steel/PDMS pairing, which is not only used in this study but is regularly employed in the field of biotribology 6, 14, 28-31 , this can indeed be an issue: As shown in Fig. 2(a), the friction coefficient in the boundary lubrication regime drops by almost one order of magnitude when the phosphate concentration in a standard PBS buffer is increased from 10 mM to 500 mM. This can be attributed to the ability of phosphate ions to readily react with the steel surface; this is a frequently reported mechanism to modify the surface of steel components for industrial applications 32,33 . Although such a phosphate-buffered tribology system would still allow for direct comparisons between buffered biopolymer solutions and buffer only, it disturbs the measurement outcome in two ways: First, one main advantage of a steel/PDMS pairing is that all three lubrication regimes are clearly distinguishable; thus, the influence of different (macro)molecular ingredients can be easily identified. However, when using a phosphate-based buffer, the Stribeck curve is less pronounced and the maximal range between 'poor lubricity' and 'great lubricity' is reduced, which makes it harder to detect gradual improvements in lubricity provided by lubricating molecules. Second, a reaction of the phosphate ions with the steel surface might cause secondary effects arising from a changed surface structure or surface chemistry, and this can complicate the interpretation of the results. A material incompatibility between the friction partners and the buffer solution is not the only condition that needs to be considered when selecting a buffer system. Proteins such as mucins should be able to operate over the wide range of pH settings in which they also occur in the human body 34 (BRB, pH range: 1.9-11.0), as presented in Fig. 2(b).
Viscosities of the different mucin-based lubricants
Before we study the tribological performance of mucin solutions at different conditions, we first assess the viscosities of the different mucin-based mixtures. These values are summarized in Table 1 and sorted into groups according to the different parameters (mucin concentration, buffer pH, buffer ionic strength, influence of other proteins) whose influence on mucin lubricity is assessed.
Influence of the mucin concentration
Owing to the complex purification procedure and low production volume, manually purified mucins are relatively expensive. It is therefore useful to determine the effective lubrication range of these mucins. The friction coefficients obtained decrease relative to the pure buffer solution as the mucin concentration is increased from 0.005% to 1.0%, indicating improved lubricity. As displayed in Fig. 3(a), the mucin solution shows superior lubrication properties, with friction coefficients < 0.02 in all three lubrication regimes, even when diluted to a concentration of 0.01%, suggesting that the superlubricity state of purified mucin is quick and easy to achieve.
However, when the mucin was further diluted to 0.005%, the lubricity deteriorated to the level of the pure HEPES buffer. It is worth noting that there is no obvious difference between the friction coefficient curves of the 0.1% and 1.0% mucin solutions. Thus, in this study, adding more mucin beyond this level does not further improve the lubricity.
In the next step, we investigated the adsorption properties of mucin solutions at different concentrations.
According to the results displayed in Fig. 3(b), higher mucin concentrations lead to faster adsorption. Indeed, with increasing mucin concentration, strongly increased adsorption to PDMS is found, as indicated by the drastically increased frequency shifts shown in Fig. 3(b).
Therefore, one can conclude that sufficient mucin adsorption onto the PDMS surface is needed to provide excellent lubricity; this is why the lubricant solution with a mucin concentration of 0.005% failed. Moreover, owing to steric effects, once sufficient mucin molecules are present, additional mucin does not substantially improve the lubricity. Thus, the lower concentration limit for mucin to provide lubricity is 0.01%. As for the upper limit, it depends on the situation, and 0.1% is recommended for normal conditions.
Influence of pH
Due to the broad working pH range in potential applications, the lubrication and adsorption properties of the manually purified mucin at different pH values were evaluated in this study. As can be seen in Fig. 4(a), as the pH increases from 2 to 8, the friction coefficients first decrease and then increase. The mucin solution at pH 4 exhibits the lowest friction coefficients (< 0.01 over the whole speed range), corresponding to the largest frequency shift shown in Fig. 4(b), indicating that pH 4 is the optimum value for lubrication by purified mucin. Indeed, pH 4 is a key point for mucin lubricity: once the pH departs from 4, the mucin solution exhibits relatively higher friction coefficients, indicating deteriorating lubricity. Similar behavior can be observed in the adsorption measurement results displayed in Fig. 4(b). Therefore, the lubrication properties of purified mucin at the sliding interface of steel and PDMS in aqueous conditions can be attributed to its adsorption behavior, which depends strongly on the pH of the lubricant solution.
It has been suggested that the conformation of gastric mucin changes from a random coil to an anisotropic conformation with decreasing pH (at pH ≥ 4) and extends further at pH < 4 38 . Celli et al. 39
Influence of salt concentration
In the next step, the lubricity and adsorption properties of the mucin solution at different salt concentrations were tested. The motivation is that salt ions, especially from NaCl, play a vital role in the regulation of many body functions, and salt is an important part of the body's fluid balance control system. The salt concentration in body fluids can also change with health condition. In order to understand the lubricity of mucin at different salt concentrations, we prepared a series of mucin/NaCl solutions and evaluated their tribological behavior as well as their adsorption properties.
According to the results in Fig. 5(a), manually purified mucin is more sensitive to salt ions than the commercial product 42 . Adding NaCl to the mucin solution causes a noticeable rise in friction coefficients, suggesting that the introduced salt ions impair mucin lubrication. Moreover, dissolving NaCl in the mucin solution does not change the shape or trend of the friction coefficient curves, but merely shifts them. Surprisingly, even at a high salt concentration (500 mM NaCl), purified mucin provides excellent lubricity, with friction coefficients < 0.1 at the various sliding velocities, indicating that purified mucin can still work well at the salt levels of body fluids.
Adsorption measurements of the mucin solution at different salt concentrations were then carried out. Fig. 5(b) presents the frequency shifts as a function of time. In order to better understand the adsorption kinetics, the dissipation shifts are also provided for these tests. The frequency drops while the dissipation increases as the salt concentration is raised from 20 mM to 500 mM. To further study the interaction between mucin and salt ions, additional two-step QCM measurements were conducted, in which the sensor was first exposed to 0.1% mucin solution and then to 20 mM or 500 mM NaCl in 20 mM HEPES buffer. As displayed in Fig. 5(c), injecting 20 mM NaCl has no obvious influence on the adsorbed mucin layer. However, a significant decline in frequency and rise in dissipation can be found after injecting 500 mM NaCl. High ionic strength enhances charge shielding and further changes the number of charged groups available for ionic pairing 43 . It has also been suggested that the range and magnitude of the steric forces between mucin layers decrease with increasing NaCl concentration, which can be attributed to the decrease in Debye length 44 . Mucin undergoes a conformational change from a fully extended state to a collapsed state in salt solutions, due to the electrostatic screening effect and the osmotic pressure in the chains 45 . It has been proposed that the activity of water molecules is limited under the influence of salts and that many electric charges on the protein surface are neutralized 46 . Thus, fewer mucin molecules remain adsorbed and some of the hydration layers are destroyed. As a consequence, higher friction coefficients are observed.
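The screening effect invoked here can be made quantitative; for a 1:1 electrolyte such as NaCl at room temperature, the standard Debye length expression (not given in the original text) reads

$$\lambda_D = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_B T}{2 N_A e^2 I}} \approx \frac{0.304\ \mathrm{nm}}{\sqrt{I\,[\mathrm{M}]}},$$

which gives roughly 2.1 nm at 20 mM but only about 0.4 nm at 500 mM, consistent with the strongly reduced electrostatic repulsion and the chain collapse described above.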
Influence of proteins
It has been reported that lysozyme, amylase, and serum albumin are the main proteins in natural and artificial body fluids, such as tear fluid, saliva, synovial fluid, and vaginal fluid [47][48][49][50] . These three proteins were therefore chosen to investigate their interaction with mucins in body fluids. The friction coefficient versus sliding speed plots of mucin solutions with the three different proteins (BSA, amylase, and lysozyme) are shown in Fig. 6(a). For comparison, the protein-free mucin solution and pure HEPES buffer are also presented. The protein-free mucin solution shows excellent lubricity, giving a relatively stable friction coefficient of around 0.01 across the probed sliding speeds. Higher friction coefficients are obtained when proteins are introduced.
More specifically, when BSA or amylase is added to the mucin solution, the friction coefficients, although all higher than those of the pure mucin solution, still display a noticeable reduction in the mixed lubrication regime in comparison with the pure HEPES buffer solution, while they increase gradually towards the pure-buffer values in the boundary lubrication regime. In contrast to these two mucin/protein solutions, the friction coefficients of the mucin/lysozyme solution decrease when the lubrication regime changes from mixed to boundary lubrication, preserving the excellent lubricity of this solution.
To clarify the relationship between tribology and adsorption, QCM-D measurements were conducted with the three mucin/protein lubricants for ~30 min to evaluate the mucin adsorption behavior in the presence of each protein. Regarding the adsorption kinetics of the mucin/protein solutions displayed in Fig. 6(b), the mucin/BSA solution showed the least adsorption on PDMS while mucin/lysozyme gave the highest adsorption. As indicated by the frequency shifts, the mucin/BSA and mucin/amylase solutions adsorbed onto PDMS quickly and then remained relatively stable over the measurement time. Regarding the mucin/lysozyme solution, although a drastic frequency drop was observed at the beginning, the frequency shift still decreased with time, similar to the trend of the protein-free mucin solution shown in Fig. 3(b), indicating continuous adsorption onto PDMS and resulting in the optimum lubricity among the three mucin/protein lubricants. At neutral pH, i.e. at pH 7, mucin, BSA, and amylase are negatively charged while lysozyme is positively charged 51,52 . The molecular weights of BSA, amylase, and lysozyme are ~66 kDa, ~51 kDa, and ~14 kDa, respectively, all much smaller than that of mucin (0.5-20 MDa).
The smaller protein molecules can diffuse to the substratum more easily and adsorb onto PDMS surfaces faster. It can be assumed that the proteins used in this study adsorbed onto PDMS first and then influenced the mucin adsorption according to their different charges and molecular properties. The negatively charged proteins, i.e. BSA and amylase, adsorbed onto PDMS and blocked the surface, preventing adsorption of the negatively charged mucins through electrostatic repulsion, similar to the adsorption behavior of bovine submaxillary gland mucin (BSM) with BSA reported in a previous study 53 . Compared with the pure mucin solution, a higher frequency shift is found when positively charged lysozyme is introduced into the mucin solution. However, the effective area available for mucin adsorption is reduced by the protein molecules anchored on the PDMS. In addition, the results indicate that the protein adsorbed on the PDMS surface weakens the hydrophobic interactions between mucin and PDMS, although these remain stronger than the electrostatic ones. This might be the reason why mucin alone provides better lubrication than when combined with positively charged lysozyme. Consequently, higher friction coefficients are obtained in comparison with the protein-free mucin solution.
Conclusions
In this work, we have investigated the viscosity, adsorption, and lubricity of manually purified porcine gastric mucin on hydrophobic surfaces. Different pH values, proteins, and mucin and salt concentrations were investigated to mimic different physiologically relevant conditions. The results imply that mucin can offer excellent lubricity even at ultralow concentrations. The lower concentration limit for mucin in aqueous solution to provide lubricity is 0.01%. As for the upper limit, it depends on the application, and 0.1% is recommended for normal conditions. pH 4 is the turning point for the adsorption and lubricity of mucin owing to its conformational change, providing the optimum friction coefficients and adsorption kinetics. Because of the electrostatic screening effect and osmotic pressure, salt ions introduced into the mucin solution adversely affect the adsorption and lubricity of purified mucin. The addition of proteins to mucin solutions increases the friction coefficients; purified mucin shows better lubrication behavior with positively charged proteins than with negatively charged ones. In summary, the results presented here provide theoretical evidence for extensive uses of manually purified mucin solutions in biomedical applications, such as artificial joint fluid for viscosupplementation, eye drops, mouth sprays, and personal lubricants for intercourse.
Conflicts of interest
There are no conflicts of interest to declare. | 2019-04-18T09:19:38.000Z | 2019-04-18T00:00:00.000 | {
"year": 2019,
"sha1": "0af2fc11b2a340f9f3dd9ef1608fcc38930343ac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1904.08648",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f64c3b5ab22edf90bf1721db60f254bbe2f2dc0a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Physics",
"Medicine",
"Chemistry"
]
} |
216532756 | pes2o/s2orc | v3-fos-license | Transient damping method for narrowing down leak location in pressurized pipelines
Numerous leak detection methods have been developed for pipeline systems because of the shortage of water resources, increased water demand, and leak accidents. These methods have their advantages and disadvantages in terms of cost, labor, and accuracy; therefore, it is important to narrow down the location of a leak as easily, rapidly, and accurately as possible. This study applies the technologies based on the execution of a transient event (transient test-based technologies (TTBTs)), and a model is presented for representing the relation between the leak location and the damping of the pressure transient due to the leakage. The model is verified with laboratory experiments in which the leak location can be narrowed down to be less than 10% to 30% of the total pipe length. The model is found to be more effective if the leak location is nearer to the upstream end. In addition, the leak location found by the damping model varies with an approximate absolute error of 2% to 5% of the pipe length. It is suggested that the damping model is suitable for narrowing down and not for finding the leak location, and should be used in combination with other leak detection methods.
INTRODUCTION
Water leakage in pipelines occurs in all water distribution systems due to age, corrosion, or a third party (Zhang et al., 2015), and causes considerable economic loss, such as that related to shortages of drinking and irrigation water. The amount of water leakage in water distribution systems varies widely between different countries, regions, and systems (Puust et al., 2010). Especially in Asia, home of 53% of the world's urban population, the estimated annual volume of leakage in urban water utilities is approximately 29 billion cubic meters; hence, water utilities are losing nearly 9 billion US dollars per year (Asian Development Bank, 2010). Leakage is not just an economic issue; it also has environmental, health, and safety implications (Puust et al., 2010). For example, leakage in pipelines causes ground subsidence, contamination, and sinkholes, which results in damage to the infrastructure (Ali and Choi, 2019). Moreover, leakage possibly influences water quality by introducing contamination into water distribution networks through leaks in low-pressure conditions (Colombo and Karney, 2002). Hence, detecting the existence and exact locations of leaks as quickly and accurately as possible is of utmost importance.
Although various leak detection methods have been developed (such as ground-penetrating radar, acoustic leak detection, and infrared spectroscopy), no single method has been able to satisfactorily meet the operational needs from the perspectives of cost and labor. Hence, a simple, cheap, and reliable method for leak detection would be of great economic value (Pudar and Liggett, 1992). Leak detection by measurement of the pressure in a pipe can be employed in the daily maintenance of pipelines, because manometers are less expensive than flowmeters and can be easily installed at air valves on pipelines. However, a pressure change caused by leakage is too small to be detected, even in a steady flow. Additionally, in cases of low water pressure, it is difficult to detect leaks by capturing the force caused by the pressure change (Chatzigeorgiou et al., 2015).
As a recent solution to the aforementioned problems, transient test-based technologies (TTBTs), in which a transient hydraulic event (such as that caused by a water hammer) is used for leak detection, are attracting interest. TTBTs are expected to offer leak detection methods at a lower cost and with less labor compared to other methods (Meniconi et al., 2011). In a transient hydraulic event, a pressure wave sufficiently strong to be detected can be generated, and only a pressure measurement taking a few minutes (or even less) is required for detecting leaks. After a transient hydraulic event occurs in a pipe, a pressure wave travels repeatedly between both ends of the pipe stretch. The movement of this pressure wave is observed as cyclical pressure transients at an arbitrary point in the pipe stretch. Leakage in a pipeline system will result in an increased damping rate and the creation of new leak-reflected signals within the pressure transients. Most TTBTs have been developed and applied to water pipeline systems using the information contained within these two effects (Duan et al., 2010). Brunone (1999) and Brunone and Ferrante (2001) investigated the effect of leak-reflected signals in the pressure transients, and demonstrated how leaks can be detected by leak-reflected signals in both laboratory and field experimental pipes. While the method adopted was simple and easy, it could not be applied in the case of slight leakage or when noise due to the pipe structure is present (Asada et al., 2019). Inverse transient analysis is a more powerful TTBT method, using both leak-induced damping and leak-reflected signals (Covas and Ramos, 2010; Shamloo and Haghighi, 2010; Vítkovský et al., 2007), and it is theoretically applicable to pipes of any structure or characteristic. However, its computational complexity is generally enormous, and it is important to reduce it, for example, by narrowing down the leak location with other methods in advance. Leak-induced damping is thought to be minimally affected by noise compared to leak-reflected signals; thus, it is effective for leak detection in pipeline systems with a complicated structure (Asada et al., 2019). In addition, it was revealed that, in the case of rapid and complete closing of the valve, the damping rate of the pressure transients is faster, because of increased energy dissipation from the leak, when the leak location is nearer to the downstream end (Asada et al., 2019). In this study, the leak-induced damping is theoretically modelled as a function of leak location by considering energy dissipation from the leak and friction in a pipeline. The effectiveness of the damping model for narrowing down the leak location is demonstrated based on the experimental results.
Damping model
The damping of pressure transients in a pipeline with leakage is represented by means of an exponential law, following the results obtained by Ramos et al. (2004) and assuming that the damping by friction loss and the damping by leakage are mutually independent (Wang et al., 2002), as follows:

$$\Delta H = \Delta H_0 \, e^{-(R + R_L)\, t^*}$$
where ΔH is the change in the piezometric head generated by the water hammer (m), ΔH_0 is the initial change in the piezometric head (m), t is the time (s), t* is the time nondimensionalized by the wave propagation period T (t* = t/T), R is the friction-induced damping coefficient, and R_L is the leak-induced damping coefficient. To calculate the value of R, a numerical simulation would normally be required, because the damping by unsteady friction has to be considered; this is modelled by two conceptually different approaches. In the first, so-called weighting-function-based approach, the unsteady friction is given by a weighted integral of past fluid accelerations (Trikha, 1975; Vardy and Brown, 1995; Zielke, 1968). In the second, so-called instantaneous-acceleration-based approach, it is assumed to be a function of the instantaneous local and convective accelerations (Brunone et al., 1991). In this study, the value of R is instead measured from the experimental pressure transient in the case of no leakage, without resorting to a numerical simulation. Additionally, Meniconi et al. (2014) reported that there is a biunique correspondence between the damping of the pressure transient at any pipe section and the energy dissipation of the entire pipeline system. Therefore, the value of R_L related to the leak location is derived from the energy dissipated from the leak.
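As a minimal sketch of how R can be extracted from a no-leak transient without simulation, one can fit the exponential law above to per-period amplitudes by log-linear regression; all the numbers below are hypothetical placeholders, not measured values from this study.

```python
import numpy as np

# Fit dH = dH0 * exp(-R * t_star) to per-period transient amplitudes.
# t_star is time nondimensionalized by the wave period T; dH_peaks stands
# for the averaged |dH| values of successive periods (placeholder data).
t_star = np.array([0.5, 1.5, 2.5, 3.5])
dH_peaks = np.array([4.8, 3.9, 3.2, 2.6])           # hypothetical amplitudes (m)
slope, intercept = np.polyfit(t_star, np.log(dH_peaks), 1)
R = -slope                                           # damping coefficient per period
print(f"R = {R:.3f}, dH0 = {np.exp(intercept):.2f} m")
```

With a leaking pipe, the same fit returns R + R_L, so R_L follows by subtraction, as described in the narrowing-down procedure below.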
In a single pipeline with total length L (m) and pipe cross-sectional area A (m²), water is supposed to flow at a flow rate Q (m³ s⁻¹) from an upstream reservoir to a downstream end valve, for which the continuity equation is $Q = Q_{\mathrm{up}} = Q_{\mathrm{down}} + Q_{\mathrm{leak}}$, where Q_up is the flow rate upstream from the leak, Q_down is the flow rate downstream from the leak, and Q_leak is the rate of leakage volume in a steady flow. A leak is assumed to exist at a point x_L*·L from the upstream reservoir (0 ≤ x_L* ≤ 1), where x_L* is the distance to the leak nondimensionalized by the total length L. A wave with head change ΔH, generated by rapidly closing the valve, propagates through the pipeline at wave speed c (m s⁻¹); the head change decays because of the leakage during the transient event. The period is the time taken to propagate two round trips through the pipe in the case of a reservoir-pipeline-valve system (T = 4L/c). The interpretation of transient conditions is simplified in much the same manner as for models of spring oscillations, by measuring displacements with respect to the spring's equilibrium position (Karney, 1990). Thus, the energy dissipation under transient conditions can be considered using the change ΔH with respect to the steady-flow condition. For the case in which ΔH is zero at the leak, the energy in the pipeline is preserved because the change in kinetic and elastic energies is balanced, as in the case of no leakage. Therefore, the change in energy due to the leak is derived for the case in which the change in the piezometric head at the leak is ΔH, as shown in Figure S1.
In the resulting expression (Equation (3)), ΔE is the change in energy per unit time (N m s⁻¹), ρ is the fluid density (kg m⁻³), g is the gravitational acceleration (m s⁻²), and ΔQ_leak is the change in Q_leak when the wave with head change ΔH is present at the leak. The rate of the leakage volume is a function of the pressure head at the leak and the size of the leak, and Q_leak + ΔQ_leak is expressed by the orifice equation as

$$Q_{\mathrm{leak}} + \Delta Q_{\mathrm{leak}} = a\sqrt{2g\,(H_L + \Delta H - z_L)}$$

where a is the product of the discharge coefficient and the cross-sectional area of the leak hole (m²), H_L is the steady-state piezometric head at the leak (m), and z_L is the elevation at the leak (m). Substituting Equation (4) into Equation (3) gives Equation (5), which can be simplified to Equation (6), where h_L is the steady-state pressure head at the leak (m). The value of ΔH at the leak is influenced by line packing through the effect of friction, by which ΔH continues to rise to the maximum head after the wave passes until it is reflected back (Duan et al., 2012; Liou, 2016), whereas ΔH would reach the full Joukowsky head immediately in the frictionless case. The full Joukowsky head is the change in the piezometric head converted from the flow rate by closing the valve, and can be expressed as (Joukowsky, 1904)

$$\Delta H_J = \frac{c\,Q}{g\,A}$$

where ΔH_J is the full Joukowsky head (m). Liou (2016) presented an analytical formula for ΔH at the downstream valve during a half period (0 ≤ t* ≤ 0.5). However, this underestimates the actual ΔH because it neglects the line packing caused by unsteady friction, which cannot be derived analytically.
Thus, considering the line packing due to steady and unsteady friction and a smooth variation of ΔH, a formula is presented here for ΔH at the downstream valve by simply assuming power-law variations (Equation (8)), where α is the rate at which the initial head change increases to the maximum at the downstream valve (0 ≤ α ≤ 1), and β is the ratio of the maximum head change at the downstream end to ΔH_J (β ≥ 1). The variation in ΔH at the leak during a period (0 ≤ t* ≤ 1) is formulated based on Equation (8). In a transient event, the wave with ΔH reflects in antiphase at the upstream reservoir and in phase at the downstream valve. For the case of x_L* ≥ 0.5, the reflected wave from the downstream valve reaches the leak while ΔH at the leak is still varying towards zero, whereas for the case of x_L* < 0.5 it reaches the leak after ΔH at the leak has decayed to zero, as shown in Figure S2. Thus, ΔH (0 ≤ t* ≤ 1) can be classified into two types, according to whether x_L* ≥ 0.5 or x_L* < 0.5. For x_L* < 0.5, ΔH is formulated for the duration of a half period (0 ≤ t* ≤ 0.5) as Equation (9); for x_L* ≥ 0.5, as Equation (10). The value of ΔH (0.5 ≤ t* ≤ 1) is equal in absolute value and opposite in sign to the value for (0 ≤ t* ≤ 0.5). Figure 1 shows the variation in ΔH at the leak during a period, calculated using Equations (9) and (10), for the case of ΔH_J = 5 m, α = 0.1, and β = 1.2, using x_L* = 0.2 for x_L* < 0.5 and x_L* = 0.8 for x_L* ≥ 0.5. The total energy E in the pipe, on the left side of Equation (6), is expressed in elastic energy form (Karney, 1990; Meniconi et al., 2014) and can be derived from the work done on the water and pipe by ΔH, as shown by the shaded area in Figure 2.
In the resulting expression (Equation (11)), E_p and E_w are the Young's moduli of the pipe material and water (N m⁻²), respectively, D is the pipe diameter (m), b is the pipe wall thickness (m), and ε_p and ε_w are the strain rates of the pipe and water, respectively. Substituting Equation (11) into Equation (6) gives Equation (12). As can be seen clearly from Figure 1, the first and third terms in Equation (12) become zero, and the second term only needs to be integrated for t* ranging from 0 to 0.5 and then doubled.
By solving Equation (13) for ΔH_J using Equations (9) and (10), the formula for the damping of ΔH due to the leakage is obtained as Equation (14).
Collection of experimental data
An experimental test was conducted in a pipeline at the Institute for Rural Engineering, Tsukuba, Japan, to collect pressure transient datasets with simulated leakage and thus evaluate the effectiveness of the damping model. The pipeline has a spiral structure with bends at 25 m intervals, is composed of stainless steel, and has a 24.2 mm inner diameter, a wall thickness of 1.5 mm, and a length of 900 m (Figure 3). The pipeline includes a pump and a pressurized tank (maximum pressure 3.0 kg cm⁻²) at the upstream end, and manual and ball valves at the downstream end. The manual valve is used to control the velocity of flow downstream from the leak, and the ball valve is used to generate a transient event via rapid and complete valve closure. A manometer (with a gauge pressure range of 0 MPa to 0.1 MPa and an accuracy of 0.5% of full range) is set just upstream from the ball valve to collect the pressure transient data. The experimental test cases are shown on the left side of Table I. The pressure transient data measured in each case are represented as the time variation of ΔH in Figure S3. The wave speed in the pipeline was calculated from the duration of one period of the measured pressure transient. Simulated leaks were established at three points: 150 m (upstream leakage, UL), 450 m (middle leakage, ML), and 750 m (downstream leakage, DL) from the upstream end, corresponding to x_L* values of 0.167, 0.500, and 0.833, respectively. The study assumes that the leak detection method knows the leak size relative to the pipe cross-sectional area, a/A, in advance from a water leakage test. Thus, the relative leak size was derived from the rates of leakage volume measured under two different hydrostatic conditions, using equations based on the orifice equation, where H₁ and H₂ are a pair of static piezometric heads, and Q_leak1 and Q_leak2 are the rates of leakage volume measured for the static piezometric heads H₁ and H₂, respectively. (Table I footnotes: (a) z_L is set to zero in this study; (b) ε_L = (x_LO* − x_L*) × 100.)
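As a minimal sketch of the relative-leak-size estimate, the orifice relation Q_leak = a·√(2gh) with z_L = 0 can be inverted for each hydrostatic condition; this stands in for the paper's equations (not reproduced in this text), and the measurement values used below are hypothetical.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def effective_leak_area(q_leak: float, head: float) -> float:
    """Effective leak area a (discharge coefficient times hole area) from
    Q_leak = a * sqrt(2 g h), with the leak elevation z_L set to zero."""
    return q_leak / np.sqrt(2.0 * g * head)

# Hypothetical pair of hydrostatic measurements (m^3/s, m).
a1 = effective_leak_area(2.0e-5, 10.0)
a2 = effective_leak_area(2.8e-5, 20.0)
a = 0.5 * (a1 + a2)                        # average over the two conditions
A = np.pi * (0.0242 / 2.0) ** 2            # cross-section of the 24.2 mm bore
print(f"a/A = {a / A:.4f}")
```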
Method of narrowing down leak location by damping model
The procedure for calculating R_L in the experimental cases is as follows: (1) The change in the piezometric head ΔH at the downstream end of the pipe is measured with and without the leakage for cases 1 to 6. (2) The total damping coefficient R + R_L and the friction-induced damping coefficient R for cases 1 to 6 are derived from the exponential variation of two values, which are calculated by averaging the absolute values of ΔH for t* ranging from 0 to 1 and for t* ranging from 1 to 2, with and without the leakage. (3) The leak-induced damping coefficient R_L for cases 1 to 6 is found by subtracting R from R + R_L. The value of R_L in Equation (14) is calculated by varying α from 0 to 1 in increments of 0.01, β from 1 to 2 in increments of 0.01, and x_L* from 0 to 1 in increments of 0.001. The absolute error between R_L from Equation (14) and that from the experimental pressure transient is calculated for cases 1 to 6, and leak locations are retained for which the absolute error of R_L is almost negligible. The objective of the present study is to narrow down the leak location using the damping model. However, the leak location cannot be completely narrowed down using only the information in the leak-induced coefficient R_L, as shown in Figure S4. Such behavior was highlighted by Meniconi et al. (2014) for different pressure transients in numerical experiments. In fact, a given damping of the pressure transients does not correspond to a unique pressure transient, but admits multiple pairs of solutions (i.e., values of α and β) if no other information is available. Therefore, the time variation of ΔH is used to further narrow down the leak location. The value of ΔH (0 ≤ t* ≤ 0.5) in Equation (8) is calculated by varying α from 0 to 1 in increments of 0.01 and β from 1 to 2 in increments of 0.01. The root mean squared error (RMSE) between ΔH (0 ≤ t* ≤ 0.5) from Equation (8) and the experimental pressure transient is calculated for cases 1 to 6. Leak locations are therefore retained for which both the absolute error of R_L and the RMSE of ΔH are almost negligible.
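The grid search just described can be sketched as follows; `r_leak_model` is a hypothetical placeholder for the paper's Equation (14), which is not reproduced in this text, and the measured R_L value is likewise invented.

```python
import numpy as np

def r_leak_model(alpha, beta, xl):
    # Placeholder only; the real dependence is given by Equation (14).
    return alpha * beta * (1.0 - 0.5 * xl)

R_L_measured = 0.3                          # hypothetical leak-induced damping
alphas = np.arange(0.0, 1.0 + 1e-9, 0.01)   # alpha grid, step 0.01
betas = np.arange(1.0, 2.0 + 1e-9, 0.01)    # beta grid, step 0.01
xls = np.arange(0.0, 1.0 + 1e-9, 0.001)     # x_L* grid, step 0.001
A = alphas[:, None, None]                   # broadcast the three grids
B = betas[None, :, None]
X = xls[None, None, :]
err = np.abs(r_leak_model(A, B, X) - R_L_measured)
mask = err < 5.0e-5                         # epsilon_d threshold from the text
band = np.broadcast_to(X, err.shape)[mask]  # x_L* values with negligible error
print(f"x_L* narrowed to [{band.min():.3f}, {band.max():.3f}]")
```

In the actual procedure, the retained grid points would additionally be filtered by the RMSE condition on ΔH from Equation (8), which removes most of the spurious (α, β) pairs.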
RESULTS AND DISCUSSION
First, ΔH from Equation (8) is fitted to the measured ΔH (0 ≤ t* ≤ 0.5) for cases 1 to 6 to investigate the accuracy and efficiency of this equation. In all cases, Equation (8) reproduces ΔH (0 ≤ t* ≤ 0.5) with a minimum RMSE of approximately 0.2 m (Figure S5). The RMSE is considered because the measured ΔH includes high-frequency noise due to the spiral structure of the pipeline, and the model in Equation (8) neglects the variation of ΔH during the relatively short time until the complete closure of the valve (t* = 0 to 0.03). In this study, the RMSE of ΔH is required to satisfy ε_h < maxε_h = 0.5 m to narrow down the leak location for cases 1 to 6, where ε_h is the RMSE of ΔH (m) and maxε_h is its maximum permitted value. The leak-induced coefficient R_L largely influences ΔH after one period from the initial state. The absolute error of R_L is required to satisfy ε_d < maxε_d = 5.0 × 10⁻⁵ so that the error in ΔH obtained with Equation (14) is negligible, where ε_d is the absolute error of R_L and maxε_d is its maximum permitted value. The right side of Table I presents the results of narrowing down the leak location for cases 1 to 6. Figure 4 presents error plots for the dimensionless leak location x_L* under these conditions and shows that they are dense around the true leak location x_L* for cases 1 to 6. The vertical axis is the dimensionless hybrid error ε* (0 ≤ ε* ≤ 1) combining the RMSE of ΔH and the absolute error of R_L. The dimensionless error ε* is constructed so that ε_h and ε_d can be evaluated as a single combined function, as in the weighted-sum method for multi-objective functions (Marler and Arora, 2004). As presented in Table I and Figure 4, narrowing down the leak location is successful in all cases, because the true leak location x_L* lies within the narrowed-down leak locations x_LN*. The narrowing-down rate of the leak location relative to the total pipe length is at the 30% level for DL (cases 1 and 4), at the 20% level for ML (cases 2 and 5), and less than 10% for UL (cases 3 and 6); thus, it is larger as the true leak location x_L* is nearer to the downstream end. This is because the variation range of the leak location for a given value of R_L is larger as the leak location is nearer to the downstream end when the parameters α and β are varied, as shown in Figure 5. In addition, the leak location x_LO* is taken as optimal where the dimensionless error ε* is minimal for cases 1 to 6. The optimized results in Table I show that the error ε_L of the leak location relative to the total pipe length varies from approximately −5% to 2% for cases 1 to 6. The accuracy of the damping model is thus the same as (or lower than) that of previous leak detection methods using damping (Asada et al., 2019; Wang et al., 2002). Therefore, optimizing the leak location using only the damping model results in an inaccurate estimation.
It is important to narrow down the leak location with the damping model under suitable conditions, and to then find the leak location using a combination of other methods, such as inverse analysis. Compared with the DL and ML cases, in the UL case the true leak location x_L* is farther from the optimized leak location x_LO* at the minimum dimensionless error ε* in Table I and Figure 4. These differences result from neglecting the variation of ΔH during valve closure, which leads to larger errors in α and β when the energy dissipation from the leak is small, as in the UL case. Therefore, by improving the damping model to account for the variation of ΔH during valve closure, it should be possible to narrow down the leak location further under severe conditions. Additionally, the pressure transient is more strongly damped by viscous diffusion in the pipe as the pipe length L increases and the wave speed c and pipe diameter D decrease (Duan et al., 2012; Wahba, 2008); the damping model cannot be used in such cases. In further studies, the validity of the damping model needs to be investigated for different types of pipes and multiple leaks, so that it can be widely applied to field pipes.
CONCLUSION
This paper presented a method for narrowing down the leak location using a damping model of pressure transients in a pressurized pipe. The leak location was narrowed down to between less than 10% and approximately 30% of the total pipe length, and it was revealed that the effectiveness of the damping model increases as the leak location is closer to the upstream end. In this method, we assumed that (1/16)(ΔH/h_L)³ ≪ 1, which can be realized by restricting the valve in advance and suppressing the flow rate in field pipes. Under this condition, the application of the proposed method to field pipes can have considerable benefits, because the operation of rapidly closing the valve will be relatively easy given that the valve opening and the load on the pipe due to the pressure change are small. The proposed damping model therefore has the potential to narrow down the leak location simply and rapidly in field pipes, once the effectiveness of the model in different types of pipes has been investigated in detail.
ACKNOWLEDGMENTS
This work was supported by JSPS KAKENHI Grant Number JP19J10410.
Figure S1. Head and flow rate profiles for the case in which the change in the piezometric head at the leak is ΔH
Figure S2. Diagram of the wave propagation through the pipeline and the pressure transient at the leak for the cases of (a) x_L* ≥ 0.5 and (b) x_L* < 0.5
Figure S3. Time variation of the change in piezometric head in (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, and (f) case 6
Figure S4. Narrowing-down results of the leak location using only the information of R_L under the condition ε_d < 5.0 × 10⁻⁵, where ε_d is the absolute error of R_L between the calculated and measured value, in (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, and (f) case 6
Figure S5. The fitting curve of ΔH calculated by Equation (8) in (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, and (f) case 6
"year": 2020,
"sha1": "a745a63cc5f6f2b4d6abf5c72b3af32d8685134f",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/hrl/14/1/14_41/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2684683cbc048902e8c0789b3f4285186500e2a1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
257864921 | pes2o/s2orc | v3-fos-license | A Computational Approach to Predict the Role of Genetic Alterations in Methyltransferase Histones Genes With Implications in Liver Cancer
Histone methyltransferases (HMTs) comprise a subclass of epigenetic regulators. Dysregulation of these enzymes results in aberrant epigenetic regulation, commonly observed in various tumor types, including hepatocellular adenocarcinoma (HCC). These epigenetic changes could plausibly drive tumorigenic processes. To predict how histone methyltransferase genes and their genetic alterations (somatic mutations, somatic copy number alterations, and gene expression changes) are involved in hepatocellular adenocarcinoma, we performed an integrated computational analysis of genetic alterations in 50 HMT genes present in hepatocellular adenocarcinoma. Biological data were obtained from a public repository comprising 360 samples from patients with hepatocellular carcinoma. From these data, we identified 10 HMT genes (SETDB1, ASH1L, SMYD2, SMYD3, EHMT2, SETD3, PRDM14, PRDM16, KMT2C, and NSD3) with a significant genetic alteration rate (14%) within the 360 samples. Of these 10 HMT genes, KMT2C and ASH1L have the highest mutation rates in HCC samples, 5.6% and 2.8%, respectively. Regarding somatic copy number alteration, ASH1L and SETDB1 are amplified in several samples, while SETD3, PRDM14, and NSD3 show a high rate of large deletions. Finally, SETDB1, SETD3, PRDM14, and NSD3 could play an important role in the progression of hepatocellular adenocarcinoma, since alterations in these genes are associated with decreased patient survival relative to patients without such alterations. Our computational analysis provides new insights that help to understand how HMTs are associated with hepatocellular carcinoma, as well as a basis for future experimental investigations using HMTs as genetic targets against hepatocellular carcinoma.
Introduction
Liver cancer is the third leading cause of cancer deaths in the world; about 830 000 people worldwide died of this type of tumor in 2020, according to the World Health Organization (WHO). 1,2 Liver cancer is the fifth most frequent cancer in men (7.5% of the total) and the ninth in women (3.4%), with an unfavorable prognosis (mortality/incidence: 0.95) and increasing incidence. [1][2][3] Hepatocarcinoma (HCC) represents 70 to 85% of all primary liver cancers diagnosed. 3,4 The initial development of HCC has been mainly associated with liver cirrhosis, which is often related to chronic hepatitis B virus (HBV) infection, implicated in 50 to 80% of HCC cases. A family history of primary liver cancer has also been associated with HCC and shows a synergistic effect with HBV infection. [4][5][6] In addition to risk factors, it has long been known that cancer cells, such as liver cancer cells, undergo genetic and epigenetic changes. Genomic analyses have revealed the widespread occurrence of mutations in epigenetic regulators and a large number of epigenome alterations in tumor cells. [7][8][9] It is known, moreover, that genetic and epigenetic mechanisms influence each other and work cooperatively to enable the acquisition of the pathological features of cancer. [8][9][10] Recent studies have shown that dysregulation of a group of proteins called histone methyltransferases (HMTs) leads to aberrant histone methylation patterns and contributes to the pathogenesis of many human cancers, 11 including hepatocellular carcinoma. 11,12 Methylation of lysine residues in histones, which is controlled by histone methyltransferases and demethylases, is an important player in epigenetic regulation. More than 50 HMTs have been identified in humans. 12,13 Structurally, HMTs are a diverse group of proteins that can be broadly classified into two functional enzyme families, the SET domain-containing methyltransferases (suppressor of variegation, enhancer of zeste, Trithorax) and the DOT1-like lysine methyltransferases. 14,15 Emerging evidence indicates that genetic alterations of several HMTs that have oncogenic or tumor suppressor functions play an important role in the initiation and progression of cancer. 8,13,16 These alterations can affect gene transcription (increasing the expression of oncogenes and/or suppressing tumor suppressor genes), DNA repair (increasing genomic instability), and cell replication (allowing evasion of various checkpoints during the cell cycle), and they contribute to the development of carcinogenesis and to resistance to conventional therapies. [17][18][19]
A Computational Approach to Predict the Role of Genetic Alterations in Histone Methyltransferase Genes With Implications in Liver Cancer
Tania Isabella Aravena 1*, Elizabeth Valdés 1*, Nicolás Ayala 2 and Vívian D'Afonseca 3
During the development of HCC, mutations and deregulations of histone-modifying enzymes have been described. For example, overexpression of EZH2, a histone 3 lysine 27 (H3K27) methyltransferase that functions as the catalytic subunit of the repressive Polycomb complex 2, has been associated with poor prognosis and an aggressive phenotype in patients with HCC through the regulation of genes related to chemotherapy resistance. 20,21 A recent study demonstrated an interregulation between EZH2 and cell cycle-related kinase (CCRK) that is critical in the hepatocarcinogenic process and tumor progression. 22 Similarly, JMJD1A, a histone 3 lysine 9 (H3K9) demethylase, as well as KDM5B and LSD1 (histone 3 lysine 4 demethylases), are overexpressed in HCC patients and are associated with poor prognosis and increased invasiveness. 23 Another important HMT is KMT2C (lysine methyltransferase 2C), also called MLL3 (mixed lineage leukemia 3), which is associated with H3K4 methylation and is mutated in 8% of tumors overall. 24 Whole genome sequencing (WGS) studies have identified genetic alterations in genes and pathways involved in HCC development, such as the KMT2B gene, a cognate of the KMT2C gene. 25 In addition, upregulation of the histone methyltransferase SETDB1 (SET domain bifurcated 1), an epigenetic regulator responsible for methylating lysine 9 of histone H3 (H3K9), is associated with HCC progression, aggressiveness, and poor prognosis. 26 SETDB1 inactivation prevents cancer cell migration and eliminates lung metastasis in a mouse model. 27 Finally, members of another HMT group, the NSD family (NSD1-3), play an important role in the expansion of different tumors; 28 when overexpressed in tissues and cell lines, as in the case of the NSD1 gene, they are associated with a poor prognosis in HCC. Deletion of this gene has been shown to inhibit the proliferation, migration, and invasion of cancer cells. 29 This new knowledge has positioned the enzymes involved in epigenetic pathways as new therapeutic targets for HCC. To that end, it is important to know the behavior of methyltransferase alterations associated with various types of cancer in order to generate a specific genetic atlas that provides new target genes for cancer control. Computational analysis of the behavior of these HMT genes, in terms of their alterations, mutations, and other genetic and genomic aspects, opens the way to new in vitro studies. Therefore, we propose to determine through a computational approach a specific genomic landscape for HMTs in HCC and its relationship with patient prognosis, and to propose HMT genes as new targets for HCC management.
Categories of biological data associated with HCC used in the computational data mining
Only the samples that presented the data of interest (360 samples) were considered in our study. The categories of biological data used in this study were as follows. a) Somatic mutations: the mutations reported for each gene, with their type and any available functional annotation. b) Somatic copy number alterations (SCNA): coded following the repository's convention (deep deletion: −2; shallow deletion: −1; diploid: 0 (contains no alteration); gain: 1; and amplification: 2). c) mRNA expression: identified by the RNASeq V2 Illumina sequencing methodology. The HMT gene expression data used were normalized against diploid samples that contain no SCNA-like alteration (data provided by cBioPortal). Data are provided as relative gene expression, and values are represented as Z-scores. d) Clinical data of the patients, such as weight (kg), vital status (alive or dead), follow-up time since cancer diagnosis (months), the genetic alterations mentioned above, and histological neoplasm grade (G1, G2, G3, and G4).
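To make the normalization in item c) concrete, the following minimal sketch recomputes expression Z-scores against diploid samples. It is an illustrative re-creation, not the repository's actual pipeline: the data frame layout, variable names, and toy values are assumptions.

```python
# Hypothetical sketch: recompute cBioPortal-style mRNA expression Z-scores
# relative to diploid samples. Series layout and values are assumptions,
# not the actual TCGA/cBioPortal schema.
import pandas as pd

def zscore_vs_diploid(expr: pd.Series, scna: pd.Series) -> pd.Series:
    """Z-score each sample's expression of one gene against the mean and
    standard deviation of samples that are diploid (SCNA code 0) for it."""
    diploid = expr[scna == 0]
    return (expr - diploid.mean()) / diploid.std(ddof=1)

# Toy data for a single HMT gene (e.g. SETDB1), five samples A..E:
expr = pd.Series([5.1, 9.8, 4.9, 5.3, 10.2], index=list("ABCDE"))
scna = pd.Series([0, 2, 0, 0, 2], index=list("ABCDE"))  # 0=diploid, 2=amplified
print(zscore_vs_diploid(expr, scna))
```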
Genetic alteration map of HMT genes in HCC samples
We selected HMT genes with an overall genetic alteration rate greater than 14% in 360 hepatocellular carcinoma samples, proceeding with these genes in all subsequent steps of the study.
Characterization of somatic mutation in HMT genes from HCC samples
Each HMT gene that presented a rate greater than 14% of genetic alterations in the HCC samples reported in the cBioPortal repository was evaluated for its mutation content. The types of mutations present in each gene were evaluated based on their descriptions. In addition, the available annotations regarding the biological function or effect of each mutation were evaluated for each gene studied. The figures of the mutated genes were generated with the MutationMapper tool available within the repository used.
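The mutation tally described here can be reproduced from an exported mutation table. The sketch below is a hypothetical illustration in Python; the table schema and values are assumptions rather than the actual cBioPortal export format.

```python
# Hypothetical sketch: tally mutation types per HMT gene from a mutation
# table exported from the repository. Column names and rows are toy data.
import pandas as pd

muts = pd.DataFrame({
    "gene": ["KMT2C", "KMT2C", "ASH1L", "SETDB1", "KMT2C"],
    "type": ["Nonsense", "Missense", "Missense", "Frameshift", "Nonsense"],
})

n_samples = 360  # cohort size used in this study
counts = muts.groupby(["gene", "type"]).size().unstack(fill_value=0)
rate = muts.groupby("gene").size() / n_samples * 100  # crude per-gene rate

print(counts)
print(rate.round(1).rename("mutation rate (%)"))
```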
Correlation between mRNA expression and SCNA-like alterations for HMT genes in hepatocellular carcinoma samples
We evaluated the correlation between the relative gene expression values and the presence of the different SCNA-like alterations of the 10 HMT genes. For this analysis, relative mRNA expression Z-scores normalized against diploid samples and biological data on SCNA-like alterations (deep deletion, deletion, diploid, gain, and amplification) were used. For the correlation and validation analyses, the non-parametric Spearman and Kruskal-Wallis tests were used. Statistical significance was defined as P-values less than 0.05.
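As an illustration of this test pairing, the sketch below runs Spearman and Kruskal-Wallis tests on toy data with SciPy. The SCNA coding follows the convention given above, but the sample values are invented for demonstration.

```python
# Hypothetical sketch of the expression-vs-SCNA correlation test described
# above. SCNA codes follow the -2..2 convention; arrays are toy data.
import numpy as np
from scipy.stats import spearmanr, kruskal

scna = np.array([-2, -1, 0, 0, 0, 1, 1, 2, 2, 2])   # copy-number class per sample
zexpr = np.array([-1.8, -0.9, 0.1, -0.2, 0.0,
                  0.7, 0.5, 1.4, 1.9, 2.1])          # mRNA Z-score per sample

rho, p_spear = spearmanr(scna, zexpr)

# Kruskal-Wallis compares expression distributions across SCNA classes.
groups = [zexpr[scna == c] for c in np.unique(scna)]
h, p_kw = kruskal(*groups)

print(f"Spearman rho={rho:.2f} (p={p_spear:.3g}); Kruskal-Wallis p={p_kw:.3g}")
```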
Stratification of patient data that presented genetic alteration in HMT genes
We evaluated the correlation between clinical attributes of the patients, such as altered genomic fraction, weight (kg), and histological neoplasm grade (G1-G4), and the presence and/or absence of the genetic alterations described in this study (somatic mutations, SCNA, and differential mRNA expression). The data for each studied gene (SETDB1, ASH1L, SMYD2, SMYD3, EHMT2, SETD3, PRDM14, PRDM16, KMT2C, and NSD3) that presented genetic alteration were associated with the clinical records containing this information. A pool of 67 unaltered samples was used in this analysis. To validate the analyses, non-parametric statistics, namely the Mann-Whitney U test and the chi-square test, were used. Statistical significance was defined as P-values less than 0.05; Kaplan-Meier analysis was used for the survival evaluation described below.
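A minimal sketch of this stratified comparison follows, assuming toy weight and grade data; the real analysis used the cohort's clinical records.

```python
# Hypothetical sketch of the stratified comparison described above: weight is
# compared between altered and unaltered groups with Mann-Whitney U, and the
# histologic-grade distribution with a chi-square test. All values are toy data.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

weight_altered = np.array([58, 62, 66, 71, 74, 78])    # kg, altered-gene group
weight_unaltered = np.array([63, 70, 77, 84, 90, 97])  # kg, unaltered pool

u, p_w = mannwhitneyu(weight_altered, weight_unaltered, alternative="two-sided")

# Rows: altered / unaltered; columns: histologic grades G1..G4 (toy counts).
grade_table = np.array([[3, 10, 14, 4],
                        [12, 25, 9, 1]])
chi2, p_g, dof, _ = chi2_contingency(grade_table)

print(f"weight: U={u:.0f}, p={p_w:.3f}; grade: chi2={chi2:.2f}, p={p_g:.3f}")
```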
Computational analysis of patient survival
The survival probability of the individuals with hepatocellular carcinoma represented in the 360 samples was evaluated with respect to genetic alterations in the SETDB1, ASH1L, SMYD2, SMYD3, EHMT2, SETD3, PRDM14, PRDM16, KMT2C, and NSD3 genes. As variables, we compared a group in which the samples carried genetic alterations in these 10 genes with a group in which the samples did not. For both groups, clinical data such as vital status (alive or dead) and lifetime (given in months) were considered. Deaths were treated as events, and patients alive at last follow-up were censored. This analysis provides the probability of survival of an individual over time. Statistical significance was defined as P-values less than 0.05. The error risks of this work are minimal, since the results were produced computationally. The main residual risk is the propagation of an error from the database, such as a wrong sample or wrong values; however, statistical validation was applied to the analyses to reduce these risks.
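The survival comparison can be sketched with the lifelines package (an assumption on our part; the original curves were produced within the repository's own tools). Toy follow-up times and event flags stand in for the cohort data.

```python
# Hypothetical sketch of the Kaplan-Meier survival comparison described
# above, using the lifelines package. event=1 marks death; patients alive
# at last follow-up are right-censored (event=0). All values are toy data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

months_alt = [4, 11, 19, 27, 40, 55, 62, 80]     # follow-up, altered group
event_alt = [1, 1, 0, 1, 1, 1, 0, 1]
months_unalt = [9, 22, 35, 48, 60, 75, 90, 100]  # follow-up, unaltered group
event_unalt = [0, 1, 0, 0, 1, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(months_alt, event_alt, label="HMT altered")
print(kmf.survival_function_)                    # stepwise survival probability

res = logrank_test(months_alt, months_unalt, event_alt, event_unalt)
print(f"log-rank p = {res.p_value:.3f}")
```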
Results
The database studied presents 366 samples from patients with hepatocellular carcinoma. However, we used 360 samples, which presented the genetic features selected for this study such as mutations, SCNA, and differential gene expression for HMT genes. 6 Other clinical data are summarized in Table 1.
Overview of genetic alteration in HMT genes from HCC
Among the 50 human HMT genes described in the literature, 10 were selected for the present study based on their alteration rate in the HCC samples (more than 14%) (Figure 1). These genes were SETDB1, ASH1L, SMYD2, SMYD3, EHMT2, SETD3, PRDM14, PRDM16, KMT2C, and NSD3, which were altered in 293 (81%) of the studied HCC samples. This set of genes presented several types of genetic alteration, namely somatic mutations, SCNA-like alterations, and changes in mRNA expression. The rate of alteration varied from 14% (PRDM16, KMT2C, and NSD3) to 43% (SETDB1). SETDB1 (43%) and ASH1L (27%) were the most altered genes in the HCC cohort. In terms of gene expression, SETD3 (19%) and NSD3 (19%) showed mRNA underexpression in several samples (heat map, Figure 1).
Analysis of SCNA-like alterations revealed that SETDB1, ASH1L, and PRDM14 are amplified in hepatocellular cancer
The 10 HMT genes showed different rates of SCNA-like alteration; nevertheless, the genes with the highest amplification and deletion rates can be singled out. ASH1L (48 samples), SETDB1 (43), and PRDM14 (42) showed high-level amplification in more than 10% of the samples, making them the most amplified genes in this HCC cohort. In contrast, 3 HMT-encoding genes, NSD3 (189), PRDM16 (145), and SETD3 (129), were deleted in more than 30% of the hepatocellular carcinoma samples, demonstrating an expressive genetic loss for these elements.
Furthermore, regarding the level of mRNA expression, SETDB1 (30%), SMYD2 (15%), and EHMT2 (14%) showed a distinct pattern of mRNA overexpression. In contrast, the SETD3 and NSD3 genes showed the highest rates of mRNA underexpression in the HCC cohort, 13.7% and 4.5%, respectively. When comparing mRNA expression with the type of SCNA alteration, a trend can be observed of increased expression of certain genes when there is gain or amplification and decreased expression when there is gene deletion. This type of gene response can be seen in Figure 3, which shows the correlation analysis between mRNA expression and the type of SCNA alteration for the SETD3, NSD3, and KMT2C genes. The correlation between genomic gain or loss and the level of mRNA expression has been described for several genes present in malignant cells. 8,13
Altered HMT genes decrease overall patient survival
Concerning the analysis of overall patient survival, alterations in the SETDB1, SETD3, PRDM14, and NSD3 genes were linked with decreased survival probabilities in patients with HCC over time (months). Only the statistically significant cases among the 10 HMT genes studied are represented here; these data can be seen in Figure 4. The x-axis represents time in months, while the y-axis represents the probability that a patient with HCC survives, whether or not a given HMT is altered. Both groups of samples were considered: samples in which the HMT gene was altered and samples in which the same gene presented no alteration. The crosses in the figure indicate the patients who died from the disease. The blue line indicates patients without the altered HMT gene, while the red line indicates patients with the altered HMT gene. The graph therefore shows how the survival probability of these patients evolves over time, in the absence or presence of alterations in HMT genes. Our results show a higher probability of survival for patients belonging to the group whose HMT genes are unaltered compared with the group whose HMT genes are altered, in the 4 indicated cases. For example, for the altered SETD3 gene (Figure 4a), from the time of diagnosis until 40 months, a decrease in survival probability from 100% to 50% can be observed. In the unaltered group this decrease is weaker over the same period (40 months), reaching around 70% survival probability. After 40 months, the probability is the same in both groups. After 100 months, the survival probability with altered SETD3 is around 30%; however, after 80 months a slightly better scenario can be observed for patients with altered SETD3 in contrast with the unaltered group. Similar behavior is seen for the SETDB1 gene (Figure 4b). For SETDB1, in the first 60 to 70 months from the time of diagnosis, there is a difference in the survival profile between the unaltered and altered groups: the survival probability falls from 100% to 50% in the altered group, in contrast to 100% to 60% in the unaltered group. After this period, the survival percentage remains the same in both groups. For PRDM14 (Figure 4c), a decrease in survival probability to 40% is observed around 60 months, and after 80 months the probability decreases until reaching zero in the altered group. In the unaltered group, the probability is also lower after 100 months, around 15%.
However, in the unaltered group, patients live longer in comparison with the group carrying altered PRDM14. Finally, when the NSD3 gene is altered, within the first 20 months from diagnosis the survival probability decreases from 100% to 60% in comparison to the unaltered group (Figure 4d). At 60 months, for the altered group, the probability decreases below 40%, and this pattern of decline continues over time, settling at 20% at the 90-month follow-up.
Altered HMT genes are probably involved in HCC progression
The clinical data of the patients were stratified to perform various statistical analyses to find relationships between the genetic alterations in the HMT genes and patient attributes such as weight and cancer progression, described by the neoplasm histologic grade (G1-G4) (Figure 5). We found that HMT genes such as SETDB1 and EHMT2 are more frequently altered in patients with lower body weight, whereas in patients who do not show genetic alteration in this group of HMT genes, the weight is higher. For the SETDB1 gene (Figure 5a), patients with the altered gene presented body weights in the range of 60 kg to 80 kg; in the unaltered group, the range was from 60 kg to almost 100 kg. Similar results were observed for the EHMT2 gene (Figure 5b), where the altered and unaltered groups presented a comparable pattern of weight ranges.
Another result concerns the neoplasm histologic grade. For the SETDB1 and SMYD3 genes, the advanced grades (mainly G3, and G4 for SETDB1) are found in a higher proportion in the altered group than in the unaltered group (Figure 5c and d), suggesting a tendency toward cancer spread and progression in the altered group in comparison with the unaltered group.
Discussion
In the last decades, a biological revolution has taken place, in which an enormous amount of biological/biomedical information has been generated and made available in public databases. Many databases and computational tools are created daily to harbor and deliver new biological information about all forms of life. For humans, many databases share biological/biomedical information in raw or processed format, whether genomics, proteomics, metabolomics, or drug design data, for example. However, beyond making the data available, it is important to process and extract useful information from these repositories, which could answer various questions about human health.
One way to reach this point is through in silico analysis of genetic and genomic data from diseases such as cancer. Another important point is to choose carefully the data set that can answer, through a computational approach, biological questions that affect human health. In our work, we selected a group of genes that could be involved in epigenetic regulation in both healthy and malignant cells, acting in different ways. Because epigenetic changes are reversible, they provide a unique opportunity for pharmacological intervention through inhibitors designed as a new class of anticancer drugs. 32,33 Recent evidence shows that aberrant activity of HMTs, due to amplification, deletion, or mutation of their corresponding genes, contributes to the initiation and progression of cancer. 8,13,33 Consequently, a promising strategy could target populations of patients who are carriers of these alterations. For years, it has been known that different types of cancer share common molecular mechanisms whose dysfunction allows uncontrolled cell proliferation through the deregulation or mutation of genes that positively or negatively influence the
regulation of cell proliferation, migration, and differentiation. 34 Although the term genetic generally leads to an understanding of cancer as a hereditary disease, this is true only in a small percentage of cases. In most tumors, the alterations described are only somatic and therefore cannot be transmitted to offspring, 35,36 so it is important to study mutations linked to cancers. Here we identified 55 mutations in the 10 HMT genes studied. Although none of them had a biological effect annotation, it is known that indel (insertion and deletion) mutations, as well as premature termination of protein synthesis, have serious implications for the functioning of the affected protein. 37,38 According to the My Cancer Genome database (www.mycancergenome.org), the KMT2C gene, encoding an H3K4 histone methyltransferase, is mutated at a rate higher than 5% in many solid tumors. In the present study, the same mutation pattern was found; 39 this gene presented a mutation rate of 5.6%, with many nonsense mutations that could lead to its inactivation. It is known that disruption of KMT2C could be related to the cancer process through transcriptional deregulation of several pathways. 40 This hypothesis has been described for colorectal cancer, 40 for example, and the data analyzed here showed a similar pattern of genetic alteration, which could have implications in HCC. In addition, mutagenic mechanisms alone have long failed to explain all cancer cases. Thus, from the etiopathogenic point of view, epigenetic changes have been implicated in the development of different cancers. 8,13 Other highly relevant alterations are somatic alterations in the number of gene copies (SCNA), which have been widely described in cancer cells. Amplifications of gene regions that encompass entire genes are seen as driver alterations, as they can lead to the initiation of a process of cellular malignancy. We found that ASH1L, SETDB1, and PRDM14 show a high level of amplification, which could indicate that they are located in genomic regions that are amplified in around 10% of the studied samples, often leading to increased expression, as we demonstrated in this study. In contrast, the NSD3, PRDM16, and SETD3 genes present homozygous deletion in 30% of HCC samples, reducing their expression in the studied samples. Both amplification (gain of function) and deletion (chromosomal instability and loss of function) could lead to cancer development processes, which is largely being determined experimentally. 41,42 With respect to alterations in HMT gene expression, SETDB1 is upregulated in our study and in several other human cancers, such as ovarian cancer, endometrial cancer, lung adenocarcinoma, breast cancer, and HCC. Upregulation of SETDB1 leads to several alterations in HCC tissues. 27,43 Another HMT gene found to be upregulated is ASH1L, which encodes a member of the trithorax group of transcriptional activators and is also overexpressed in liver cancer. 17,44 Furthermore, the SMYD2 and SMYD3 genes presented high expression in the current investigation. The SMYD2 gene has been reported as overexpressed at the mRNA level in pediatric acute lymphoblastic leukemia and in gastric and liver cancer. Some research articles have linked the SMYD2 gene with inhibitory functions on tumor suppressor proteins such as p53, Rb, and PTEN. [45][46][47] Additionally, SMYD3 has been linked to several human cancers; high levels of this enzyme are expressed in colorectal, liver, and breast cancers.
48,49 Three other HMT genes presented overexpression. The EHMT2 gene has been related to fundamental functions in embryogenesis in genetic mouse models; deletion of EHMT2 in mice resulted in embryonic lethality. 50,51 EHMT2 overexpression has been reported in several types of cancer, such as lung cancer, multiple myeloma, ovarian carcinoma, and liver cancer, 52,53 and such overexpression is associated with decreased patient survival. 54,55 The PRDM14 gene, in turn, plays an important role in resetting and maintaining pluripotency in embryonic cells. PRDM14 expression has not been detected in healthy adult tissues; however, genomic amplification, methylation, and misexpression of PRDM14 have been detected in several human tumors, and the gene has been associated with the initiation of several cancers. 56,57 Finally, although KMT2C is often deleted in myeloid leukemias, 58 in our findings this HMT gene presented elevated expression.
In our results, however, SETD3 and NSD3 showed underexpression in several samples. Generally, the NSD3 gene is amplified in some cancers, such as colorectal cancer. 13 SETD3 (SET domain-containing 3), a member of the protein lysine methyltransferase family whose function is to catalyze the addition of methyl groups to lysine residues, 59 is overexpressed in lymphoma, kidney tumors, and invasive breast cancer. 60,61,62 Another study showed that the SETD3 expression level correlates with the proliferation of liver cancer cells in a xenograft mouse model. 63 Regarding the relation between alterations in HMT genes and patient survival, our study revealed that SETDB1, PRDM14, SETD3, and NSD3 could affect patient survival, leading to poor prognosis. SETDB1 is overexpressed in several cancers, such as breast cancer, non-small cell lung cancer, prostate cancer, colorectal cancer, acute myeloid leukemia, glioma, melanoma, pancreatic ductal adenocarcinoma, liver cancer, nasopharyngeal carcinoma, gastric carcinoma, and endometrial cancer. In colorectal cancer, this genetic alteration is related to poor prognosis and decreased patient survival. 26 Similarly, NSD3 is described as a regulator of the apoptotic process of lymphocytes, and its high expression could play an important role in the progression of breast and colorectal cancer, leading to a worse prognosis. 13,64 Here, we report these genes as likely targets for better studying HCC prognosis, since they are involved in decreased patient survival. These results could also open new treatment pathways and establish new protocols for evaluating patient prognosis. In addition, the SETD3 gene is associated with a poor prognosis in patients with (triple-negative) breast cancer; if the patient harbors a mutation in the p53 gene in addition to altered SETD3 expression, the prognosis is worse, even in patients with ER-positive tumors. 62 Finally, for the PRDM14 gene, abnormal expression associated with metastasis and invasion is observed in patients with colorectal cancer. PRDM14 overexpression is also related to stage III colorectal cancer and enhances the invasive, drug-resistant, and in vitro cell-dividing properties of colon cancer cells. 65 An important finding presented here concerns the relationship between patient weight and genetic alterations in the SETDB1 and EHMT2 genes. Both genes are more frequently altered in patients with lower weight compared with the control group (without alteration in either gene). In the control group, the patients showed a higher body mass index (BMI). It is known that body mass index can be closely related to the prognosis and mortality of various diseases, including cancer. 66 For example, overweight men have a better prognosis than normal-weight men with HCC, whereas normal-weight women have a better prognosis than overweight women. 66 No study has related the weight of HCC patients to alterations in the SETDB1 and EHMT2 genes. These findings could indicate a worse prognosis for patients who carry alterations in SETDB1 and EHMT2 and who present a decrease in BMI.
Our findings indicated that SETDB1 and SMYD3 are more frequently altered in patients with histological grades 3 and 4 than in the control group (without alteration, across grades G1-G4). This likely indicates that SETDB1 and SMYD3, when disrupted, might be involved in HCC growth and propagation. SETDB1 has been related to histological grade (initial grade) in patients with breast cancer. 67 SMYD3 is a histone methyltransferase previously linked to cancer cell invasion and migration; in breast cancer, it promotes the epithelial-mesenchymal transition. 68 The alterations in advanced histological neoplasm grades such as G3 and G4, as demonstrated here, could reflect their roles as invasion and migration factors in HCC cells.
It is interesting to note that the SETDB1 gene is probably of particular importance in HCC. SETDB1 was one of the most altered genes, presenting a high level of amplification, and it is overexpressed in several samples. In addition, it was associated with decreased patient survival, it is altered in patients with reduced body mass index, and its genetic alterations are related to advanced histological neoplasm grades (G3-G4). All these features point to SETDB1 as a candidate HCC driver gene, and in vitro experiments are needed to better study the behavior of this gene in HCC.
Conclusions
For the scientific community, it is important that different types of biological data are available in public repositories. Computational analyses allow us to understand the details of biological processes faster and more accurately, and for several studies the first steps are taken with a computational approach. This type of analysis can improve accuracy and guide further in vitro experiments. Currently, beyond generating significant amounts of biological data, it is important to extract useful biological information from the genomic data already available. Biological data mining can therefore provide a crucial foundation for experimental research. In this approach, through the extraction of biological data from public repositories, we were able to identify that certain alterations in HMT genes could likely be involved in hepatocellular carcinoma.
Our findings provide strong evidence that genetic alterations in HMT genes, such as somatic mutations, SCNA, and gene expression changes, may play an important role in the generation and development of hepatocellular carcinoma, laying a foundation for future studies. Furthermore, our work provides genetic content for future studies, which could use HMT genes such as SETDB1, PRDM14, NSD3, and KMT2C as interesting targets in in vitro assays for HCC research.
Author contributions
VD: conception of the research, organization of the database, analysis of data, writing of the manuscript. EV, NA, and TIA: analysis of data, writing of the manuscript.
"year": 2023,
"sha1": "e7c614ab0fa1eeb34f042f96c3a710d43c5ded7e",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9813a64e56806d61b9ede609a1852e72a640429b",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Impacts of climate demonstration on seasonal rainfall patterns in the upper watershed of Senegal
Introduction
Since the water cycle is one of the major components of climate, the implications of climate change for rainfall patterns are important. Rainfall is the most important climate factor for both people and ecosystems, and it is easy to measure. For all these reasons, most studies and analyses examine precipitation far more than other climate parameters. Characterizing the impact of climate variability on seasonal rainfall patterns is essential for the socio-economic development of the Senegal River watershed. According to Ardoin-Bardin 1 and Bodian et al., 2 the variability of climatic conditions in West and Central Africa in general, and in the upper Senegal River Basin in particular, no longer needs to be demonstrated. In the Senegal River Basin, climate variability manifests as irregular precipitation and high average temperatures (about 42°C). Recent studies 3 show that climate variability is a phenomenon that has long been studied and characterized; the most important question, for West Africa as for other regions of the world, is the search for explanatory factors. In this study, we examine annual rainfall, temperature, relative humidity, and flow data. Correlation of the meteorological and hydrological data will reveal the drivers of change. Indeed, the watershed, stretched in latitude, is a transition zone between several climates. The study area is the upper watershed of the Senegal River (Figure 1), between longitudes 12°30' and 9°30' West and latitudes 10°30' and 12°30' North. The upper watershed of the Senegal River occupies an area of 218000 km². With its elongated geographical configuration, the watershed is representative of the sub-Guinean, Sudanese, and Sahelian climates. The Sahelian regime (desert climate) is characterized by two seasons: a rainy season from July to September (3 months) and a dry season from October to June (9 months), which is accentuated by the Harmattan. The humid tropical regime (Sudanese climate) is located in the Guinean part of the basin. This climate is close to the subequatorial climate, characterized by abundant rainfall varying, on average (1955-2014), between 1400 mm and 2000 mm per year. The paper is organized as a description of the methodology, followed by the presentation of the results and the discussion; a conclusion completes the work.
Data
The data used must respect two important criteria: first, the length of the time series (60 years), and second, the quality of the data (as little missing or incomplete data as possible). Thus, twenty-two rain gauge stations and four (04) weather stations (temperature and relative humidity data) were selected. They present time series of more than 50 years (from 1955 to 2014) and provide good coverage of the study area (Figure 2). This study focuses on the upper Senegal River Basin. The data were homogenized by the regional vector method of Brunet-Moret 4 (1978). The regional vector is above all a method for criticizing data (and, incidentally, reconstructing missing data), developed at the former ORSTOM (now IRD) in the seventies with the aim of homogenizing rainfall data. Regarding flow data, we chose eight hydrometric stations: Dibia, Sokotoro, and Oualia on the Bakoye; Bafing-Makana and Dakka Saidou on the Bafing; Kidira on the Faleme; and Bakel on the Senegal. The time series of the flow data runs from 1955 to 2014. The Manantali dam influences the flows on the Bafing and the Senegal. The regional vector method made it possible to obtain flow data of very good quality. All the data were provided by the Organization for the Development of the Senegal River (OMVS). A hydro-rainfall map was also produced for better spatial location of the stations in the Senegal River basin (Figure 2).
Methods
In this paper, we adopted the Standardized Precipitation Index (SPI) approach. The Standardized Precipitation Index is a tool developed for the definition and monitoring of drought. It is based on the assessment of the level or frequency of drought in the upper Senegal River Basin.
Rainfall data processing
Over the period 1955-2014, the average rainfall at the twenty-two pluviometric stations is 1108.16mm/year. The spatial distribution of rainfall is very uneven: the Selibabi station, located to the north of the study area, recorded an average of 549.8mm, compared to 1824.5mm in Mamou, in the southern part (Table 1).
Standardized precipitation index
The Standardized Precipitation Index (SPI) is defined by the following formula:

SPI_i = (X_i - X_m) / S_i

where X_i is the cumulative rainfall for a year i, and X_m and S_i are respectively the mean and the standard deviation of the series of annual rainfall observed.
According to the work of Bergaoui et al. 5 and Ardoin-Bardin, 1 the rainfall index gives the level of severity of drought over a series of annual rainfall totals (Table 2). Negative values indicate dry periods and positive values mark wet periods.
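A minimal sketch of the SPI computation defined above, applied to a toy annual rainfall series, is given below; the actual station data and the Table 2 severity thresholds would replace the illustrative values.

```python
# Minimal sketch of the SPI computation used in this study. The rainfall
# series is illustrative toy data, not an actual station record.
import numpy as np

def spi(annual_rainfall: np.ndarray) -> np.ndarray:
    """SPI_i = (X_i - X_m) / S, with X_m and S the mean and standard
    deviation of the observed annual rainfall series."""
    return (annual_rainfall - annual_rainfall.mean()) / annual_rainfall.std(ddof=1)

rain = np.array([1350, 1420, 980, 860, 1100, 1510, 790, 1280])  # mm/year, toy data
idx = spi(rain)
for year, value in zip(range(1955, 1963), idx):
    state = "dry" if value < 0 else "wet"
    print(f"{year}: SPI = {value:+.2f} ({state})")
```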
Average temperatures
The temperatures give an overview of the thermal character of each station. The evolution of temperatures is analyzed from the characteristic monthly average values presented in Table 3. The mean monthly maximum temperature values are between 32.6°C and 34.7°C and occur in the last part of the dry season, that is to say in March at Bakel (Figure 3). During this period, the climate becomes mild (i.e. cool) with the arrival of cold air masses from the northwest.
Relative humidity
Relative humidity is the ratio between the weight of the water vapor contained in the air and the weight it would contain if it were saturated at the same temperature 1 (Table 4). Average relative humidity remains above 50% during the rainy season; Figure 4 reflects this situation.
Upper basin precipitation study
In the study area, the standardized precipitation method indicates a situation predominantly dominated by drought (Table 5). Over the same period, moderate conditions are recorded at around 8.72% and 25.11% (Table 6). These results reflect the severity of the climate in the study area. The climate crisis that has hit the Sudano-Sahelian environment has so far been manifested by an increase in moderate to severe droughts, not by extreme droughts. Moderate drought, outside the first two decades (1955-1974), largely dominates most other decades, except 1995-2004, with only 39.15% of cases of moderate drought but 51.06% of strong drought. Figure 5 presents the evolution of the annual average values of the SPI index for four rainfall stations over the period 1955-2014. These stations are distributed more or less homogeneously over the upper Senegal River Basin. In the 1950s and 1960s, except at Bakel, many positive values, often greater than 1, were recorded everywhere. Rainfall then became almost systematically negative until the beginning of the 1990s; this deficit was particularly marked during the 1980s. Towards the end of the observation period, rainfall remains in deficit, but index values rarely drop below −1, while positive values are slightly more frequent. In the dry period, the Bakel station continues to stand out, since the deficit is not systematic. In Figure 5, only a few limited areas show relative drought in the early decades; the following decades appear almost everywhere marked by drought, especially the 1980s. The mapping of the mean values of the index and of the coefficients of variation over different periods highlights the opposition between the wetter years 1955-1964 and the drier years 1970-2014. This is consistent with the results found in the literature: Paturel et al., 6 Servat et al., 7 Lubes-Niel et al., 8 and Ardoin-Bardin. 1
Annual flow analysis
In the tropical zone, flows are a direct response to rainfall impulses, whose transfer can follow various modalities depending on the size, configuration, relief, geology, and soils of the basin. The increase in annual mean flows (modules) is due to the improvement in rainfall, which nevertheless remains highly variable from one year to another, so that the tendency toward resource recovery remains very uncertain and makes forecasting availability more difficult. Figure 6 shows the variation in annual average flow data. The Bakel station records 558.8 m³/s, whereas the average annual flow at the Sokotoro station represents only a small fraction of this.
Synthetic analysis
A drop in the river's water supply, comparable to the drop in rainfall over the basin, is observed. The average annual flow of the river is part of a continuous cycle of decline since the beginning of the last century. This reflects the character of the river's regime. The regime is unimodal, because the only mode of feeding is rainfall; this rainfall is irregular and shows especially notable decreases, together with other parameters such as relative humidity. In addition to the long-term downward trend in flows, the river's hydraulic regime is characterized by high interannual (year-to-year) and intra-annual (month-to-month) variability. The sawtooth evolution of annual average flows is reminiscent of that of rainfall (see below). Whether in a dry or a wet sequence, a year of high water levels can be followed by a year of severe deficits; we are in the realm of unpredictability. These graphs show that, from 2002 onward, there is a drastic drop in the relative humidity, the average rainfall, and the average flow of the basin. This decline in hydrological inputs mainly explains the advent of a new drought cycle.
Discussion
The SPI method has made it possible to highlight the general downward trend in rainfall from the 1961-70 decade onward, which worsened in the following decades. However, the entire basin was not affected in the same way, given the influence of local climates. This result is consistent with statistical tests applied to annual rainfall. This climatic variability is manifested by a decrease in relative humidity and an increase in air temperatures, which affects the hydrological cycle in general and the formation of rain clouds in particular, hence the low annual rainfall totals. It appears that air temperature and relative humidity are factors in the temporal variability of the seasonal rainfall regimes in the upper watershed of the Senegal River; these atmospheric parameters strongly influence the temporal variability of rainfall variables. The results of the study show a decrease in the rainfall variables as well as in relative humidity, and a rise in temperatures over the last decades in the watershed. On the basis of these observations, it can be said that the variability of seasonal rainfall patterns depends partly on the drop in relative humidity and partly on the rise in air temperature. With air temperatures expected to rise, a change in the seasonal climate patterns of the watershed is to be feared. These results can be related to other works. Based on the work of Richard et al., 9 during the twentieth century the frequency of ENSO-LNSO events is positively correlated with global temperature, and their intensity is highest over the period after 1970. Recent studies by Richard 9 show that, over the same period in southern Africa, their effect on rainfall has increased and droughts have become more pronounced and widespread in space. These results can be related to the increase in magnitude of the variations described around 1970 (Table 7).
Conclusion
The treatments carried out show a persistent reduction of rainfall over the 1970s, 1980s, and 1990s compared with the previous years (1955-1964). In the upper Senegal basin, according to the SPI, the drought generally keeps a moderate character and is very rarely extreme. Thus, this work made it possible to form an idea of the potential impacts of climate change on rainfall trends in the upper Senegal River Basin.
The analysis of hydrometeorological data shows a decrease in flows and precipitation. This decline is visible through the appearance of the drought cycle. The hydrological drought has caused enormous disturbances in water control, agriculture, and the degradation of wildlife reserves, and it has contributed significantly to food insecurity in the upper Senegal River Basin.
"year": 2018,
"sha1": "76ac14dccd76530902085f7347be6db0797e9eca",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/IJH/IJH-02-00149.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e74bc9aea2627d62d4df705d66002bb157347863",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Intermolecular Transmembrane Domain Interactions Activate Integrin αIIbβ3*
Background: The relationship between integrin clustering and activation has been controversial. Results: We show that intermolecular transmembrane domain interaction in integrin αIIbβ3 can induce integrin activation. Conclusion: Integrin clustering can enhance integrin activation. Significance: We provide a new mechanism for integrin activation. Integrins are the major cell adhesion molecules responsible for cell attachment to the extracellular matrix. The strength of integrin-mediated adhesion is controlled by the affinity of individual integrins (integrin activation) as well as by the number of integrins involved in such adhesion. The positive correlation between integrin activation and integrin clustering had been suggested previously, but several trials to induce integrin clustering by dimerization of the transmembrane domain or tail region of integrin α subunits failed to demonstrate any change in integrin activation. Here, using platelet integrin αIIbβ3 as a model system, we showed that there is intermolecular lateral interaction between integrins through the transmembrane domains, and this interaction can enhance the affinity state of integrins. In addition, when integrin clustering was induced through heteromeric lateral interactions using bimolecular fluorescence complementation, we could observe a significant increase in the number of active integrin molecules. Because the possibility of intermolecular interaction would be increased by a higher local concentration of integrins, we propose that integrin clustering can shift the equilibrium in favor of integrin activation.
Integrins, which are heterodimeric transmembrane proteins composed of α and β subunits, are major receptors for cell adhesion (1). Integrin-mediated adhesiveness is determined by the affinity state of each integrin and the number of integrin-ligand bonds formed (valency) (2). The affinity regulation of integrins has been studied extensively, leading to the hypothesis that high affinity states facilitating ligand binding (activation) can be achieved by "inside-out" signals that break the clasp holding the membrane-proximal regions of the α and β tails together and separate the transmembrane domain (TMD) 2 -tail regions (3,4). The valency of integrin-ligand binding can be increased by integrin clustering. Several reports suggested that there might be a positive correlation between integrin activation and clustering. For example, Li et al. suggested the insightful idea that integrin activation and clustering can be coupled by homomeric association of the separated TMDs of active integrins (5). The idea of homomeric association is supported by several studies showing homomeric association in micelles and membrane bilayers (6-8). However, a study that attempted to identify such a relationship between integrin activation and clustering by inducing αIIb-αIIb clustering through an inducible homodimer system revealed that clustering did not induce integrin activation (9). In that study, integrin activation was measured by binding of PAC1 Fab (10), which can recognize only the active form of the integrin regardless of the clustering state of integrins. Similarly, introducing an intermolecular disulfide bond between the transmembrane domains of αIIb integrins does not enhance PAC1 Fab binding (11), suggesting that clustering may not be involved in integrin activation. In both of the studies above, clustering of integrins enhanced PAC1 binding, but the enhanced PAC1 binding was attributed to increased valency, not to activation of integrins, because PAC1 IgM can form a pentamer with a total of 10 available ligand binding sites and can thus bind more efficiently to clustered integrins (10,11). Therefore, it seems to be accepted that there is no correlation between integrin clustering and activation.
Our previous study showed that overexpression of the integrin αIIb TMD-tail or β3 TMD-tail construct can induce activation of integrin αIIbβ3 (12). The activation was proven to result from heteromeric interactions of the TMD-tail of each construct with native full-length integrins, thus breaking apart the intramolecular α and β TMD-tail interaction within an integrin. This conclusion led us to hypothesize that when α and β TMD-tails of integrins are separated during integrin activation, each TMD-tail may interact heteromerically with the TMD-tail of other integrins, and this may cause activation as well as clustering of integrins in proximity. Therefore, to test our hypotheses on the relationship between integrin clustering and activation, we sought a way to cluster integrins through α-β heteromeric TMD interaction. One such way to achieve this involved the generation of chimeric integrin constructs. These were then used to show the intermolecular TMD-tail interaction between integrins. By using the chimeric integrin, we also showed that integrin activation can be coupled to integrin clustering. Based on our results, we propose that clustering of integrins not only increases the number of integrins but also shifts the equilibrium to the active conformation, providing both increased valency and affinity for enhanced cell adhesive contacts.
EXPERIMENTAL PROCEDURES
Plasmids-To construct the chimeric integrin α5-αIIb, we generated the TMD-tail region of the human integrin αIIb subunit by polymerase chain reaction (PCR) using the forward primer (5′-acaaagcttcagctgctccgggccttg-3′, HindIII cleavage site underlined) and reverse primer (5′-agacttctagatcactccccctcttcatc-3′, XbaI cleavage site underlined). The resulting PCR product was digested with HindIII and XbaI and cloned into pcDNA3.1. Next, the extracellular domain of the human integrin α5 subunit was also amplified by PCR using the forward primer (5′-gggtcaagcttatggggagccggacgcca-3′, HindIII cleavage site underlined) and reverse primer (5′-ttgaagctttgtggccacctgacgctc-3′, HindIII cleavage site underlined), and the PCR product was cloned into pcDNA3.1 (Invitrogen) containing the αIIb TMD-tail. To construct another chimeric integrin, β1-β3, the extracellular domain of the human integrin β1 was first generated by PCR using the forward primer (gggtcaagcttatgaatttacaaccaatt, HindIII cleavage site underlined) and an overlapping reverse primer (5′-ctctggctcttctaccacatgaaccatgacctcgtt-3′, β1 sequence underlined). The TMD-tail region of the human integrin β3 was also generated by PCR using an overlapping forward primer (5′-aacgaggtcatggttcatgtggtagaagagccagag-3′, β3 sequence underlined) and reverse primer (5′-actgctctagattaagtgccccggtacgt-3′, XbaI cleavage site underlined). By using these PCR products as template, the β1-β3 fusion was generated by overlapping PCR and cloned into pcDNA3.1. To construct β1-β3-VN, the β1-β3 fusion and VN were amplified by PCR, and the β1-β3-VN fusion was generated by overlapping PCR as above. The GGGGS linker was introduced between β1-β3 and VN, and the C-terminal end of the cytoplasmic tail of β3 (from 736 to 762) in the β1-β3 fusion was excluded to facilitate better VN-VC association.
Cell Lines and Antibodies-Chinese hamster ovary (CHO) and A5 cells were maintained as described previously (12). The CHO-α-VC cell line was kindly provided by Dr. Sanford Shattil (University of California, San Diego) and maintained as CHO cells. For transient transfection, Lipofectamine and Plus reagents (Invitrogen) were used according to the manufacturer's instructions. An anti-human integrin β1 antibody (MAR4) was purchased from BD Biosciences. An antibody specific to the active form of integrin αIIbβ3, PAC1, has been described previously (10).
Immunoprecipitation and Affinity Capture Assay-A5 cells were transfected with the chimeric integrin constructs α5-αIIb and β1-β3. One day after transfection, cell lysates in lysis buffer I (50 mM Tris-Cl, pH 7.4, 1% Triton X-100, 150 mM NaCl, 0.5 mM MgCl2, and 0.5 mM CaCl2) supplemented with EDTA-free protease inhibitor (Roche Applied Science) were clarified by centrifugation and incubated with the anti-human integrin β1 extracellular domain antibody (P5D2), followed by incubation with protein G-Sepharose (GE Healthcare). The resulting precipitates were analyzed by SDS-PAGE followed by Western blotting using antiserum against the integrin αIIb cytoplasmic tail (Rb9449). An affinity capture assay was performed to measure α and β TMD interaction as described previously (12). Briefly, CHO cells were transfected with αIIb and β3 TMD constructs harboring mutations or not, and lysed with lysis buffer II (20 mM HEPES, 150 mM NaCl, 1% CHAPS, and 2 mM CaCl2) supplemented with EDTA-free protease inhibitor. The lysates were incubated with calmodulin-Sepharose (GE Healthcare), and the bound proteins were analyzed by Western blotting.
Flow Cytometry-A PAC1 binding assay was performed as described previously (13). Briefly, 24 h after transfection of various integrin constructs into A5 or CHO-α-VC cell lines, cells were trypsinized and incubated with PAC1 for 30 min at room temperature. The stained cells were washed twice with Dulbecco's modified Eagle's medium (DMEM) and incubated further with anti-IgM conjugated to phycoerythrin. After washing with DMEM, the resulting cells were analyzed using a FACSCalibur (BD Biosciences) for PAC1 or MAR4 binding in the FL2 channel and for P5D2 binding or Venus fluorescence in the FL1 channel.
TMD-Tail Separation of One Integrin Can Activate Neighboring Integrins-Previously, we showed that overexpression of the integrin αIIb TMD-tail construct results in integrin activation by inducing the interaction between the expressed αIIb TMD-tail and the β3 TMD-tail region of intact integrins (Fig. 1A). Thus, we hypothesized that the free tail of integrins may also activate neighboring integrins through the same lateral interaction (Fig. 1B). Similarly, we also hypothesized that clustered integrins may have a greater chance to be activated by this intermolecular TMD-tail interaction than nonclustered integrins, due to the increased chance of the lateral interaction in clustered integrins (Fig. 1C). However, as described above, several trials to induce clustering of integrins failed to produce such activation. According to our hypothesis, the integrin clustering resulting from homomeric TMD-tail interactions would not induce separation of α and β TMD-tails and thus would not have any effect on integrin activation (Fig. 1D). Therefore, to clarify the effect of active integrins on the affinity state of neighboring integrins, we designed a chimeric integrin α5-αIIb β1-β3 that can allow heteromeric α-β TMD-tail interactions between integrins. The chimeric integrin contains the extracellular domains of human α5β1 and the TMD-tail regions of human αIIbβ3. Inclusion of the extracellular domains of integrin α5β1 would ensure specific association between the chimeric integrin subunits, and the TMD-tail regions of integrin αIIbβ3 can interact with wild type integrin αIIbβ3, if such a lateral interaction occurs (Fig. 2A).
Following transfection of the chimeric integrin into CHO cells stably expressing integrin αIIbβ3 (A5 cells), an anti-human integrin β1 extracellular domain-specific antibody that binds to the β1-β3 subunit of the chimeric integrin successfully precipitated the α5-αIIb subunit (Fig. 2B), demonstrating that the two chimeric integrin subunits can associate as expected. Next, we transfected the chimeric integrin α5-αIIb β1-β3 into A5 cells and tested the effect of its expression on integrin αIIbβ3 activation using PAC1, which is a ligand-mimetic antibody against integrin αIIbβ3 (10). When the wild type chimeric integrin was expressed, there was little change in the activation status of integrin αIIbβ3 (Fig. 2C, top panel; Fig. 2D, black circles). However, the activation of integrin αIIbβ3 was significantly increased by expression of a chimeric integrin containing a deletion of the cytoplasmic tail (Δ717) and a point mutation (G708I) in β3 (Fig. 2C, middle panel; Fig. 2D, red filled circle). These mutations are known to block αIIb-β3 TMD association; this results in the provision of a free αIIb TMD-tail of the chimeric integrin to a neighboring integrin αIIbβ3 (Fig. 2C, diagram of middle panel). To confirm the involvement of the chimeric α5-αIIb integrin TMD-tail in the activation of αIIbβ3, we introduced additional mutations (G972L,G976L) into the chimeric integrin that block α-β TMD-tail association (12) (Fig. 2C, bottom panel). As expected, the activating effect of the chimeric integrin was abolished by the G972L,G976L mutations (Fig. 2C, bottom panel; Fig. 2D, red empty circle). The effect of the chimeric integrins on the affinity state of integrin αIIbβ3 was evident when the expression levels of the chimeric integrins were relatively high (Fig. 2, C and D), showing that a high concentration of integrin is required for lateral interaction of integrins. In conclusion, these results suggest that active integrins can activate neighboring integrins by inducing intermolecular heteromeric associations of TMD-tails.
Clustering of Integrins Enhances Integrin Activation-Next, we asked whether integrin clustering resulting from an α-β intermolecular TMD-tail interaction among integrins would induce activation of integrins. To visualize and induce the intermolecular heteromeric TMD interaction of integrins, we utilized bimolecular fluorescence complementation using the Venus fluorophore (14). For this, we used CHO-α-VC cells that stably express the integrin αIIb subunit fused to VC as well as the integrin β3 subunit, as reported previously (15). In addition, we generated a chimeric integrin β1-β3 construct fused to the N-terminal region of Venus (VN) to make β1-β3-VN. The interaction between VN and VC is known to be irreversible; once VN and VC are associated and the fluorescent Venus is assembled, the interaction is stably maintained (16). By using VN-VC dimerization to bring TMD-tails close to each other and thus induce TMD-tail lateral interaction, we asked whether lateral TMD-tail interaction can promote integrin activation (Fig. 3A). However, we note that the VN-VC interaction holds the two integrins together at the end of the cytoplasmic tails through a flexible linker; thus it may not directly force the intermolecular TMD-tail interaction between the clustered integrins, but rather increases the probability of such lateral interaction by placing the two integrins close together.
We transfected integrin α5-αIIb and/or β1-β3-VN into CHO-α-VC cells and measured Venus fluorescence and human β1 expression on the cell membrane. As expected, it was possible to observe Venus fluorescence resulting from clustering of those integrins (Fig. 3B). However, transfection of β1-β3-VN alone also induced high Venus fluorescence. The Venus fluorescence in this condition appears to originate from the endoplasmic reticulum or Golgi, because β1-β3-VN was not exported to the cell surface, as determined by surface staining using an anti-integrin β1 antibody (Fig. 3B, left panel). This intracellular Venus fluorescence is presumably due to nonfunctional complex formation between αIIb-VC and β1-β3-VN mediated by TMD-tail interaction without involvement of head domain interactions. In contrast, when both α5-αIIb and β1-β3-VN constructs were transfected, integrin β1 antibody staining on the cell surface was increased (Fig. 3B, middle panel), showing successful and functional assembly of α5-αIIb and β1-β3-VN on the cell surface. These cells also exhibited spontaneous Venus fluorescence resulting from TMD-tail interaction between the chimeric integrin α5-αIIb β1-β3-VN and integrin αIIb-VC β3 (Fig. 3B, middle panel). As a control, we introduced W968C and I693C mutations into α5-αIIb and β1-β3-VN, respectively (Fig. 3C). These mutations are known to block intramolecular TMD-tail separation by producing a spontaneous intersubunit disulfide bond (17), thus blocking the intermolecular lateral interaction. However, despite blocking intramolecular TMD-tail separation by the disulfide bond, we observed Venus fluorescence from integrin αIIbβ3 and the chimeric integrin (Fig. 3B, right panel). We assume that both integrins, fused to VN and VC, can be closely localized inside the endoplasmic reticulum or Golgi during biosynthesis. Thus, close proximity during biosynthesis may enable the assembly of the complete Venus protein, presumably due to the intrinsic affinity between VN and VC (16), and not due to an intermolecular interaction of those integrins. Alternatively, the intermolecular interactions between the chimeric integrin, integrin αIIbβ3, and the Venus protein may be established earlier than the disulfide bond formation within the chimeric integrin. In either case, we were able to induce clustering of integrin αIIbβ3 and the chimeric integrin with the "lock" of TMD separation in place (Fig. 3C). After we confirmed the surface expression of the chimeric integrin α5-αIIb β1-β3-VN and its association with integrin αIIb-VC β3 as shown above, we tested the effect of such clustering on the integrin affinity state. CHO-α-VC cells transfected with α5-αIIb and/or β1-β3-VN were detached and stained with PAC1 to measure integrin αIIb-VC β3 activation. Venus fluorescence was also measured to determine the amount of clustered integrins in the cells. In CHO-α-VC cells, β1-β3-VN expression alone showed Venus fluorescence due to the nonfunctional intracellular TMD-tail interaction discussed above, and there was no increase in PAC1 binding on the cell surface (Fig. 3D, black circle; Fig. 3E, left panel). Interestingly, when both α5-αIIb and β1-β3-VN were expressed, PAC1 binding was significantly increased (Fig. 3D, red filled circle; Fig. 3E, middle panel). However, when the separation of TMD-tail regions was blocked by disulfide bonds in α5-αIIb and β1-β3-VN, there was no increase in PAC1 binding (Fig. 3D, red empty circle; Fig. 3E, right panel), showing that Venus-mediated clustering alone does not contribute to integrin activation.
Therefore, we conclude that lateral interaction between integrins (or clustering of integrins) via TMD-tail regions can activate integrins, and that the separation of TMD-tails is essential for the lateral interaction-induced integrin activation. We also note that the increased PAC1 binding in the clustered integrins is not due to the increased valency that results from lateral association of the chimeric integrin and αIIbβ3, but is solely due to the activation of αIIbβ3, because PAC1 cannot recognize the chimeric integrin α5-αIIb β1-β3.

FIGURE 2. TMD separation in one integrin can activate neighboring integrins. A, design of a chimeric integrin α5-αIIb β1-β3. Extracellular domains from integrin α5 and β1 ensure the association of the chimeric integrin subunits, and TMD-tail regions from integrin αIIb and β3 can interact with integrin αIIbβ3 via the TMD-tail region. B, assembly of α5-αIIb and β1-β3 confirmed by immunoprecipitation (IP). A5 cells transfected with empty vector (vec) or the chimeric integrin α5-αIIb β1-β3 were lysed and incubated with an anti-human integrin β1 extracellular domain-specific antibody (anti-β1 ECD). Immunoprecipitates were analyzed by Western blotting (IB) with an anti-αIIb tail-specific antibody (left). Whole cell lysates (WCL) were also analyzed to detect expression of α5-αIIb. Note that A5 cells endogenously express αIIb (arrows), and only the additional band corresponding to α5-αIIb (arrowhead) is observed in the immunoprecipitates. C, chimeric integrins containing various mutations were transfected into A5 cells, and their effects on integrin αIIbβ3 activation were tested. Surface expression of human integrin β1 (comprising the β1-β3 integrin subunits) and affinity states of integrin αIIbβ3 were measured by flow cytometry and plotted as dot plots. In the dot plots, horizontal rectangle regions at different levels of chimeric integrin β1-β3 expression were set using a custom Matlab script, and geometric means of PAC1 binding of cells within those regions were calculated and indicated as bold red dots in the FACS dot plots. D, A5 cells transfected, stained, and analyzed as in C. The average of the mean PAC1 fluorescence intensities in each horizontal rectangle region across three independent experiments is plotted against the mean β1-β3 integrin expression. Error bars represent S.E. of the mean PAC1 intensities from the three independent experiments.
To exclude the possibility that the absence of an activating effect of α5-αIIb(W968C) β1-β3(I693C)-VN in CHO-α-VC cells is due to an inability of β3(I693C) to bind to αIIb, rather than to an inability of intramolecular TMD-tail separation, we tested the effect of the I693C mutation on the α-β TMD-tail interaction. The Tac (interleukin-2 receptor)-fused β3 TMD-tail construct (Fig. 4A) bearing the I693C mutation was transfected with the αIIb TMD-tail construct (Fig. 4A), and their association was measured by a pulldown experiment performed as described previously (12). As shown in Fig. 4B, the degree of α-β TMD-tail interaction is similar regardless of whether or not the I693C mutation is present. Thus, the lack of activation induced by the I693C mutation in the β subunit can be attributed to a defect in TMD-tail separation.
DISCUSSION
Here, we demonstrated that there may be a lateral interaction between the TMD-tails of two integrins and that the lateral interaction among integrins can induce the activation of αIIbβ3. Based on our previous observation of a preferential α-β heterodimeric TMD-tail interaction on the CHO cell membrane (12), we favor the interpretation that the activating effect induced by the lateral interaction depends on the heterodimeric TMD-tail interaction between integrins. However, we do not rule out the possibility that a homomeric α-α TMD interaction might also contribute to such activating effects, because the G972L,G976L mutations that inhibited the lateral interaction-induced integrin activation in our assay (Fig. 2, C and D) are also known to inhibit the homomeric TMD interaction (7).
Integrins exist in an equilibrium between "α_o-β_o TMD-tail-bound (inactive)" and "α_o and β_o TMD-tail-separated (active)" states, where o indicates the original integrin pair. Thus, integrin activation can be viewed as an increased probability of the integrin being in the α_o and β_o TMD-tail-separated state rather than in the α_o-β_o TMD-tail-bound state. We reasoned that the separated α_o and β_o will have an increased likelihood of interacting with an α_n or β_n (where n indicates a neighboring integrin pair) of a neighboring integrin in the TMD-tail-separated state, if the α_n or β_n concentration is high in the vicinity of the separated α_o or β_o. This interaction will prolong the α_o and β_o TMD-tail-separated (active) state and shift the equilibrium toward activated integrin. Because the heteromeric intermolecular interaction between α_o and β_n (or β_o and α_n) is identical to the intramolecular TMD-tail interaction between α_o and β_o, the heteromeric lateral interaction-induced integrin activation, unlike that induced by the homomeric TMD interaction proposed by others (5), can only be achieved when there is a high concentration of integrins (α_n and β_n) in the active conformation. Accordingly, the activation achieved by Venus-induced lateral integrin interaction alone is modest (Fig. 3, D and E), presumably due to the competition between intermolecular and intramolecular TMD-tail interactions and to the submaximal clustering induced by VN-VC dimerization. However, when lateral interaction is induced by physiological clustering mechanisms, which may induce higher order clustering (18), and is combined with other affinity modulation mechanisms, it may synergize with them to dramatically increase integrin activation (9,18). Furthermore, we speculate that lateral interaction may also contribute to the growth of integrin clusters and integrin adhesion. For example, once an integrin active state is stabilized by binding to ligand, the TMD-tail separation of that integrin would be stabilized (19). Thus, the intermolecular interaction of integrins can be used for "zipper-like" cell attachment; ligand binding of one integrin recruits another integrin by lateral interaction, and the interaction activates that integrin, thus forming the second adhesion site. Talin is known to bind integrin β tails (20), induce a change in the tilt angle of the β TMD (21) that can separate α-β TMD-tails (22), and activate integrins. Because the lateral interaction between integrins is predicted to involve the same α-β binding interface, talin would inhibit not only intramolecular α-β TMD interactions but also the intermolecular α-β TMD interaction. Therefore, we suggest that lateral interaction-induced integrin activation may be useful for transient cell-substrate interactions where rapid turnover of integrin activity is needed, whereas talin-induced integrin activation may be responsible for more stable cell adhesion.
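To make the equilibrium picture above concrete, the following toy calculation (our illustration, not from the paper) treats activation as a two-state system in which the re-association rate of an integrin's own α-β tails is reduced when the free tails are captured by neighboring active integrins; all rate constants and the capture term are assumptions, chosen only to show the qualitative trend.

# Toy two-state model of integrin activation with lateral tail capture.
# All parameter values are illustrative assumptions, not measured rates.
def active_fraction(conc, k_open=0.05, k_close=1.0, K_lat=5.0, tol=1e-10):
    """Self-consistent active fraction f at local integrin concentration conc.

    Re-association of an integrin's own alpha/beta tails (rate k_close) is
    slowed by a factor (1 + K_lat * conc * f), modeling capture of the free
    tails by neighbors that are themselves in the separated (active) state.
    """
    f = 0.0
    for _ in range(10_000):
        k_close_eff = k_close / (1.0 + K_lat * conc * f)
        f_new = k_open / (k_open + k_close_eff)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

for c in (0.0, 0.5, 1.0, 5.0, 20.0):
    print(f"concentration {c:5.1f} -> active fraction {active_fraction(c):.3f}")

Running the sketch shows the active fraction staying near its baseline at low concentration and rising sharply only when the local integrin concentration is high, consistent with the concentration dependence argued above.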
In conclusion, we suggest here that intermolecular interaction between integrins through heteromeric TMD-tail interactions can activate integrins and that integrin clustering can facilitate the intermolecular interaction by increasing the local concentration of integrins. Thus, integrin clustering may be closely related to integrin activation.

FIGURE 4. A, the β3 TMD-tail construct, Tac-β3 TM, contains the extracellular domain of Tac (interleukin-2 receptor) fused to the TMD-tail region of integrin β3. B, a wild type or GFFKR motif-deleted αIIb TMD-tail construct was transfected with the wild type or I693C mutation-bearing β3 TMD-tail construct. Interaction between the constructs was measured as described previously (12). Note that the I693C mutation of the integrin β3 TMD-tail construct does not inhibit the α-β TMD-tail interaction. | 2018-04-03T04:13:22.313Z | 2014-05-16T00:00:00.000 | {
"year": 2014,
"sha1": "6268e80f38f9fd103bae62c234e994937310d9d1",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/289/26/18507.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "2f71ea1e2b1ed2275badabd9f5d5a92b84c8eae4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
216189763 | pes2o/s2orc | v3-fos-license | An Overview of Disaster Management during the April 25, 2015, Mw7.8 Nepal Earthquake
LESSON LEARNED At 11:46 A.M. local time on April 25, 2015, a destructive earthquake of moment magnitude (Mw) 7.8 struck central Nepal. The epicentral region was located in the Gorkha region. Due to this event, more than 8,000 people were killed and 22,000 were injured. Following this destructive earthquake, some of the affected areas, including Kathmandu, Bhaktapur, Gorkha, and Pokhara, were visited by the first two authors during 6-11 May 2015 in order to assess the performance of disaster management and emergency responses to the 2015 Gorkha earthquake. Based on the observational assessments, the physical damage to buildings and historical monuments is briefly described. Logistics and disaster management performance, along with emergency responses such as shelters, health care, food, treated water, and medical supplies, are also described. Several key lessons learned from the 2015 Nepal earthquake response are highlighted. In addition, a new design of emergency tents is proposed which is suitable for earthquake-prone areas, specifically South Asian regions like Nepal.
Disaster Management Efforts in Nepal
With regard to Nepal's severe vulnerability to natural disasters such as earthquakes, landslides, avalanches, and floods, several central institutions are involved in disaster management at various levels, such as the Regional Natural Calamities Relief Committee.
In 2009, the government released a National Strategy for Disaster Risk Management in Nepal (NSDRMN) (9), a national framework embodying the government's commitment to the protection, growth, and promotion of national heritage and physical infrastructure.
In addition, the government launched the Nepal Risk Reduction Consortium (NRRC) in 2009. The establishment of the NRRC was probably the first effort that brought together national and international key partners and stakeholders in support of the Government of Nepal and engaged them in a number of concrete actions to make a safer and more disaster-resilient society. In this respect, the NRRC developed a disaster risk reduction action plan in line with the Hyogo framework for action (10), focusing on urgent and viable flagship priorities (11). In order to check the NRRC's progress and achievements, the Government of Nepal performed a review of these efforts (12).
Damages of the April 25, 2015 Earthquake
The Nepal earthquake was highly destructive, causing widespread damage to many buildings, mostly within the central part of the country. In this section, based on a visit to the macroseismic region by the first two authors, the most important observations of the physical damage are briefly described.
Following the 2015 earthquake, several earthquake-induced landslides, avalanches, ground fissures, and fault ruptures were observed. Based on these observations, the earthquake triggered some new landslides along the intercity road from Kathmandu westward to Gorkha and Pokhara. At about 62 km west of Kathmandu, a relatively large landslide could be observed, approximately 100 m wide and 50 m long (Figure 2). Because of the continued aftershocks that occurred throughout Nepal in the first weeks, the country also faced a continued risk of landslides. Moreover, the earthquake sparked an avalanche on Mount Everest and a huge avalanche in the Langtang valley, blocking mountain routes, killing several people, and leaving hundreds missing. Furthermore, a typical ground fissure was observed at a distance of about 9 km southeast of Kathmandu toward Bhaktapur, with a NE-SW direction and 2-meter vertical displacement (Figure 3). In all the visited areas, there were many low-resilience buildings that had fully collapsed, tilted, or been severely damaged. Fourteen days after the earthquake, large amounts of debris from destroyed buildings had not been cleared, leaving the affected areas unsightly and polluted. Destruction was also severe in Gorkha District as well as in Langtang village, about 60 km northeast of Kathmandu. In Kathmandu, unfortunately, disorganization and confusion were more evident in some developed areas than in less developed ones. For instance, business and shopping centers and residential complexes sustained major damage (Figure 4).
According to the geology of the Kathmandu basin, an amplification effect is expected (13,14). The western part of Kathmandu, where the old and historic fabric of the city lies, is located in the vicinity of the river flowing through the Kathmandu Valley. The geology of this region, with fine and thick alluvial deposits, is prone to amplifying seismic waves, so the effects of an earthquake can be exacerbated, leading to greater damage.
The 2015 Gorkha earthquake destroyed almost all the world heritage monuments in the Durbar Square of Kathmandu, as well as the 183-year-old nine-storey Dharahara tower at the center of Sundhara (Figure 5). A similar situation was observed in the Bhaktapur Durbar Square, situated about 12 km southeast of Kathmandu (Figure 6). During a visit to the Swayambhunath Buddhist temple (also known as the Monkey Temple) (Figure 7), some important parts of the temple and one of its adjacent towers were found to be destroyed.
Logistics and Disaster Managements
During the recent decade, many Nepalese teams and specialists have actively participated in various national and international training programs, studies, and workshops. The National Society for Earthquake Technology of Nepal (NSET) has published many training programs and studies on different aspects of disaster risk reduction and management, such as the Nepal earthquake risk management program, disaster preparedness for safer schools, comprehensive risk assessment and action planning, technical assistance to municipalities in building code implementation, and a solid waste management project in Dharan municipality. A comprehensive list and details of these programs and studies are available online at www.nset.org.np. Such admirable programs and activities have helped Nepal to deal with disaster risks and improve preparedness. However, considering some of the catastrophic evidence and consequences of the 2015 earthquake, the country still faces challenges such as inattention to building codes and unplanned urbanization. Moreover, modern professional Nepalese NGOs remain a small minority compared with the 26.5 million population of this disaster-prone country.
Following the 2015 earthquake, meetings were held with the authorities and the Water Organization of Kathmandu to review ongoing emergency management performance. Although Nepal's National Strategy for Disaster Risk Management was adopted in 2009 and several governmental authorities are involved in this context, organizational weaknesses and poor resources made it practically impossible to deal with such a major earthquake efficiently. In the first two weeks after the earthquake, there was an almost complete lack of well-trained national relief teams and appropriate equipment for such an immediate emergency situation. However, Nepal's well-organized army played a key role, as it was ordered by the government to cooperate in the relief efforts. This was arguably the best course of action for the country under the earthquake conditions.
Foreign relief teams played an important role in emergency management. The first team to arrive in Nepal was from India (the neighboring country), followed by teams from China, the USA, Japan, France, Germany, England, Indonesia, and Turkey. During the earthquake response, the UN played a very important role in coordinating the relief teams. Based on the authors' observations, there was great discipline and organization in the UN office in Kathmandu (UN House), which was responsible for managing the large-scale relief operations in the earthquake-affected regions. At Abu Khaireni, south of the Gorkha district, the United Nations World Food Programme (WFP) had established a logistics camp to control and distribute donations such as food, health packages, and emergency shelters to the earthquake-stricken area (Figure 8). This logistics camp was also responsible for coordinating the headquarters of the national and international agencies and NGOs by holding a meeting between relief teams and response clusters at 8 o'clock each morning. Despite the good discipline among the working groups, there was a shortage of vehicles to transport the donated supplies. Only one helicopter and 20 to 25 trucks were employed per day to transfer relief packages to the epicentral area, and even the trucks were not always available regularly. According to the head of the camp, during the first 72 hours of the emergency period, one helicopter and 40-50 trucks were needed per hour, while after that period, it was estimated that more than seven times this amount of equipment was needed in order to deliver aid and begin reconstruction and rehabilitation in the affected areas. Meanwhile, the Nepalese government's facilities in this context were limited, and it was not possible to increase capacity and supporting equipment.
In addition to the shortage of transport capacity, such as helicopters and trucks, the intercity roads were of low quality and minimal standards, so that covering the 140-kilometer distance from Kathmandu to Gorkha took about 5 hours. Due to the failure of communications in the first moments of the earthquake, it took an hour to send the first SOS messages and help requests from the epicentral region to Kathmandu, and four hours for the first aid to reach the city of Gorkha and adjacent villages.
Regarding media and communications, some points are highlighted: the first news about the earthquake and its damage was released around the world via the internet, through photos and text messages published by Nepalese social network users in the first moments and minutes of the tremor (Figure 9).
International media played an important role in covering the news and documenting almost every moment of this event around the world (15).
During this earthquake, the Nepalese people reacted very sensitively to the Indian media's coverage. Nepalese regarded the Indian media's coverage as self-promoting, offensive, and superficial, and demanded that the media go back home, using banners installed around Kathmandu (and the hashtag #IndianMediaGoHome on social networks).
Disaster managers are expected to face several post-disaster challenges in the months following the earthquake. Remaining debris should be completely cleared to avoid attracting harmful animals and disease-spreading insects. Half-ruined buildings should also be demolished, since they can be a death trap for those who do not notice the possibility of their sudden collapse in subsequent aftershocks. In addition, with respect to the cold winter and mountainous conditions of Nepal, disaster managers should provide and organize all the basic amenities and needs for the affected people, especially those at higher risk such as children and the elderly. These logistics and basic needs can be prepared through coordination with the neighboring countries, China and India. Simultaneously, a preliminary assessment of the damage to all affected villages and urban areas should be finalized to provide an operational plan for the distribution of aid.
Responses
Following the April 25, 2015 earthquake, many national and international search and rescue (SAR), relief, and emergency response clusters and NGOs arrived in Nepal. The first foreign relief team was the Indian group, which landed in Kathmandu on the afternoon of the mainshock day (Table 2).
Based on the observations in the first two weeks after the earthquake, about a quarter of the affected people had good access to facilities such as safe drinking water, warm food, and basic medicines. Among the visited cities, the capital of Nepal, Kathmandu, had suffered more damage, with an asymmetrical pattern. In the following, different aspects of the emergency settlement within the visited cities and villages are described. The focus of the response shifted outside of Kathmandu to cover the most affected districts.
April 2015
In the first four days, the relief capacity provided by India was dominant in the early stages of the response: 13 aircraft and 7 helicopters, buses and trucks carrying medical supplies and relief items, and a mobile hospital had been dispatched since the earthquake hit on Saturday. According to media reports, the Government of India sent 22 tonnes of food, 50 tonnes of water, and 10 tonnes of blankets, together with other relief items.
Despite the heavy rain, the first UNHAS helicopter flight delivered food and shelter items from Kathmandu to Gunda VDC in the Gorkha District.
April 2015
The cluster provided transport to support the establishment of a base camp to be used by first responders. Transport was provided to deliver shelter kits from Saurpani VDC to Balua VDC in Gorkha District.
Search and Rescue (SAR)
An Indian Super Hercules aircraft, dispatched at 6:00 pm local time, carried the first foreign search and rescue (SAR) and relief team to arrive. Approximately 120 MT of food was available in the country. The food cluster dispatched food assistance to Gorkha and Dhading using existing in-country food stocks and was also organizing air support (via two helicopters) to get food to areas unreachable by road transport.
28 April 2015: Around 30 MT of high-energy biscuits and some fortified food (rice-soya blend) were received from Dubai. Sindhupalchowk and Nuwakot were the priority districts for the cluster.
On 29 April, distribution of 100 metric tons of food began in the Gorkha and Dhading districts. Two helicopters were available to transport food to areas inaccessible by road.
April 2015
Two helicopters were on standby in the Gorkha and Dhading districts to deliver additional food assistance in hard-to-reach areas. 30 April 2015: The five core response interventions (breastfeeding, complementary feeding, therapeutic feeding for children with SAM, supplementary feeding for children with MAM, and micronutrient supplementation) started in the affected districts. 835 metric tons of food was delivered to 11 districts: Bhaktapur, Dhading, Dolakha, Kathmandu, Lalitpur, Gorkha, Lamjung, Rasuwa, Ramechhap, Nuwakot, and Sindhupalchowk.
1 May 2015: The Cluster agreed to standardize the food packets distributed by all partners, which will include 400 g of rice, 60 g of lentils, 25 g of oil, and 7.5 g of salt per person per day.
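As a rough planning aid, the standardized ration above translates directly into bulk tonnage requirements; the following back-of-the-envelope script (our illustration; the camp size and duration are made-up inputs) does the arithmetic.

# Illustrative calculator for the standardized daily ration
# (400 g rice, 60 g lentils, 25 g oil, 7.5 g salt per person per day).
RATION_G = {"rice": 400, "lentils": 60, "oil": 25, "salt": 7.5}

def tonnes_needed(people, days):
    """Total metric tonnes of each commodity for `people` over `days`."""
    return {item: grams * people * days / 1e6 for item, grams in RATION_G.items()}

# e.g., a hypothetical camp of 10,000 people supplied for 30 days:
print(tonnes_needed(10_000, 30))
# -> {'rice': 120.0, 'lentils': 18.0, 'oil': 7.5, 'salt': 2.25}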
Water, Sanitation, and Hygiene
The WASH cluster provided 20 tanks (30,000 L) that would reach 1500 people in 3 camps in Kathmandu Valley.
April 2015
The WASH cluster agencies agreed to provide additional water tanks to all 16 campsites.
ENPHO was requested to monitor water safety compliance. The cluster identified 11 priority districts with supply lists.
KUKL started working to restore water supply, supplemented with water tanks.
Government assessment teams were deployed to eleven affected districts with support from cluster agencies; meanwhile, the distribution of emergency WASH supplies (hygiene kits) was limited in Dhading and Bhaktapur. 27 April 2015: The Cluster required 25,000 hygiene kits, 20,000 tarpaulins, 40 tanks of 2,500 liters, two generators for the operation of deep boreholes for water supply, 20,000 packets of aqua tabs, household water storage containers, additional emergency pumps, and equipment.
Cluster members put up 90 toilets in temporary camps and distributed 200 hygiene kits in Bhaktapur district.
Piyush (water purification drops) was distributed by several partners in the affected areas and a water treatment plant was prepositioned.
Water tankers started the distribution of water supply in the temporary camps.
Partners distributed 100 hygiene kits in Sindhupalchok, built temporary toilets in Tundikhel, and provided WASH supplies for 500 families in Sinamangal. Aqua tabs and hygiene kits were dropped by helicopter to seven remote villages in Dhading.
April 2015
To this date, the Cluster had provided 11,552 individuals with access to a sufficient quantity of water for drinking, cooking, and personal hygiene in the Kavre and Kathmandu districts. The Cluster also provided a total of 100 cu.m of water supply in Tundikhel and Kirtipur. Post-earthquake diseases were a concern. Immediate priorities were managing dead bodies and injured people (many head and spinal injuries required airlifting). Diarrhea was an issue in the Kathmandu Valley due to exposure to the elements. Four (national) teams were sent to Gorkha, where it was estimated that in some areas 80 percent of houses were gone, and one international medical team was mobilized to Dhading district to support health response efforts.
There was a need for surgeons, orthopedists, and paramedics, as well as logistics support.
World Health Organization (WHO) prepositioned surgical kits which were distributed.
USAID and DFID had medical teams in place, as did a UNICEF WASH team.
Only 27 of the 40 drugs and 70 consumables on the free drugs and consumables list were available. Drugs and consumables could be supplied in UNICEF emergency health kits, with one kit serving around 10,000 people for three months.
The Health Cluster mobilized ten tents for the MoHP and central-level hospitals and delivered four IEHKs to the MoHP. Five surgical kits were distributed to different hospitals, and 450 body bags were handed over to the Nepal Army.
April 2015
The WHO provided funds and an emergency team that coordinated the health response.
The Government requested that any foreign medical team that had not yet arrived in Kathmandu stand by.
April 2015
More than 20 Cluster partners supported the response by providing specialized personnel, medical supplies, body bags, and water filters and purification materials, and by setting up and providing basic construction materials for field hospitals.
Injured people from Sindhupalchowk and Dhading were airlifted to Dhulikhel hospital and field workers were deployed to more remote affected areas.
Foreign medical teams set up temporary hospital facilities in the affected areas (e.g., the Pakistani Army ran a temporary hospital in Bhaktapur).
Medical teams and field hospitals were ready and waiting for the Government's advice to dispatch if and when needed, and reproductive health kits were provided to address the needs of 90,000 people.
Temporary health services were provided in makeshift tents outside the district headquarters in the worst affected areas.
Surveillance of acute diarrhea was established in the 16 Kathmandu camps and affected districts.
April 2015
Field hospitals were established in Dhunche (Rasuwa District), Chautara (Sindhupalchowk), and Bidur (Nuwakot District). The Cluster provided support to establish a surveillance system for epidemics. 1 May 2015: The Government investigated reports of a diarrhoeal outbreak in an IDP camp on the southern outskirts of the Kathmandu Valley. Samples were collected and sent for laboratory testing.
Emergency shelters
In all of Nepal's earthquake-stricken cities, appropriate locations for setting up emergency shelters were found in two ways. a) If the damaged private houses had not collapsed fully and it was possible to bring the essential appliances outside, the owners tended to stay beside their houses, setting up their temporary shelters next to them. Most of these emergency temporary shelters were merely plastic sheets that had been distributed throughout the cities; people set them up themselves as tents without side walls, using bamboo reeds and wooden beams salvaged from their houses (Figure 10). A large number of these emergency shelters were observed to lack basic hygiene amenities such as toilets and bathrooms. The residents took refuge in these tents only at night or during rain. Food and clean drinking water were procured by the affected people themselves with extreme difficulty, and there was no humanitarian assistance in these shelters. Figure 10. Handmade temporary tents that owners had made using plastic sheets, bamboo reeds, and wooden beams beside their houses in urban areas.
b) If the houses were completely destroyed and it was not possible to recover intact appliances, or if, due to the specific circumstances of the area (narrow alleys and very small houses), it was not possible to find an open space for safe emergency shelters and tents, people tended to migrate to their relatives' houses in other regions or to stay in public camps in the cities. In Tundikhel Parade, Nepal's Army was responsible for managing a camp, and a Japanese team was responsible for providing health services. In Bhaktapur, a public emergency camp was set up in a small park, in which a German team named NAVIS was fully responsible for managing all aspects of the camp. In both of the public emergency camps, in Kathmandu and Bhaktapur, two types of tents donated by China and India were used. The Chinese tents were observed to have better quality and design than the Indian ones. All the Chinese tents were made of resistant steel frames with a two-layer cover, an outer waterproofing layer and an internal thermal insulation layer, whereas the Indian tents had a single-layer canvas cover. With regard to weather conditions in Nepal, such as cold winters and heavy rainfall, the quality of the tents used is very important for the residents of the camps. Although the Chinese tents were of very good quality, they were unfortunately not set up correctly in Kathmandu, so rainwater penetrated the tents, and all parts of the tents as well as the residents' clothes got wet (Figure 11). In the affected villages of the Gorkha district, the damage was estimated to exceed 80%. During our visit, there were no emergency shelter tents, and only some plastic covers had been distributed instead of tents. In order to set up temporary shelters in these villages, some people had made frameworks from bamboo trunks covered with plastic sheets. Most of these shelters lacked side walls and would simply be blown away by the first gust of wind. Some of the villagers skilled in carpentry and masonry tried to build very basic cottages using materials remaining in the debris of their buildings. Most of these houses had a wooden skeleton with bamboo straw sidewalls. Doors and windows of the destroyed houses were reused as the doors and windows of the new houses, and corrugated metal sheets or plastic coverings were used for the roofs (Figure 12). In none of these villages were there appropriate tents such as those installed in the cities of Kathmandu and Bhaktapur.

Providing treated water

Access to treated water is one of the chronic problems in Nepal. About 50% and less than 30% of the population has good access to treated water during the rainy and dry seasons, respectively. After the earthquake, treated water was distributed daily by local and international NGOs among the affected people of Kathmandu. According to a senior official of the water organization in Kathmandu, the country is only able to provide one-third of the needed treated water during the annual rainy season and only a quarter of it in the dry seasons. Although the depth of the groundwater level in some parts of Kathmandu was measured to be about 3-6 meters (Figure 13), this water is not usable due to pollution, especially the presence of heavy metals. Thus, treated water is distributed throughout the city by tankers. In the two public emergency camps of Kathmandu and Bhaktapur, the supply and distribution of treated water were performed by German and French teams.
In Kathmandu, drinking water was treated by a German team at the premises of the water organization in central Kathmandu and carried to the public camp by tankers (Figure 14). The German team had installed a water filtration system at Kathmandu's Water Organization which could provide 10,000 liters of clean water per hour for public use. The German specialists stated that they could increase the capacity of their water filtration system to 120,000 L/day, but there were not enough facilities to distribute even the 10,000 L/hour of clean water already produced. Kathmandu's Water Organization had promised the German team 5 trucks to distribute clean water daily, but in practice fewer than 5 trucks were available. Even without an earthquake, Kathmandu is a city whose water supplies, extracted from wells, are heavily polluted. Moreover, about 360 million liters of clean water are needed per day, while only 130 and 90 million liters of clean water are available daily in the rainy and dry seasons, respectively. In Bhaktapur, water was extracted from in-situ wells (Figure 15) and was treated and distributed by the German team. In rural areas, although the water distribution networks were very basic and public water taps were often used throughout the villages, this primary water distribution network was fortunately not seriously damaged and still provided service. Therefore, water supply and distribution in rural regions was not the primary priority of crisis management.
Providing health services/toilets and bathrooms
In the camp in Kathmandu, residents had to use the public restrooms of Tundikhel Parade, although these facilities lacked appropriate quality and standards. In the camp in Bhaktapur, two emergency toilet facilities were initially set up but were closed during the first week of use, due to their poor location (upslope of the camp), which allowed wastewater to penetrate into the camp. After a while, the facilities installed for tourists were used instead (Figure 16). After an earthquake, one of the basic needs of the affected groups is access to proper facilities for washing and bathing. Lack of attention to these basic needs may not have tangible consequences during the first two weeks, but over time the incidence of skin diseases, especially infectious diseases such as fungal infections, poses serious, costly, and time-consuming problems for crisis managers. Unfortunately, there were no bathrooms in either camp, in Kathmandu or Bhaktapur, and families had to wash their children in the open air (Figure 17). In addition, no toilets or bathrooms were set up in the visited villages, and most villagers had to share the rural sanitary facilities that had remained intact after the earthquake.
Collection and disposal of wastewater and sewage
In both camps, in Kathmandu and Bhaktapur, toilet wastewater was drained into absorption wells. These absorption wells had been designed and dug for the sanitary facilities. Considering the number of camp residents, these temporary facilities would soon be out of service; thus, camp managers should plan for new absorption wells or water treatment devices. There was no sewage system to collect the wastewater generated from washing in the camp in Kathmandu, and this wastewater was left standing in the area as surface water. It was full of mosquito larvae and would likely trigger disease outbreaks in the camps during the following months. In the camp in Bhaktapur, better quality services were provided.
Health and medical needs
In terms of environmental health, the camps were entering a high-risk phase just two weeks after the earthquake. Standing surface wastewater, the lack of hygienic bathrooms, the lack of periodic spraying to control insects, and the lack of basic facilities for washing clothes and dishes could cause serious problems in the camps.
The free provision of medical services, especially for women, as well as the distribution of birth control devices, was an important issue. In both camps, there were special tents for medical needs where German and Japanese doctors took care of patients.
In the villages, the distribution of the basic amenities of life was neglected by crisis managers, and the distribution of hygiene and medical supplies was very unfavorable, falling below the basic necessities of life. This may lead to widespread disease in the affected areas within a few months, affecting malnourished children first and intensifying the respiratory diseases prevalent in the region.
Food supply
In both camps, in Kathmandu and Bhaktapur, the production and distribution of hot food were performed by local and international NGOs (Figure 18). Although food was distributed freely in the camps, families were often reluctant to receive it and preferred to cook in their tents. Therefore, the camp managers started to distribute raw materials for families to cook themselves.
Apart from the lack of interaction and coordination in managing the distribution of donations in the affected rural areas, food, especially rice and cereals, was fortunately distributed well in the affected villages. The volume of incoming aid was considerable, and most of the donations were delivered to the UN camp (established at the beginning of the side road that branches off the main Kathmandu-Pokhara road toward Gorkha) to be distributed throughout the affected region. Some of the donations sent directly to the region by donors were taken over by the Nepal military at the entrance to Gorkha and distributed in the damaged villages.
Providing basic amenities for life
Given that all the camp residents in Kathmandu and Bhaktapur were families whose houses had been completely destroyed, providing the basic amenities of life for them should certainly have been a priority for crisis managers, yet this was not evident in the first 12 days after the earthquake. Waterproof mats, blankets, warm clothes, underwear for women and children, and basic cookware are the main priorities and should be supplied as soon as possible (Figure 19). There is also a complicated and dangerous wire system transmitting electric power in Kathmandu (Figure 20). In one emergency shelter, wooden beams remaining from the roofs were used for the main framing of the unit, and the remaining bricks were used for the walls. For the entrance door and the roof of this unit, the free plastic and canvas coverings distributed in the area by the government were used. The ability to withstand harsh conditions in mountain areas, particularly cold weather and torrential rains, is one of the advantages of such units (Figure 21).
Proposal: Temporary accommodation
An emergency shelter is the first and most important need of a family affected by an earthquake. Such shelters must be able to provide a secure place for survivors to live until their damaged houses are reconstructed.
Since Nepal is an economically poor country and the earthquake-affected people often live in rural areas with poor economic conditions, any emergency shelter must be designed at low cost. The cheapest materials one can find in these areas after an earthquake are those that remain unharmed in the building rubble.
It is worth noting that most buildings, especially in rural areas, have walls built of stone and brick, and the roofs often have wooden beams with metal or gravel roofing. Having analyzed the building rubble in the earthquake-stricken areas of Nepal, an emergency shelter was designed at the Earthquake Hazards Reduction Society of Iran. The materials needed for the construction of this shelter are collected from the rubble of destroyed buildings.
Conclusions
On 25 April 2015, an intense Mw 7.8 earthquake struck central Nepal, inflicting heavy damage on structures and infrastructure and causing losses of life and property. Nepal's infrastructure is mostly traditional and weak, the result of a non-sustainable development process. Thus, Nepal can be considered emblematic of the vulnerability and resilience of developing countries facing natural disasters, especially earthquakes. A similarly complicated situation was seen in the 12 January 2010 M7.0 Haiti earthquake, which claimed about 300,000 of the 2 million people of Port-au-Prince.
In this study, based on the visit of the first and second authors to the epicentral region, including rural areas and the cities of Kathmandu, Bhaktapur, Gorkha, and Pokhara, during 6 to 11 May 2015, the state of disaster management and logistics was assessed and the emergency and disaster management measures were evaluated. It appeared that nationally organized emergency plans could not be implemented efficiently in the case of this major disaster. The United Nations coordinated the disaster management efforts. In the context of emergency responses, international teams and NGOs performed well, while at the time of our visit, problems such as the distribution of treated water, the lack of enough helicopters to deliver relief packages, repeated power outages, health problems in the temporary shelters (a shortage of showers and uncontrolled wastewater), and earthquake debris still left at demolished sites remained evident. Finally, some key lessons that can be learned from the earthquake were mentioned, and we proposed a new design of emergency tents that can be used in the earthquake-stricken regions of Nepal in the coming months. | 2020-04-02T09:21:48.376Z | 2020-01-10T00:00:00.000 | {
"year": 2020,
"sha1": "f8ad72ad6d0eed01bfd57476eaae2df9e0172ee4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18502/jder.v3i1.2564",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5923e83eb7071368f3f2d4714dab433b29ba2b88",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Geography"
]
} |
14915778 | pes2o/s2orc | v3-fos-license | Observations of the Binary Microlens Event MACHO-98-SMC-1 by the Microlensing Planet Search Collaboration
We present the observations of the binary lensing event MACHO-98-SMC-1 conducted at the Mt. Stromlo 74" telescope by the Microlensing Planet Search (MPS) collaboration. The MPS data constrain the first caustic crossing to have occurred after 1998 June 5.55 UT and thus directly rule out one of the two fits presented by the PLANET collaboration (model II). This substantially reduces the uncertainty in the relative proper motion estimate for the lens object. We perform joint binary microlensing fits of the MPS data together with the publicly available data from the EROS, MACHO/GMAN and OGLE collaborations. We also study the binary lens fit parameters previously published by the PLANET and MACHO/GMAN collaborations by using them as initial values for $\chi^2$ minimization. Fits based on the PLANET model I appear to be in conflict with the GMAN-CTIO data. From our best fit, we find that the lens system has a proper motion of $\mu = 1.3 \pm 0.2$ km s⁻¹ kpc⁻¹ with respect to the source, which implies that the lens system is most likely located in the Small Magellanic Cloud, strengthening the conclusion of previous reports.
Introduction
The Microlensing Planet Search (MPS) Project monitors microlensing events discovered in progress by the EROS, MACHO, and OGLE experiments in search of the microlensing signatures of planets orbiting faint lens stars or of "non-standard" microlensing light curves, which can provide an additional constraint on the distance and mass of "dark" lens systems. The MPS project primarily monitors lensing events toward the central regions of the Galaxy, where microlensing events are most numerous. However, "non-standard" events detected toward the Magellanic Clouds present a unique opportunity to learn about the composition of the dark halo that dominates the mass of the Milky Way, and such events are observed at high priority. The binary microlensing event MACHO-98-SMC-1 was one such case.
The measurements of the microlensing optical depth towards the Large Magellanic Cloud (LMC) indicate that there is a previously unknown "dark lens population" toward the LMC (Alcock et al. 1997a). If the microlensing population is dominated by Galactic halo objects, the time scale of the microlensing events indicates a typical mass of ∼ 0.5 M⊙, which may correspond to low mass stars, white dwarfs, or primordial black holes (Nakamura et al. 1998). A large population of low mass stars or white dwarfs in the Galactic halo would likely have other observable effects, and it has been speculated that the LMC microlensing events are due to normal stars in the LMC itself (Sahu 1995). The possible confusion between LMC self-lensing and lensing by Galactic halo objects derives from the fact that the distance and the mass of the lensing objects cannot be directly measured for most microlensing events. For a "standard" microlensing event, the only constraint on the three unknowns of distance, velocity, and mass of the lens system comes from a single observed quantity, the "Einstein ring radius crossing time" t_E.
In a caustic crossing binary lensing event, one can measure one more independent parameter, namely the "source radius crossing time", t_*, and thereby estimate the relative proper motion µ of the lensing object with respect to the source star by independently determining the angular size of the source star from its brightness and color. A measurement of the relative proper motion, µ, allows the determination of the angular Einstein ring radius, θ_E = µ t_E. Once θ_E is known, the mass of the lensing object is expressed as a simple monotonic function of the distance to the lens (if the distance to the source is known). If D_ℓ and D_s are the distances to the lens and the source star, and δ ≡ D_ℓ/D_s, then

M = (c^2/4G) θ_E^2 D_s δ/(1 − δ) = (c^2/4G) µ^2 t_E^2 D_s δ/(1 − δ).   (1)

D_ℓ is not known, but it is strongly correlated with the proper motion, µ. For example, if we take our best fit value of t_E = 70.5 days and assume D_s = 60 kpc and µ = 1 km s⁻¹ kpc⁻¹, then the lensing object will be a binary in the SMC with the total mass M ≈ 0.36 M⊙ for D_s − D_ℓ = 2 kpc. For a typical halo lens we expect D_ℓ ≈ 10 kpc and a transverse velocity of ≈ 200 km s⁻¹, assuming a standard isothermal sphere halo model (Binney & Tremaine 1987). This yields µ ≈ 20 km s⁻¹ kpc⁻¹ for a typical halo lens (which would imply a lens mass of M = 0.81 M⊙ from eq. 1). Of course, our measurement of µ should be compared to the predicted µ distributions for halo and SMC lenses. This has been done for some simple SMC and halo models by Graff & Gardiner 1998; Albrow et al. 1998; Alcock et al. 1998; Honma 1998, and their results indicate that for most values of µ, either a halo or SMC lens is strongly preferred. However, depending on the halo and SMC models used, there is an overlap region at µ = 2-4 km s⁻¹ kpc⁻¹ which is marginally consistent with either a halo or SMC lens at the 2-3σ confidence level. (Honma (1998) also points out a selection effect that will tend to bias µ measurements toward smaller values.) In the case of MACHO-98-SMC-1, model II of the PLANET collaboration (Albrow et al. 1998) yields µ = 2 km s⁻¹ kpc⁻¹, which does not allow a definite determination of the lens location in the halo or in the SMC (Honma 1998).
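As a quick numerical check of eq. (1), the short script below (ours, not the MPS fitting code) evaluates the lens mass for the SMC self-lensing example quoted above; the physical constants are standard values, and the result reproduces the quoted M ≈ 0.36 M⊙ to within rounding.

import math

# Evaluate eq. (1): M = (c^2 / 4G) * (mu * tE)^2 * Ds * delta / (1 - delta).
C = 2.998e8          # speed of light (m/s)
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
KPC = 3.086e19       # meters per kiloparsec
DAY = 86400.0        # seconds per day
MSUN = 1.989e30      # solar mass (kg)

def lens_mass(mu_kms_kpc, tE_days, Ds_kpc, Dl_kpc):
    mu = mu_kms_kpc * 1e3 / KPC           # proper motion in rad/s
    theta_E = mu * tE_days * DAY          # angular Einstein radius (rad)
    delta = Dl_kpc / Ds_kpc
    M = (C**2 / (4 * G)) * theta_E**2 * (Ds_kpc * KPC) * delta / (1 - delta)
    return M / MSUN

# SMC self-lensing example from the text: mu = 1 km/s/kpc, tE = 70.5 d,
# Ds = 60 kpc, Ds - Dl = 2 kpc  ->  M ~ 0.35-0.36 Msun.
print(f"M = {lens_mass(1.0, 70.5, 60.0, 58.0):.2f} Msun")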
The main features of a binary lensing event are determined by the trajectory of the source with respect to the caustic curve in the source plane, which indicates the location of the source relative to the lens system projected to the position of the source. The caustic curve is where the number of images of the source changes by two. In binary lensing, the caustic curve is made of one, two, or three closed curves, and the number of images is 5 inside the closed curves and 3 outside. The caustic curves for MACHO-98-SMC-1 (according to the MPS fit) are shown in Figure 1. When the source moves inside one of these caustic curves, two new images are created, and the magnification of these new images is singular at the point of the caustic crossing. Because of this discontinuity (intrinsic width zero), the finite angular size of the source star is necessarily resolved during a caustic crossing. At the same time, this discontinuity makes it difficult to observe the first caustic crossing (going into the caustic). However, because the caustic curve is closed, there is always a second opportunity to monitor a caustic crossing once the first caustic crossing has occurred, and the time of the second caustic crossing (exit from the caustic) can be predicted through real-time data reduction and binary lens fitting as the source proceeds inside the caustic. The timely pre-caustic-crossing announcements from the MACHO/GMAN group (Becker et al. 1998; Bennett et al. 1998a) allowed intense monitoring of the second caustic crossing of MACHO-98-SMC-1 by the microlensing community around the clock from all three (temperate) continents of the Southern Hemisphere (Afonso et al. 1998; Albrow et al. 1998; Alcock et al. 1998). This resulted in a light curve which is well sampled in the second caustic crossing region.
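To illustrate the caustic geometry described above, here is a minimal inverse ray-shooting sketch (our illustration, not the MPS modeling code): a uniform grid of image-plane rays is mapped through the binary lens equation and binned in the source plane, where the ray density approximates the magnification and the caustics appear as bright ridges. The mass ratio and separation below are placeholders, not the fitted MACHO-98-SMC-1 values.

import numpy as np

# Inverse ray shooting for a binary lens, in units of the Einstein ring
# radius R_E. Illustrative parameters only (not the MACHO-98-SMC-1 fit).
eps1, eps2 = 0.6, 0.4                  # fractional masses (eps1 + eps2 = 1)
a = 1.0                                # lens separation in R_E
x1, x2 = -eps2 * a, eps1 * a           # lens positions; center of mass at origin

n = 1200                               # rays per axis on the image-plane grid
half = 2.5                             # image-plane grid spans [-half, half]
X, Y = np.meshgrid(np.linspace(-half, half, n), np.linspace(-half, half, n))

def deflection(X, Y, xl, eps):
    """Deflection of a point mass of fractional mass eps at (xl, 0)."""
    dx, dy = X - xl, Y
    r2 = dx * dx + dy * dy
    return eps * dx / r2, eps * dy / r2

a1x, a1y = deflection(X, Y, x1, eps1)
a2x, a2y = deflection(X, Y, x2, eps2)
SX, SY = X - a1x - a2x, Y - a1y - a2y  # lens equation: where each ray lands

# Bin the rays in the source plane; ray density per pixel, divided by the
# density an unlensed uniform grid would give, approximates magnification.
# (Rays originating outside the finite grid are missed, so the map slightly
# underestimates the magnification; increase n for a smoother, deeper map.)
bins, lim = 250, 1.5
H, _, _ = np.histogram2d(SX.ravel(), SY.ravel(), bins=bins,
                         range=[[-lim, lim], [-lim, lim]])
shot_density = (n / (2 * half)) ** 2   # rays per unit area in the image plane
pixel_area = (2 * lim / bins) ** 2
mag = H / (shot_density * pixel_area)  # caustics show up as bright ridges
print("peak magnification on the map: %.1f" % mag.max())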
According to our fit, the binary lensing event MACHO-98-SMC-1 was magnified by ≈ 70 times at the maximum of the second caustic crossing. Such extreme magnification is also useful in studying the properties of the lensed star (Lennon et al. 1996; Alcock et al. 1997c). In order to obtain an accurate model of the lensing event, which is necessary to determine µ, however, it is not enough to have only meticulous measurements of the second caustic crossing. The main contribution of the MPS data is to constrain the time of the poorly sampled first caustic crossing and directly rule out the "outlier" PLANET model II.

Figure 1. The caustic curves for MACHO-98-SMC-1 according to the MPS fit. The crosses indicate the locations of the lenses, and the straight line indicates the path of the source star with respect to the caustic curves. The red dots on the source star path indicate the location of the source at various dates given in June, UT. The distance scale for the axes is the Einstein ring radius, R_E. Note that the actual size of the source star is only about 0.0015 R_E, which is much less than the thickness of the curves in the figure.
MPS Observations and a Constraint on the First Caustic Crossing
The Microlensing Planet Search project has been allocated approximately 100 nights on the Mt. Stromlo Observatory (MSO) 1.9m telescope for the 1997 and 1998 Galactic bulge seasons. Ongoing microlensing events announced by the MACHO, OGLE, and EROS collaborations are monitored at intervals of 1-2 hours using the Monash Camera, a Cassegrain imager fitted with a SITe 15-micron 2048 x 4096 AR-coated thinned CCD. The data are reduced within a few minutes after they are taken using automated Perl scripts written by one of us (ACB), which call a version of the SoDOPHOT photometry routine. This allows the immediate discovery of any unusual microlensing features that might be in progress.
MPS made its first observation of event MACHO-98-SMC-1 about one day after the MACHO microlensing alert issued May 25.9 UT and continued its observations as a medium priority target. One of these observations was obtained at June 5.549 UT, which turned out to be the last observation prior to the caustic crossing. After the caustic-crossing binary lensing alert issued June 8.99 UT, MACHO-98-SMC-1 was upgraded to a high priority target. However, we were not scheduled on the MSO 1.9m until June 18, so our coverage of the event while the source was inside the caustic curve was minimal. On the 18th, the imager was available again, and the MSO staff kindly altered the telescope pointing limits to allow us to observe the SMC almost completely under the pole at an airmass of 3.2. We made the first observation at June 18.332 UT, about 40 minutes after the trailing limb of the star cleared the caustic (according to our best fit, which puts the second caustic crossing endpoint at June 18.304 UT). Although we missed the second caustic crossing, we kept MACHO-98-SMC-1 at a high priority to cover the "cusp approach" lightcurve feature. This is a rise to a gentle peak and subsequent decline that occurs as the source passes in front of one of the sharp "cusps" of the caustic curve (see Figure 1). Good coverage of this feature is important if we hope to constrain the global parameters of the lensing event. Unfortunately, due to poor (la Niña) weather, our coverage of the "cusp approach" is not very good.
The intense worldwide monitoring of the event was concentrated around the second caustic crossing, making it the best covered caustic crossing in microlensing history. However, a reasonable amount of data around the first caustic crossing is necessary to pin down the lens parameters. The OGLE observation at June 6.40 UT and the MACHO/GMAN observation at June 6.45 UT indicate that the first caustic crossing must have occurred by June 6.0 or so. A lower limit on the time of the first caustic crossing is set by the MPS observation at June 5.549 UT, which is the last observation before the first caustic crossing. The measured flux of this MPS observation is consistent with the slow variation of the lightcurve for a source approaching a binary caustic prior to the first contact of the caustic with the stellar limb. Thus, the first caustic crossing is constrained to have been completed within a window of ∼ 20 hours between June 5.55 and 6.40 UT.
Binary Lensing Analysis
A binary lensing event involves seven parameters. These include three parameters that also exist for single lens events: the Einstein ring crossing time, t_E, the "impact distance," u_min, from the origin of the coordinate system, and the time of the closest approach to the origin, t_0. We choose the lens system center of mass (c.m.) as the origin so that t_0 is the most reasonable generalization of the time of maximum amplification for a single lens. (The c.m. resides inside the caustic here; it always does when a ≤ √2.) This would also be the most convenient coordinate system if we were to consider the lens system orbital motion. There are three additional parameters intrinsic to a binary lens: the fractional mass, ǫ, of the first lens, the lens separation, a, and the intersection angle of the source trajectory with the lens axis, θ. (The first lens is the one on the left in Figure 1.) The final parameter is the source radius crossing time, t*, which is critical for the lens proper motion determination.
In addition to these microlensing parameters, we require additional parameters to describe the unlensed brightness of the source star in each pass band from each observing site (since the instrumental pass bands from different telescopes are never identical). Also, since microlensing events are found in crowded stellar fields, it is usually the case that the lensed source is blended with other unlensed sources that happen to fall within the same seeing disk. Thus, we require an additional parameter for the brightness of any unlensed sources which are blended with the lensed source. These parameters need not be included in the non-linear χ² minimization process, however, because the observed flux depends linearly on the brightness of the lensed star and its unresolved companions. Our χ² calculation routine automatically minimizes χ² with respect to these linear parameters for every set of intrinsic microlensing parameters that is considered. This makes our fitting routine converge to the best fit much more quickly than it would if these were included as nonlinear fit parameters. However, it also complicates the interpretation of our error estimates, because the error estimates for the blending parameters are calculated with the intrinsic lensing parameters held fixed.
When a source is inside a caustic curve, there are two extra images in addition to the three "normal" images, and when the caustic curve crosses the source star, the two extra images are only partial images joined together along the critical curve. The time it takes for the stellar diameter to cross the caustic, 2∆t, can be measured using only observations near the time of the caustic crossing. However, t * can be determined from ∆t only if we know the angle, φ, between the source trajectory and the caustic curve at the crossing: t * = ∆t sin φ. φ can only be determined by a fit to the entire microlensing lightcurve, so measurements of the caustic crossing alone are not sufficient to determine t * . It is possible to constrain t * without a determination of φ (Afonso et al. 1998), but this constraint may be very weak.
The modeling of a binary lensing event presents a number of difficulties. First, the caustic crossings mean that binary lensing lightcurves generically have very sharp features, and since the photometric measurements discretely sample the lightcurves, there can be large changes in χ 2 caused by small changes in the parameters that happen to move a caustic past the location of a data point. The singular nature of microlensing magnification also causes difficulties for the integrations necessary to calculate the microlensing magnification of a finite size source star and prevents the use of fast high order methods (Rhie & Bennett 1999).
Yet another difficulty with binary lens fits is that the location of the caustic crossing in the lightcurve depends in a complicated way on the microlensing parameters. The time of the caustic crossings can generally be pinned down to reasonable accuracy simply by inspection of the microlensing lightcurves, but it is difficult to translate this into a constraint on the microlensing parameters: ǫ, a, θ, u min , t 0 , and t E . However, since the times of the caustic crossings can readily be calculated for any set of parameters, it is possible to shift t 0 and rescale t E to put two caustic crossings at specified locations in time. We use such a procedure to replace the parameters t 0 and t E by the first and second caustic crossing times, t cc1 and t cc2 , for many of our binary lens fits.
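A minimal sketch of this reparameterization is given below, assuming a hypothetical helper `crossing_times` that returns the two crossing epochs in units of t_E, measured from t_0, for the current binary-lens geometry:

```python
def retime(params, t_cc1, t_cc2, crossing_times):
    """Choose (t0, tE) so that the model's two caustic crossings land
    at the requested epochs t_cc1 and t_cc2.  `crossing_times` is a
    stand-in that returns the crossing epochs tau1, tau2 in Einstein
    time units for the geometry (eps, a, theta, u_min) in `params`."""
    tau1, tau2 = crossing_times(params)
    tE = (t_cc2 - t_cc1) / (tau2 - tau1)  # rescale the Einstein time
    t0 = t_cc1 - tE * tau1                # shift the reference epoch
    return t0, tE
```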
The χ 2 minimization for our microlensing fits is carried out with the aid of the MINUIT routine (James 1994). The fitting proceeds in several stages. First, in order to find candidate global fits, we take the data sets and remove many of the data points from regions where the data highly oversample the lightcurve features in order to speed up the calculations in the early phases of the fitting process. We also remove all of the data points which resolve the caustic crossing so that the search for candidate global microlensing fit parameters can be done in the point source limit which typically speeds up the calculations by a factor of 10 or more. We then start a number of Monte Carlo parameter searches to find good starting points for the microlensing fits using MINUIT's SEEK routine. During the Monte Carlo parameter searches, the values of t cc1 and t cc2 are constrained to small time intervals which were determined by inspection of the individual lightcurves. This results in a number of candidate microlensing models which are passed to the second stage of the fitting procedure.
In the second stage of the fitting process, we include some of the data which resolve the caustic crossing and fit all of the candidate microlensing models again with a finite value for t*. This procedure converges to the final fit much more quickly than if all the data were used at this stage. Once the finite source effects are included, it is necessary to take the limb darkening of the source into account. For our preliminary fits, we have used a standard "linear" limb darkening model, but we have also used the "square-root" model advocated by Diaz-Cordoves & Gimenez (1992) at the stage of the final fits which use the full data set. The limb darkening coefficients were taken from Claret, Diaz-Cordoves, & Gimenez (1995) and Diaz-Cordoves, Claret, & Gimenez (1995).
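For reference, the two limb-darkening laws can be written as short functions; the coefficient values below are purely illustrative, not the Claret et al. coefficients adopted in the fits:

```python
import numpy as np

def linear_ld(mu, c):
    """Linear limb-darkening law: I(mu)/I(1) = 1 - c*(1 - mu)."""
    return 1.0 - c * (1.0 - mu)

def sqrt_ld(mu, a, b):
    """Square-root law of Diaz-Cordoves & Gimenez (1992):
    I(mu)/I(1) = 1 - a*(1 - mu) - b*(1 - sqrt(mu))."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - np.sqrt(mu))

# mu is the cosine of the emergence angle; at fractional radius r on
# the stellar disk, mu = sqrt(1 - r**2)
r = np.linspace(0.0, 1.0, 5)
mu = np.sqrt(1.0 - r**2)
print(linear_ld(mu, c=0.5))       # illustrative coefficient
print(sqrt_ld(mu, a=0.1, b=0.6))  # illustrative coefficients
```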
In addition to this procedure used to find new fits, we have also tried fits using initial conditions based upon the fits reported by the PLANET and MACHO/GMAN collaborations.
Previous Observations, Analyses, and Fits
Observations of MACHO-98-SMC-1 have been previously presented by the EROS, PLANET, MACHO/GMAN, and OGLE collaborations (Afonso et al. 1998; Albrow et al. 1998; Alcock et al. 1998; Udalski et al. 1998). The EROS observations from La Silla covered a significant fraction of the falling curve of the second caustic crossing through the caustic crossing "end point" and several hours beyond, and this was the first time that the linearity towards the "end point" was observed. At the "end point", the source star completely exits the caustic, and the additional two bright partial images vanish, causing the curvature of the light curve to change abruptly. The "end point" was estimated to have occurred at June 18.297 UT. From the linearity spanning 1.8 hours, the EROS collaboration suggested a constraint µ sin φ ≲ 1.5 km s⁻¹ kpc⁻¹. Since they reported data only from the night of the second caustic crossing, EROS was not able to determine the caustic crossing angle φ, so their constraint on the lens proper motion was weak. However, the EROS data has the best coverage of the caustic crossing "end point", which proves very valuable when combined with other data sets.
The PLANET collaboration monitored the event from shortly after the binary lens alert and had excellent coverage of the second caustic crossing peak turn-over from the SAAO 1m. They also measured the spectrum at the light curve peak from the SAAO 1.9m. They presented two binary lens fits, which we will refer to as PLANET-I and PLANET-II, that resulted in t* = 0.122 and 0.0896 days, respectively. The models PLANET-I and II differ by ∼ 58 in χ², which is formally a 7.6σ deviation. However, the χ² per degree of freedom for each was fairly large (2.37 and 2.73, respectively), and they argued that both fits should be considered viable (to account for unspecified systematic errors).
The MACHO/GMAN group reported their data from the Mt. Stromlo 1.3m and the CTIO 0.9m telescopes (Alcock et al. 1998) and presented a binary microlens fit to these data combined with the EROS data. Their fit differed from both PLANET-I and PLANET-II, and MACHO/GMAN suggested that both PLANET models might be inconsistent with the pre-caustic-crossing MACHO/GMAN data. Their estimate of the source radius crossing time was t* = 0.116 days. The CTIO 0.9m observations registered the caustic crossing "end point" at ≈ June 18.304 UT, which agrees with the EROS data reduced with SoDOPHOT (see Figure 4). The MACHO/GMAN fit indicates that the second caustic crossing peak amplification was ≈ 70, while PLANET-I indicates that the maximum amplification was ≈ 100. The main difference here is that PLANET-I implies a fainter source star with more of the baseline flux coming from unlensed stars.
The OGLE collaboration reported their data from Las Campanas (the 1.3m Warsaw telescope), which includes the first observation after the first caustic crossing at June 6.40 UT. They did not perform any microlensing fits, but they suggested that model PLANET-I is more consistent with the OGLE data than PLANET-II. They also suggested that the MACHO/GMAN fit may be off by 0.14 days for the first caustic crossing. In the MACHO/GMAN fit, the first caustic peak crossing occurred at ≈ June 6.24 UT, so the suggestion by the OGLE team corresponds to a first caustic peak crossing at ≈ June 6.10 UT. In model PLANET-I, the peak crossing time was ≈ June 6.08 UT, and thus the OGLE team concluded that the OGLE data is probably most consistent with model PLANET-I.
MPS fits, Analyses, and Comparison
In this section we present our binary microlensing fit results for the data set including the MPS data plus the publicly available MACHO/GMAN, EROS, and OGLE data, and we interpret the meaning of these results. We assume that the source is a single star which was lensed by a binary lens with no significant orbital motion.
The most obvious result of the MPS observations is that the PLANET-II model is ruled out. The MPS observation at June 5.55 UT indicates that the leading limb of the source star had not yet crossed the caustic. This is inconsistent with the PLANET-II model, which predicts the leading limb to cross the caustic at June 5.25 UT, the stellar center "caustic crossing time" at June 5.36 UT, and the first caustic crossing lightcurve peak at June 5.43 UT. Figure 2 shows a comparison of the PLANET-II fit to the MPS data. To put this into statistical perspective, we normalize the MPS data to the PLANET-II fit using the 34 other observations (which do give an acceptable fit to the data), and the PLANET-II prediction for June 5.55 UT exceeds the observed brightness by 29σ. Thus, the PLANET-II model is clearly ruled out. Note that in Figure 2 and in all subsequent plots, the MPS data have been binned into nightly bins for all nights with multiple observations, except for the night of June 18, where 16 observations have been grouped into 4 bins.
The MPS observation at June 5.55 UT, along with the OGLE observation at June 6.40 UT and the GMAN-CTIO observation at June 6.45 UT, constrains the caustic crossing to have occurred close to June 6.0 UT. The MPS fit to the combined data set provides an acceptable fit to the data near the first caustic crossing and indicates that the first "caustic crossing time" was June 5.91 UT; PLANET-I and MACHO/GMAN also seem consistent with these data within the limits of the poor coverage. Therefore, we will focus on a comparison between the MPS, MACHO/GMAN and PLANET-I fits, as well as the lightcurve details of the second caustic crossing, where we hope to reconstruct the "missing peak." (A future comparison with the PLANET data should test our ability to predict the features of the second caustic crossing peak from the other data sets, which do not sample the peak.)

Tables 1-4 summarize the results of the microlensing fits we have performed on the combined EROS/GMAN/MACHO/MPS/OGLE data set. The MPS fit is the fit generated by our fit search procedure as discussed above. The fits labeled "PLANET-I*" and "MACHO/GMAN*" are fits in which we started with the binary lens parameters reported by these groups as initial conditions. The columns labeled "PLANET-I" and "PLANET-II" report results for the fit parameters found by the PLANET collaboration; the only additional fitting was to find the best fit fluxes for the lensed star and its unresolved companions.

Fig. 2.-This figure shows a comparison of the MPS data to the PLANET-II fit. We have allowed the fluxes of the source star and any unlensed stars in the same seeing disk to take the values which give the lowest χ² value. The observation at June 5.55 UT indicates that the caustic crossing had not yet begun, contrary to the PLANET-II model prediction. The attempt to fit this point results in a "best-fit" curve which does not agree with most of the other data points.
The blend fractions or "fractional lensed luminosity" values listed in Table 3 require some explanation. These blend fractions have large uncertainties for many of the passbands because there are few or no observations when the source is not significantly magnified for most of the pass bands. The only tight constraint on the unlensed flux comes from the MACHO data, where there are more than 600 observations in both MACHO pass bands when the source is unmagnified. The f_s values in Table 3 can also depend on the seeing of the best images from each of the data sets. With routines such as DOPHOT, SoDOPHOT or ALLFRAME, the photometry is based upon the stars that can be individually identified in the best seeing frames. Thus, two data sets using nearly identical passbands can yield different f_s values if the seeing in the best seeing frames differs between the two data sets.

Table 1 shows the summary of the lens parameters and statistics. t_cc1 and t_cc2 refer to the first and second caustic crossing times, which are fit parameters for the MPS fits but not for the MACHO/GMAN or PLANET-I fits. The caustic crossing times appear to agree well between the different fits. The MACHO/GMAN and MPS fit parameters agree in general except in the mass ratio, but these fits differ more substantially from the PLANET-I fit. Of course, this is not very surprising, since the MACHO/GMAN and MPS fits are based on data sets that have a lot of overlap with each other but no overlap with the data that generated the original PLANET-I fit.
Much of the difference between the PLANET-I fit and the MACHO/GMAN and MPS fits can be traced to the fact that the PLANET-I fit indicates more blending. In other words, the lensed source implied by the PLANET-I model is fainter and has brighter unlensed neighbors than in the MACHO/GMAN and MPS models. This can be seen from the best fit blend fractions listed in Table 3. The fraction of the lensed light is f_s(V_m) ≃ 0.57 and f_s(R_m) ≃ 0.49 for the MACHO/GMAN and MPS fits of the MACHO data, while for the PLANET-I fit the values are f_s(V_m) ≃ 0.35 and f_s(R_m) ≃ 0.30. So, the MACHO/GMAN and MPS fits imply that the lensed source is about half a magnitude brighter than implied by the PLANET-I fit. It is interesting to note that the χ² difference between the MACHO/GMAN and MPS fits and the PLANET-I fit is seen only in the MACHO and CTIO data sets, which are also the data sets in which the unmagnified fit fluxes are the same for the different fits. For the EROS, MPS, and OGLE data, the unmagnified brightness of the blended stellar image is predicted to be substantially fainter for the PLANET-I fit than for the MACHO/GMAN and MPS fits. Thus, additional data from EROS, MPS, OGLE, and perhaps PLANET as well should help to distinguish between these fits.
The form of the fit curves near the caustic crossings depends on the assumed form of the limb darkening. Following the PLANET collaboration, the PLANET-I and PLANET-II χ² results reported here assume no limb darkening. For most of the fits that we've done, we have assumed the common "linear" limb darkening model, but the fit labeled MPS-sqrt was done using the square-root model of Diaz-Cordoves & Gimenez (1992), which is expected to be more accurate. The parameters used for each pass band are listed in Table 4, and they are appropriate for a star with an effective temperature of T = 8000 K and a surface gravity of log g = 4.5. (See Section 3.3 for a discussion of the properties of the source star.) The modeling of the lightcurve near the second caustic crossing peak is subject to some systematic uncertainty due to the features and limitations of the MACHO and EROS data which bracket the peak. The MACHO/GMAN paper noted that there is an apparent lightcurve deviation near June 17.7 that might be explained as a caustic crossing due to a faint companion to the source star. Another possible explanation might be systematic photometric errors. In either case, this deviation will add to the uncertainty in our prediction for the lightcurve during the missing peak of the caustic crossing. Another contribution to this uncertainty is the fact that the publicly available EROS data were all taken on the night of the caustic crossing. They include the last half of the caustic crossing, but there are no other lightcurve features visible in this data set. Thus, the modeling of the EROS data will be quite sensitive to possible errors in the limb darkening model. Because of these potential problems, we include an additional systematic error of ±0.1 for our measurement of t*.
The timing of the second caustic crossing is seen to be very close to the last pre-caustic crossing prediction from MACHO/GMAN: t cc2 = June 18.18 UT vs. the prediction of June 18.2 UT (issued via email on June 17).
The peak magnification of the caustic crossing is predicted to have occurred at June 18.055 for the MPS-linear fit and June 18.045 for the MPS-sqrt fit. The lightcurve peak assumed by PLANET seems to be earlier than this by ∼ 0.03 days which agrees with our prediction when we account for the systematic errors mentioned above.
As a way to judge the overall merit of the different lightcurve fits, we compare the fit χ 2 values for each of the models. The MPS-linear and MPS-sqrt χ 2 values differ by only 1.5 which is not statistically significant. The χ 2 value for the MACHO/GMAN fit is larger than the MPS-linear value by 21.3 which is formally equivalent to a 4.6σ deviation while the χ 2 value for the PLANET-I fit is larger by 85.9 or 9.3σ. Thus, the PLANET-I fit is clearly disfavored, but it is premature to dismiss it as we have not yet included the PLANET data itself in our fits. The inclusion of the PLANET data plus additional data from the other groups in our fits should resolve this question, however.
Source Star Characterization
In order to estimate the proper motion from the microlensing fits, we must estimate the angular radius of the source star. This can be accomplished with estimates of the stellar temperature, the brightness, and the amount of extinction. The brightness estimate depends on the amount of blending as determined by the binary microlensing fit, but the temperature and extinction can be estimated from the broad band colors and a spectrum. The PLANET collaboration has a spectrum from the SAAO 1.9m near peak magnification which indicates that the source star is an A star with T ≈ 8000 K. The color of the star has been estimated by PLANET to be V − I = 0.31 ± 0.02, while MACHO estimates V − R = 0.03 ± 0.03. These colors are somewhat difficult to reconcile, and we suspect that one or both color estimates may be subject to systematic errors larger than the estimates above. If we attempt to find a reasonable fit to both color estimates, then we must assume a relatively small amount of extinction to be consistent with the MACHO color and the PLANET spectrum. We take A_V = 0.12 ± 0.1.
From the MACHO photometric calibrations and the MPS fit, we estimate the unlensed magnitude of the source at V = 21.98, and if we use the PLANET photometric zero point, we get V = 21.91. We adopt V = 21.95 ± 0.15. The source star is expected to be a member of the SMC, but if the lens is in the SMC as well, then the source star is likely to be located on the far side of the SMC. Since it does appear that the lens is likely to be located in the SMC, we will assume a distance of 62.5 ± 2.5 kpc to the source. This yields an absolute magnitude of M_V = 2.85 ± 0.2. From the Bertelli et al. (1994) isochrones, we see that this is compatible with a metal poor ([Fe/H] = −1 ± 0.3) A star with an angular radius of θ* = (8.2 ± 0.8) × 10⁻⁸ arcsec, or R = 1.1 ± 0.1 R⊙, assuming a distance of 62.5 ± 2.5 kpc. Our best fit value is t* = 0.108 days (using the square-root limb darkening model), but this value is sensitive to uncertainties in the blending for the EROS data. The publicly available EROS data consist only of data taken on the night of the caustic crossing, and they have essentially only two features: a linear decline followed by a period of constant brightness. This means that if we fit only the EROS data with an unknown amount of blending, there will be a fit degeneracy that allows a change in the caustic crossing time scale to be compensated by a blending change. This will be constrained by the shape of the fit curve in other pass bands near the caustic crossing peak, but the MACHO data seem to show an anomaly near the peak. Because of these uncertainties, we add an additional 0.015 days as a systematic uncertainty to our measurement of t*. This yields µ = 1.31 ± 0.22 km s⁻¹ kpc⁻¹ and v = 82 ± 14 km s⁻¹. These are consistent with the µ and v estimates from the PLANET-I and MACHO/GMAN models, but substantially less than the proper motion predicted by the PLANET-II model (Albrow et al. 1998; Alcock et al. 1998).
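The quoted proper motion follows directly from µ = θ*/t*; a few lines of unit bookkeeping reproduce the numbers:

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # radians per arcsecond
KPC_KM = 3.0857e16                  # kilometres per kiloparsec
DAY_S = 86400.0                     # seconds per day

theta_star = 8.2e-8 * ARCSEC        # angular source radius [rad]
t_star = 0.108 * DAY_S              # source radius crossing time [s]

mu = theta_star / t_star * KPC_KM   # proper motion [km/s per kpc]
v = mu * 62.5                       # transverse velocity at 62.5 kpc
print(f"mu = {mu:.2f} km/s/kpc, v = {v:.0f} km/s")
# -> mu ≈ 1.31 km/s/kpc and v ≈ 82 km/s, matching the values quoted above
```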
Conclusions
The MPS data add a constraint on the first caustic crossing and rule out the PLANET-II model. Since the PLANET-II model was the only proposed model which indicated a relative proper motion significantly different from our value of µ = 1.31 ± 0.22 km s⁻¹ kpc⁻¹, this result significantly decreases the uncertainty in µ. As discussed previously (Afonso et al. 1998; Albrow et al. 1998; Alcock et al. 1998), this proper motion value clearly favors a lens in the SMC, and it does not require that the SMC be tidally disrupted, as seemed to be necessary for the PLANET-II model to make sense.
While our analysis clearly favors the MPS fit over the MACHO/GMAN and PLANET-I fits, it would be best to do joint fits with all of the available data before making a final judgment.
Particularly valuable would be the PLANET data and additional EROS data. One significant difference between the MPS and MACHO/GMAN fits and the PLANET-I fit is that the PLANET-I fit implies that the lensed source is more severely blended and is therefore significantly fainter. From Table 3, we see that the PLANET-I fit predicts that only 35% of the MACHO V_m band flux is lensed, while the MPS and MACHO/GMAN fits predict 58% and 56%, respectively. Future HST images of the lensed star should resolve the lensed star from its nearby unlensed companions and determine the correct blend fractions in the different pass bands.
While the observations of MACHO-98-SMC-1 have clearly established that the lens is in the SMC, the implications for the interpretation of the lensing excess seen by the MACHO Collaboration towards the LMC are not clear. The standard model of the LMC is that it is basically a disk galaxy inclined by about 27° from face-on to the line of sight. Gould (1995) showed that the microlensing optical depth of such a galaxy is constrained by its line of sight velocity dispersion. This suggests that the self-lensing optical depth of the LMC is quite small, but it is conceivable that the LMC disk is not the whole story. For example, Weinberg (1998) suggests that the tidal interactions of the LMC and the galactic disk might give the LMC a larger self-lensing optical depth, but it is not known if this suggestion is consistent with the observed line of sight velocity dispersion of the LMC, ≈ 20 km s⁻¹ (Meatheringham et al. 1988).
Unlike the LMC, the SMC is thought to be extended along the line of sight, and some estimates of the self-lensing optical depth of the SMC (Afonso et al. 1998; Alcock et al. 1998) are very similar to the measured microlensing optical depth of the LMC. However, a recent n-body model of the SMC predicts a somewhat smaller microlensing optical depth (Graff & Gardiner 1998), although this prediction, τ_SMC = 0.4 × 10⁻⁷, is larger than most predictions for τ_LMC. So far, two microlensing events have been detected toward the SMC: MACHO-98-SMC-1, discussed here, and MACHO-97-SMC-1 (Alcock et al. 1997b). It has been suggested that MACHO-97-SMC-1 might also be due to an SMC lens because of its long timescale (Palanque-Delabrouille et al. 1998). However, attempts to make this argument more quantitative have invoked the assumption that the lens is a main sequence star, which cannot be considered a consistent assumption in the context of the dark matter problem. There has also been one caustic crossing binary event seen towards the LMC (Bennett et al. 1996b), but the lightcurve sampling of this event was not sufficient to yield an unambiguous determination of the location of the lens system.
For MACHO-98-SMC-1, we have no such ambiguity because of the complete lightcurve coverage. We can conclude with high confidence that the lens system resides in the SMC. Since this is the only Magellanic Cloud event with a reliable location, we cannot reach any conclusion about the location of the other Magellanic Cloud events. Furthermore, the rate of binary lensing events discovered towards the Magellanic Clouds is only about 0.3 per year, so the current generation of microlensing surveys is not likely to solve this problem. Fortunately, there are plans for second generation microlensing surveys (Stubbs 1998) which should increase the microlensing detection rate towards the Magellanic Clouds by more than an order of magnitude. This will generate a large enough sample of microlensing events with distance estimates to resolve the puzzle presented by the microlensing results towards the LMC. MPS will contribute to this effort by expanding to include observations from the Boyden Observatory near Bloemfontein, South Africa in 1999.
"year": 1998,
"sha1": "521d47a990c74f6417003d435e040a23181c04ca",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/9812252",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "521d47a990c74f6417003d435e040a23181c04ca",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
Recognizing elderly people by analyzing their walking pattern using body posture skeleton
The increasing age of the population has become a significant concern internationally. During the COVID-19 pandemic, it has been seen that the most sensitive and affected class of the population is the elderly. It is therefore necessary to track the movement and behavior of old persons; this kind of monitoring could help in providing assistance in their time of need. Our objective is to develop an approach to classify elderly people using skeleton data for their assistance. The OpenPose algorithm is used here to detect human skeletons (joint positions) from video sequences and, with a sliding window of size N, to achieve a real-time posture recognition framework. Posture features from each extracted skeleton are then used to build a classifier for recognizing elderly people. We also introduce a new dataset that includes videos of old-person and young-person walks. The experimental outcomes reveal that the proposed method has achieved up to 98.45% training accuracy and 96.16% testing accuracy with a deep feed-forward neural network (FFNN) classifier, which asserts the effectiveness of the approach.
Introduction
According to the 2011 Population Survey (Velayutham et al. 2016), approximately 104 million people were aged 60 years or above, which motivates systems for detecting human presence and recognizing elderly people based on their movement. Such systems could ensure the safety and comfort of all the elderly people living on their own. The primary health problem of elderly people is that they are prone to falls, which lead to long-term injuries, fear, and even death in some cases. Falling incidents lead to fractures and psychological consequences that lessen their independence. According to the study (Visutsak and Daoudi 2017), 28-34% of older persons fall at least once a year, and 40-60% of those falls result in injury. Therefore, this paper proposes a practical assistive-technology-based surveillance system to identify young and old people in real-time video sequences. One of the popular pose estimation techniques, namely the OpenPose algorithm (Cao et al. 1812), is used here to derive skeletons in terms of posture joints. Since the body movements of younger and older people are different, individuals can be identified as younger or older based on their joint movements.
The contributions presented in this article are as follows: (a) We present an activity recognition framework that analyzes 2D skeletal data and classifies related actions. (b) We present skeleton pre-processing and feature extraction methods to extract relevant features from a sequence of skeletal data. (c) We perform a variety of experiments on a synthesized video dataset to assess the walking patterns of elderly people.
The proposed computer-vision-based assistance mechanism is beneficial for caregivers taking care of older people at home or in hospitals. The remainder of the article is organized as follows. After the brief literature discussion in Sect. 2, Sect. 3 describes the proposed methodology for recognizing older people based on their movements; this section also includes a brief discussion of the various techniques involved as part of the proposed methodology. The experiments and discussion are presented in Sect. 4. Finally, the last section concludes the paper.
Related works
Nowadays, researchers widely adopt sensor-based and vision-based approaches to recognize human actions in real-time scenarios. The sensor-based approach uses wearable sensors like accelerometers, gyroscopes, etc., to track an individual's activity. In contrast, vision-based approaches mostly use a convolutional neural network (CNN) to classify human activities in real-time video sequences. The CNN (Ansari and Singh 2021; Ojha et al. 2017) is an influential innovation in computer vision.
A CNN is a deep learning approach that automatically learns spatial hierarchies of features through backpropagation by using input, convolutional, pooling, fully connected, and output layers. Some sensor- and vision-based studies are discussed as follows. Pienaar and Malekian (2019) suggested a human activity recognition (HAR) system to track the basic activities of a person by analyzing raw sensor data; using an open-source dataset introduced by the Wireless Sensor Data Mining Lab that consists of six activity levels, the model achieved an accuracy of more than 94%. Chernbumroong et al. (2013) suggested a HAR system to detect activities of daily living by analyzing wrist-worn sensor data. The sensor data are first pre-processed and then passed to a feature extraction module to extract relevant features; the extracted features are used to build a classifier to categorize the involved activities. This method shows proficient outcomes with an accuracy of up to 94%. Putra and Yulita (2019) proposed a HAR model based on the bed-wake gesture, in which a multilayer perceptron network was used to predict activity based on sensor readings; this work achieved accuracies of up to 90.17% for MLP and 84.46% for Naïve Bayes. Ma et al. (2019) employed two-stream deep ConvNets to build an expert HAR system using an Inception-style temporal convolutional neural network and a recurrent neural network. Both networks are used to extract spatio-temporal information and exploit spatio-temporal dynamics to enhance the whole system's performance. This method provided excellent accuracy, up to 94.1% on the UCF101 dataset and 69.0% on the HMDB51 dataset. Ji et al. (2012) developed a novel 3D convolutional neural network for recognizing human activities in surveillance videos; the deep 3D-CNN model evaluates appearance and motion features from each video frame and attained superior performance with an average accuracy of 90.2%. Zhang and Tian (2012) studied various spatio-temporal feature descriptors for activity recognition and found that probabilistic graphical models are better at recognizing activity patterns over time than SVMs. Li et al. (2018) used deep neural networks to recognize various actions based on modeling human body posture; the method integrates RGB and optical flow streams with 2D posture features to perform human activity classification.
The sensor-based HAR systems discussed above require wearable sensors such as accelerometers, glucometers, proximity sensors, etc., to recognize human actions, and the range over which they can detect human actions is limited. On the other hand, vision-based HAR systems can identify a wide range of human actions using camera-based surveillance; they use complex convolutional neural architectures to learn the temporal relations of human actions. However, improvements are still required to enhance the performance of existing HAR systems. Therefore, this work proposes a cost-effective solution to differentiate human actions by analyzing human 2D body joints. The details of the proposed system are presented in Sect. 3.
Proposed methodology
As mentioned earlier in the introduction, this manuscript proposes an assistive-technology-based surveillance system to classify young and old persons in real-time scenarios. The overall workflow of the proposed system is presented in Fig. 1. The system takes a video stream as input through the camera, examines each frame, and categorizes younger and older people by analyzing their walking styles, so that caregivers can be more attentive in the case of older people. The OpenPose algorithm is used to generate the human skeleton (a set of joint locations) in each frame: it takes an RGB frame of size w × h as input and provides the joint locations forming a skeleton for each individual within the image. After the skeleton joints are obtained, the skeleton data are aggregated over the first N frames using a sliding window of size N. These N skeletons are first pre-processed and then passed to a feature extraction module that extracts relevant features from them. The extracted features are then used to build a classifier for categorizing young and old walks. To achieve a real-time recognition framework, the window slides frame by frame along the video's time dimension and outputs a label for each video frame; a sketch of this loop is given below.
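The loop below is a minimal Python sketch of this pipeline; `detect_skeleton`, `extract_features` and `classifier` are hypothetical stand-ins for the OpenPose call and the later pipeline stages, not APIs from the paper:

```python
from collections import deque
import numpy as np

N = 5  # sliding-window size (frames)

def classify_stream(frames, detect_skeleton, extract_features, classifier):
    """Sliding-window recognition loop (sketch). `detect_skeleton`
    stands in for an OpenPose call returning an (18, 2) array of joint
    coordinates for one person, or None if no valid skeleton is found;
    `extract_features` and `classifier` are the later pipeline stages."""
    window = deque(maxlen=N)
    for frame in frames:
        skeleton = detect_skeleton(frame)   # (18, 2) joint positions
        if skeleton is None:
            continue                        # invalid frame: slide on
        window.append(skeleton)
        if len(window) == N:                # enough history collected
            features = extract_features(np.stack(window))
            yield classifier.predict(features[np.newaxis])[0]
```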
Human detection and skeleton generation
Human detection and skeleton generation are the primary tasks for identifying old and young walks. The work presented here uses the OpenPose algorithm (Cao et al. 1812) for human detection and skeleton generation from an image; it can jointly detect the human body and the key points involved to generate a skeleton. OpenPose provides two heat maps: one for evaluating joint positions, i.e. the Confidence Map (S), and the other for associating the joints, i.e. the Part Affinity Field (PAF) Map (L), in a human skeleton. The OpenPose algorithm takes an image as input and spots a skeleton for each person found in that image. An extracted skeleton involves 18 joints, including the head, neck, arms, and legs, as shown in Fig. 2a. Each joint position is represented by a spatial coordinate pair (x, y); therefore, each skeleton is represented using 18 pairs of coordinates (36 values in total), as shown in Fig. 2b.
Pre-processing for features extraction
After extracting the raw skeleton, the pre-processing stage suppresses unwanted distortion in the skeleton data. Pre-processing enhances the characteristics of the skeleton data, which helps in more accurate classification at a later stage. It includes four steps, summarized as follows (a code sketch of the last three steps is given after this list):
• Considering all head joints: Along with the body and limb configurations, the head position can help a lot with the classification. Therefore, the five joints on the head are added manually to make the features more meaningful.
• Coordinate scaling: The x and y coordinates representing a joint position do not follow the same scale. Therefore, these points need to be normalized to the same unit to deal with different width and height ratios.
• Discarding frames that lack neck and thighs: If OpenPose does not recognize a human skeleton, or if the identified skeleton does not have a neck or thighbone inside the frame, the frame is considered invalid and dropped, and the sliding window slides to the successive frames.
• Filling the missing joints: OpenPose may fail to recognize a full human skeleton in an occluded environment, which results in blanks at joint positions. To keep a fixed-size feature vector for classification purposes, these joints must be filled with certain values. Here, the position of a missing joint is determined by its position relative to the neck in the preceding frame.
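The sketch below implements the last three steps under two assumptions: the COCO 18-joint layout (index 1 = neck, indices 8 and 11 = right/left hip) and NaN markers for missing joints:

```python
import numpy as np

NECK, R_HIP, L_HIP = 1, 8, 11   # COCO-18 indexing (assumed layout)

def preprocess(skeletons, width, height):
    """Sketch of pre-processing on a sequence of raw (18, 2) joint
    arrays; missing joints are marked as NaN."""
    out, prev = [], None
    for joints in skeletons:
        j = joints.astype(float).copy()
        j[:, 0] /= width                 # scale x and y to the same
        j[:, 1] /= height                # normalized unit
        # discard frames lacking the neck or both hip (thigh) joints
        if np.isnan(j[NECK]).any() or (np.isnan(j[R_HIP]).any()
                                       and np.isnan(j[L_HIP]).any()):
            continue
        # fill remaining gaps using the joint's position relative to
        # the neck in the preceding valid frame
        if prev is not None:
            missing = np.isnan(j).any(axis=1)
            j[missing] = j[NECK] + (prev[missing] - prev[NECK])
        prev = j
        out.append(j)
    return np.array(out)
```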
Features extraction
After pre-processing, the joint positions are complete and ready for use in the feature extraction process. A sliding window of size N = 5 is used to extract relevant features from the joint positions that help to identify the action types. A representation with skeletons from five consecutive frames is illustrated in Fig. 3. The salient features are constructed from the normalized joint positions by calculating the moving velocity of the joints and the angle of each joint over the N-frame window. A feature vector is then created by concatenating these features, and the extracted vectors are fed into a deep FFNN classifier for training. A sketch of constructing these salient features from the raw skeleton data follows.
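This simplified sketch concatenates positions, velocities, and neck-relative joint angles; it does not reproduce the paper's exact 314-dimensional feature layout:

```python
import numpy as np

def extract_features(window):
    """Build one feature vector from an (N, 18, 2) window of
    pre-processed joints: normalized positions, per-frame joint
    velocities, and neck-relative joint angles, concatenated."""
    n = len(window)
    pos = window.reshape(n, -1)                    # (N, 36) positions
    vel = np.diff(window, axis=0).reshape(n - 1, -1)   # joint velocities
    rel = window - window[:, 1:2, :]               # neck-relative coords
    ang = np.arctan2(rel[..., 1], rel[..., 0]).reshape(n, -1)
    return np.concatenate([pos.ravel(), vel.ravel(), ang.ravel()])
```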
Deep feed-forward neural network
A deep feed-forward neural network (FFNN) is a deep neural network comprised of two or more layers of neurons. A feed-forward network consists of an input layer and an output layer: the input layer is responsible for receiving the signal, while the predictions about the input are made in the output layer. Between them there can be any number of hidden layers, in which the actual computation is performed; even a single hidden layer makes an FFNN capable of approximating any continuous function. An FFNN learns to model the correlation between inputs and outputs by training on a collection of input-output pairs. The model parameters, i.e. the weights and biases, are adjusted throughout training to reduce the error. Backpropagation is utilized to adjust the weights and biases relative to the error, with root mean squared error used as the measure; the FFNN obtains the partial derivatives of the error function with respect to the many weights and biases using backpropagation and the chain rule of calculus. Figure 4 shows an FFNN architecture with hidden layers.
In this work, the deep FFNN model used to detect and recognize elderly people contains one input layer, three hidden layers, and an output layer. Dropout is applied three times to prevent the model from overfitting. The input layer has 314 nodes, corresponding to the features extracted from the raw skeleton data. The rectified linear unit (ReLU) is used as the activation function to deal with non-linear input data. There are 100 nodes in each hidden layer and 2 nodes in the output layer, one for each of the 2 classes. The sigmoid function is used as the output activation to calculate a probabilistic value for each class, with a learning rate of 0.0001. The class with the highest probability is taken as the output for the corresponding input image; a Keras sketch of this architecture is given below.
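A minimal Keras sketch of this architecture follows; the dropout rate, the Adam optimizer, and the cross-entropy loss are our assumptions, as the paper specifies only the layer sizes, activations, and learning rate:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(314,)),           # 314 skeleton-derived features
    layers.Dense(100, activation="relu"),
    layers.Dropout(0.2),                 # dropout rate: assumed value
    layers.Dense(100, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(100, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(2, activation="sigmoid"),  # old walk vs. young walk
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss="categorical_crossentropy",     # assumed; not stated in paper
    metrics=["accuracy"],
)
```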
Experimentation
The Google Colab platform has been used to perform a wide range of experiments. PIL (Python Imaging Library) and OpenCV have been employed to open, save, and manipulate images. The Keras library is used for classification purposes, incorporating SVM with linear/RBF kernels and the deep neural network. The Matplotlib library is used to visualize model accuracy and loss curves. Scikit-learn is employed to produce the confusion matrix, and TensorFlow is used as the data-flow backend.
Dataset
The dataset was synthesized by ourselves in indoor and outdoor environments using a 16 MP mobile camera. It contains two types of people walks: elderly-people walks and younger-people walks. Each class consists of a variable number of videos, ranging from 30 s to 2 min in length, captured at a resolution of 640 × 480. For machine understanding, the created dataset requires proper formatting and labeling for training the model. YAML (Yet Another Markup Language), a human-readable data serialization language, is used for this purpose. The images to be used for training and their labels are configured in a text file containing information such as the class name and the starting and ending index of the video corresponding to that class; a hypothetical example of such a label file is shown after this paragraph. The distribution of our dataset is presented in Table 1. Figure 5 shows some instances of video sequences representing old-people walking and young-people walking. The clips were shot in different scenarios such as indoor, outdoor, and low lighting.
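A hypothetical example of such a label file is shown below; the class names and frame indices are illustrative only:

```yaml
# Illustrative label file: class name plus the starting and ending
# frame index of the video segment belonging to that class
- class: old_walk
  start: 0
  end: 1356
- class: young_walk
  start: 1357
  end: 2929
```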
Training
The entire dataset is divided into 70% for training and 30% for testing. After pre-processing and feature extraction from the raw skeleton data, the next phase builds a model to classify the data. The classification has been done using different classifiers, including a neural network (FFNN classifier), a Support Vector Machine (SVM), and SVM with a kernel method. Setting up the hidden layers and balancing the learning rate (Putra and Yulita 2019) were worked out for efficient modeling: the experiment was repeated several times to find the best number of hidden layers and the best learning rate. For updating the parameters, the partial derivatives of the loss function are back-propagated through the network. Figure 6 illustrates the performance of the posture recognition system on the synthesized dataset where people perform the walking action. The trained model is good enough to detect the old walk or the young walk.
Testing
The model is tested on 2930 test images containing two classes, with 1357 and 1573 images representing old-people walks and young-people walks, respectively. The test images are passed to our proposed method for prediction. The model first processes each test image through OpenPose to detect the human skeleton in that image; the skeleton data are then passed to the pre-processing, feature extraction, and classification modules. The model weights, adjusted during the training phase, are then fixed, and the window slides frame by frame along the video's time dimension as predictions are made. The proposed recognition system's performance for elderly people has been tested on our dataset; the testing videos are similar to the training videos.
Result
This section shows the experimental outcomes of the proposed model. Table 2 shows the recognition accuracy of the proposed method over different classifiers. The outcomes show that the accuracy of the proposed method trained with the FFNN classifier is considerably higher than with both variants of the SVM classifier.
The confusion matrix of the deep neural network model is given in Fig. 7. Diagonal values of the matrix represent correctly classified testing outcomes; non-diagonal values represent misclassified outcomes, i.e. those for which the predicted and actual values do not match.
Different performance measures (Singh 2015, 2017; Reddy and Geetha 2020) such as precision, recall, F-measure, and support are used to evaluate the performance of the proposed model, as illustrated in Table 3.
The performance of the proposed method is compared with other existing methods in Table 4. The result in Chernbumroong et al. (2013), which analyzes sensor data for assisted living, provides an accuracy of 90.23%. In Reddy and Geetha (2020), activities are modeled using video-based classification that offers up to 92.16% accuracy. The proposal in Li et al. (2018) classifies activities by modeling body posture using a deep neural network and compares well with the others at an accuracy of 93.61%. Finally, our proposed method achieves around 96% accuracy for almost the same activities and behavioral modeling.
Conclusion
This manuscript proposed a system to spot human presence and recognize whether a particular person is an older adult by analyzing the human walking style. The system constructs features from the pre-processed skeleton data extracted over video sequences. The developed method aggregates the skeleton data of a 0.5 s window for feature extraction and uses raw features from five consecutive frames to improve the model's performance. We evaluated the developed model on our synthesized dataset. This paper considered a deep neural network model to recognize elderly people in video. Two other variants of the SVM classifier, SVM with a linear kernel and with an RBF kernel, also achieve good accuracy in elderly-people recognition. The results show that the deep neural network model outperforms linear SVM and RBF-SVM on our dataset, demonstrating good results in real-world environments. In the future, the system can be deployed in different applications such as smart homes, theft detection, augmented reality and many more. Additionally, more adaptive techniques such as advanced CNN architectures can be used to boost the performance of the proposed system.
"year": 2022,
"sha1": "d57b7f2fb2a6837dc7b4d61b4c2cdcf7111b0bb3",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13198-022-01822-y.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d57b7f2fb2a6837dc7b4d61b4c2cdcf7111b0bb3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Acoustic wave science realized by metamaterials
Artificially structured materials with unit cells at the sub-wavelength scale, known as metamaterials, have been widely used to precisely control and manipulate waves thanks to their unconventional properties which cannot be found in nature. In fact, the field of acoustic metamaterials has developed considerably over the past 15 years and keeps developing. Here, we present a topical review of metamaterials in acoustic wave science. Particular attention is given to the fundamental principles of acoustic metamaterials for realizing extraordinary acoustic properties such as negative, near-zero and approaching-infinity parameters. Realization of the acoustic cloaking phenomenon, which renders objects invisible to incident sound waves, is also introduced via various approaches. Finally, acoustic lenses are discussed, not only for sub-diffraction imaging but also for applications based on the gradient index (GRIN) lens.
Introduction
Metamaterials made of periodic or random artificial structures, whose "meta-atoms" have sizes larger than conventional atoms yet much smaller than the radiated wavelength, are used for deep control and manipulation of waves. Since the properties of metamaterials are governed by the meta-atom structures rather than their base materials, fascinating functionalities beyond the capability of conventional materials can be realized by carefully designing and engineering the parameters of the meta-atom structures such as shape, geometry, size or orientation. The concept of metamaterials was first proposed by Veselago [1] in 1968 for electromagnetic waves, but it had to wait around 30 years for the next step, when Pendry reported artificial designs with effectively negative permeability and permittivity in 1999 [2,3]. Metamaterials were then experimentally demonstrated by Smith and Shelby [4,5] for negative refractive index structures and have since been the subject of numerous studies across a wide variety of wave-matter interactions, including not only photonics but also acoustic wave science.
Acoustic wave science studies the propagation of matter oscillations through an elastic medium such as air or water and therefore explains energy transfer through the medium. While the movement of the oscillating material is limited to the vicinity of its equilibrium position, vibrational waves can propagate over long distances and can be reflected, refracted, attenuated or, more generally, manipulated by the medium. According to the oscillation frequency, acoustic waves are classified into different fields covering the audio, ultrasonic and infrasonic frequency ranges, or seismic waves at a much larger scale, which are waves of energy travelling through the Earth's layers.
The advent of fabrication technology [6-8], together with the development of simulation techniques such as the finite element method (FEM) and the finite-difference time-domain (FDTD) method, has led to a revolution of metamaterials in controlling and manipulating acoustic waves in ways not previously imagined [9,10]. For instance, it is now possible to design acoustic lenses for sub-diffraction imaging [9-12] or acoustic cloaks able to make an object acoustically invisible by bending the waves [13-16]. Also, an assembly of rubber-coated spheres into a bulk metamaterial can exhibit locally designed resonant structures [17].
Our objective here is to present a unified discussion of the advances of metamaterials in acoustic wave science.
The review is organized as follows. We focus on acoustic metamaterials in Sect. 2, with theoretical discussions of the acoustic parameters, namely the mass density and the bulk modulus. The section is followed by our review of metamaterial designs for controlling these two parameters to achieve unusual negative or near-zero values that cannot be found in nature. Next, acoustic cloaking is discussed in detail through different approaches. Lastly, superlenses and hyperlenses for sub-diffraction imaging are covered, and then the Luneburg and Eaton lenses, which are based on the concept of the gradient index (GRIN) lens, are introduced.
Acoustic metamaterials
Propagation of acoustic waves, including sound waves in the audio frequency range, is controlled by the mass density and the bulk modulus of a material through the acoustic wave equation

∇²P = (ρ/B) ∂²P/∂t²,

where P is the pressure and ρ, B are the mass density and bulk modulus of the material, respectively. Physically, the mass density is defined as mass per unit volume, and the bulk modulus reflects the medium's resistance to external uniform compression. These two parameters are analogous to the electromagnetic parameters, permittivity ε and permeability µ, as can be seen in the following expressions for the refractive index n and the impedance Z:

n = c₀/c = c₀ √(ρ/B), Z = ρc = √(ρB),

where c = √(B/ρ) is the sound speed in the medium and c₀ is the sound speed in the reference medium.
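As a quick numerical illustration (ours, not from the reviewed works), the sound speed, impedance and air-referenced index follow from tabulated ρ and B:

```python
import numpy as np

# Bulk modulus B [Pa] and mass density rho [kg/m^3] for air and water
media = {"air": (1.42e5, 1.21), "water": (2.2e9, 998.0)}

c0 = np.sqrt(media["air"][0] / media["air"][1])   # reference speed (air)
for name, (B, rho) in media.items():
    c = np.sqrt(B / rho)          # sound speed  c = sqrt(B/rho)
    Z = np.sqrt(rho * B)          # impedance    Z = rho*c = sqrt(rho*B)
    n = c0 / c                    # index relative to air
    print(f"{name}: c = {c:.0f} m/s, Z = {Z:.2e} Pa·s/m, n = {n:.3f}")
```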
The mass density and the bulk modulus are always positive in conventional media and hard to modify, because the material properties are directly associated with the chemical composition and bonding structures of the constituent atoms. However, a variety of effective acoustic parameters, including negative values which never exist in nature, can be obtained with metamaterials, whose properties are mainly governed by meta-atom structures that behave like a continuous material in the bulk. According to the sign of the mass density and the bulk modulus, acoustic metamaterials can be classified into negative mass density, negative bulk modulus, double negative parameters, and near-zero and approaching-infinity mass density, as shown in Fig. 1. These types of acoustic metamaterials, together with their corresponding applications, are discussed in the following subsections.
Negative mass density
When an atom deviates from its equilibrium state, it is pulled back to the balance position by a restoring force described by Newton's second law, F = mẍ. Although the mass of an atom must always be positive, a negative effective mass density can be achieved in a periodic structure composed of artificial meta-atoms near its resonant frequency. The physical nature of effective mass density was theoretically explained by Milton and Willis [18] through a mass-spring system. A simple mass-spring system consisting of a mass M₂ positioned inside the cavity of a mass M₁ and coupled to M₁ through a spring of strength K is shown in Fig. 2a. If we assume that the masses vibrate without friction under an external force F(ω) with angular frequency ω, the equations of motion given by Newton's second law are

M₁ẍ₁ = F + K(x₂ − x₁), M₂ẍ₂ = −K(x₂ − x₁),

where x₁, x₂ are the displacements of M₁, M₂, respectively, and ω₀ = √(K/M₂) is the local resonance frequency. By assuming that x₁, x₂ and F are time-harmonic and solving these differential equations for the external force F(ω), we have

F(ω) = [M₁ + M₂ω₀²/(ω₀² − ω²)] ẍ₁.

The above equation indicates that, from the viewpoint of the external force, the two-object system M₁-M₂ can be considered as a homogeneous one-object system with resonant frequency ω₀ and an effective mass

M_eff = M₁ + M₂ω₀²/(ω₀² − ω²).

One can deduce from this equation that the effective mass M_eff can be negative if the external force oscillates near the resonant frequency of the system, specifically in the range ω₀ < ω < √(K/M₁ + ω₀²), as can be seen in Fig. 2b (caption fragment: M₁, ω₀ and K are set to 0.002, 0 and 1, respectively; panel a is adapted from [19]). Finally, we obtain an effective mass density ρ_eff by dividing M_eff by the system volume. The term "effective" will often be omitted when describing effective mass density and effective bulk modulus in this review.
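A short sketch, using the expressions above with illustrative masses, confirms that M_eff is negative only inside the stated band:

```python
import numpy as np

def m_eff(omega, M1, M2, K):
    """Effective mass of the mass-in-mass unit:
    M_eff = M1 + M2*w0**2 / (w0**2 - w**2), with w0 = sqrt(K/M2)."""
    w0sq = K / M2
    return M1 + M2 * w0sq / (w0sq - omega**2)

M1, M2, K = 1.0, 0.5, 1.0            # illustrative values
w0 = np.sqrt(K / M2)
w_hi = np.sqrt(K / M1 + w0**2)       # upper edge of the negative band
for w in np.linspace(0.5 * w0, 1.5 * w_hi, 7):
    print(f"omega = {w:.3f}, M_eff = {m_eff(w, M1, M2, K):+.3f}")
# M_eff < 0 only for w0 < omega < sqrt(K/M1 + w0**2)
```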
The periodic 1D mass-spring system was experimentally visualized by Yao et al. [20] and recently summarized in [21,22]. Figure 3a represents the experimental setup, which consists of seven unit cells, an air track and the harmonic oscillation generator MTS Tytron 250. More particularly, each unit cell is composed of three blocks of length 30 mm, in which the first and last blocks are constrained to an aluminum sheet on the top, while the middle block can move freely. The three blocks are attached to each other by two soft springs G, and the unit cells are connected to each other by a spring K. The dynamic system is finally excited with a harmonic external force under a non-friction condition by the MTS Tytron 250 and the air track. An actual picture of the experimental setup is shown in Fig. 3b, and the corresponding measurement results for a single unit cell (Fig. 3c) indicate a strong resonance near 6 Hz. The harmonic movement of the whole system with seven unit cells is also measured, as shown in Fig. 3d. As a result, a negative mass density was found, with a band-gap near the resonant frequency from about 6 to 7.6 Hz, and the transmittance, defined as the amplitude ratio X_N/X_0, was around −30 dB. Generally, negative mass density in acoustic metamaterials can be realized by replacing the mass-spring system with any kind of system having constitutive components corresponding to a mass and a spring. For example, a membrane system having a unit cell made up of a rigid grid is reported in [23], where the rigid grid and the membrane play the roles of the mass and the spring, respectively. Such a membrane system with negative mass density has been applied to realize sound absorbers [24-29].

Fig. 3 Experiment of the 1D spring-mass system. a Setup scheme. b Actual picture of the experimental setup. c Ratio between displacement amplitudes of masses m and M₀ for a single unit cell. d Left-hand side shows the predicted dispersion relation (q is the Bloch wave-vector and a is the lattice constant); right-hand side shows the transmittance for the whole system, where the negative transmittance indicates a negative mass density. a-d are adapted from [20].
Negative bulk modulus
The bulk modulus indicates how the material resists an external pressure and is given by

B = −ΔP/(ΔV/V),

where ΔP, ΔV/V and B denote the pressure change, the volume strain and the bulk modulus, respectively. Like negative mass density, negative bulk modulus can also be realized by introducing an effective bulk modulus in acoustic metamaterials. A simple example of a negative bulk modulus system is a Helmholtz resonator, which is basically made up of a large cavity and a narrow neck, as shown in Fig. 4a. The effective bulk modulus is expressed by Fang et al. [30] as

B_eff⁻¹(ω) = B₀⁻¹ [1 − Fω₀²/(ω² − ω₀² + iΓω)],

where F is the geometrical factor, ω₀ is the resonant angular frequency and Γ is the dissipation loss in the resonating Helmholtz elements. Once again, we can see from the above equation that the effective bulk modulus can reach a negative value when the external force oscillates near the resonant frequency. This phenomenon relates to the fact that near the resonant frequency the cavity is expanded by an outward restoring force, which indicates a negative bulk modulus, whereas shrinking of the cavity under an external compressive force indicates a positive bulk modulus in the conventional case. The incoming sound through the neck and the cavity inside are analogous to a mass and a spring, respectively. The negative bulk modulus system was experimentally demonstrated by Fang et al. and Lee et al. [30,31]. Fang's group conducted an underwater ultrasonic transmission experiment. The experimental setup, shown in Fig. 4b, consists of a transducer as the underwater sound source and two hydrophones for detecting the signals. Extremely low transmission was observed, as visualized in Fig. 4c, indicating that the propagating wave was transformed into an evanescent form due to the negative bulk modulus of the metamaterial. Moreover, the formation of a negative phase velocity was also confirmed in this experiment, owing to the frictional loss in the system. Other related works on different types of Helmholtz resonators can be found in [32-35].
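A minimal sketch of this response, with illustrative values for F, ω₀ and Γ, shows the real part of the effective modulus turning negative just above resonance:

```python
def B_eff(w, B0, F, w0, gamma):
    """Effective bulk modulus of a Helmholtz-resonator array:
    1/B_eff = (1/B0) * (1 - F*w0**2 / (w**2 - w0**2 + 1j*gamma*w))."""
    inv = (1.0 - F * w0**2 / (w**2 - w0**2 + 1j * gamma * w)) / B0
    return 1.0 / inv

B0, F, w0, gamma = 1.0, 0.3, 1.0, 0.05   # illustrative parameters
for w in (0.8, 0.95, 1.0, 1.05, 1.2):
    b = B_eff(w, B0, F, w0, gamma)
    print(f"omega/w0 = {w:.2f}: Re(B_eff) = {b.real:+.3f}")
# The real part turns negative just above resonance, where the cavity
# expands against the applied compression.
```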
An example application of negative bulk modulus is reported by Kim [36]: an air-transparent soundproof window. Figure 5a represents the device scheme and the corresponding measurement results are shown in Fig. 5b. One can see that the amplitudes of the sound waves are exponentially reduced, demonstrating a successful realization of negative bulk modulus. Moreover, wind pressure is reduced because the air flow is guided smoothly through the air holes. Such a device is useful in environments subject to strong winds, such as hurricanes and typhoons.
Double negative parameters
We have explained in the previous sections that either the effective mass density or the effective bulk modulus can be made negative near a resonant frequency of a periodic artificial structure, yielding a fully opaque acoustic material. However, an inverse effect, in which sound wave energy propagates instead of being attenuated, occurs when both parameters are negative simultaneously. In a mechanical system, a dipole resonance is related to the effective mass density because the resonator vibrates along a certain direction, producing an inertial response and oscillating like a spring-mass system [17,23,37,38]. A monopole resonance, however, vibrates in all directions, associated with a compressive or expansive motion; it functions like the volume change of a Helmholtz resonator and is thus related to the effective bulk modulus [37,39,40]. Therefore, to realize the double-negative scheme in Fig. 1c, two resonance symmetries, dipole and monopole, must be exploited. The two resonance types can be obtained using membrane and Helmholtz structures. In this manner, Lee et al. [41] demonstrated a double-negative system with negative phase velocity by combining Helmholtz-type and membrane-type pipes with periodic side holes and membranes, as represented in Fig. 6c. Fok and Zhang [42] also tried to demonstrate double negative parameters using rod-spring and Helmholtz structures, but they pointed out that a negative refractive index can still be achieved by designing an acoustic metamaterial with negative bulk modulus and positive mass density when material loss is large.
The above methods are limited to an extremely narrow frequency range, and more recent research has sought to overcome this limitation, leading to a novel class of acoustic metamaterials, so-called "space-coiling metamaterials", having a negative refractive index over a broad frequency range [43][44][45][46][47][48]. This kind of metamaterial is realized by coiling up space with curled channels, requires no local resonances, and can be constructed easily not only in two dimensions but also in three dimensions. We will return to this type of metamaterial in Sect. 2.5. Another method for obtaining metamaterials with a negative refractive index is to stack several holey plates forming a hyperbolic dispersion with a highly anisotropic structure [10,11]. The hyperbolic acoustic metamaterials will be discussed in more detail in Sect. 2.7.2.
Near-zero and approaching-infinity mass density
Another type of acoustic metamaterial is explained in Fig. 1e, with near-zero effective mass density. Ideally, this class of metamaterials enables a zero refractive index and an infinite phase velocity, leading to wave propagation without any reflection or phase change [49][50][51]. Such metamaterials have recently been realized by squeezing the sound through ultra-narrow channels [52], embedding a single, almost ideally rigid cylindrical defect with sound-hard boundary conditions [53], and coiling up space with curled channels [43]. A highlight among works on near-zero mass density metamaterials is reported in [54]. The metamaterial structure, using a thin perforated circular membrane, is shown schematically in Fig. 7a. The setup is composed of a rigidly mounted circular wall in a circular tube of 2.3 m length and 100 mm inner diameter, with a membrane covering the single 17-mm-diameter hole at the center. Measurement results of instantaneous 2D pressure distributions at 1.2 kHz for normal incidence, presented in Fig. 7b, demonstrate nearly perfect transmission: both the amplitude and phase of the field are almost identical between the case without the wall and the case with the membrane-covered wall in place.
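The "no phase change" property can be illustrated directly: the phase accumulated across a slab of thickness d is $\varphi = n k_0 d$, which vanishes as the effective index n approaches zero. In the sketch below the slab thickness and the index sequence are assumed for illustration; only the 1.2 kHz frequency echoes the experiment in [54].

```python
import numpy as np

# Minimal sketch: phase delay across a slab, phi = n * k0 * d,
# for a sequence of decreasing effective indices (assumed values).
c0 = 343.0                # speed of sound in air, m/s
f = 1200.0                # 1.2 kHz, as in the experiment of [54]
k0 = 2 * np.pi * f / c0   # free-space wavenumber
d = 0.1                   # slab thickness in m (assumed)

for n in [1.0, 0.5, 0.1, 0.01]:
    phi = n * k0 * d
    print(f"n = {n:5.2f} -> phase delay = {np.degrees(phi):7.2f} deg")
```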
Another interesting characteristic can be achieved when the effective mass density approaches infinity. In this case, the impedance of the slab becomes very large, leading to a large impedance mismatch between the slab and the background, and therefore resulting in nearly total reflection at the interface. This characteristic has been demonstrated with membrane-type acoustic metamaterials and could be exploited in noise control [25,55].
Space-coiling metamaterials
Space-coiling metamaterials, known as a subset of double-negative acoustic metamaterials (see Sect. 2.3), have recently drawn great interest for the exploration of extraordinary constitutive acoustic parameters [43][44][45][46][47][48]. The concept was first proposed by Liang and Li [43], and the corresponding design of a single curled unit is represented in Fig. 8a. Instead of using local resonance structures such as membranes or Helmholtz resonators, which are suitable only for narrow-band devices, the authors achieved a negative refractive index over a broad frequency range simply by coiling the space inside the metasurface and prism, as can be seen in Fig. 8b. The structure consists of thin plates arranged in periodic channels. In Fig. 8a, the zigzag arrows on the left-hand side denote a path of waves in the second quadrant inside the curled channels, and the X-shaped blue region on the right-hand side shows a simplified view of the path of the waves through the curled channels. Through the dispersion relation derived from Floquet-Bloch theory, unusual properties such as negative, higher-than-unity and zero refractive index could indeed be realized. Negative and higher indices are obtained below the band-gap, whereas a zero refractive index is obtained at essentially a single frequency, which is exactly the band-gap frequency. In fact, each curled unit cell deliberately forces the sound to propagate along the curled channels, elongating its path. The phase delay accumulated along the elongated path results in a high refractive index. If the phase change takes a negative value, a negative refractive index is obtained. Also, a zero refractive index can be realized by squeezing waves inside the metasurface at a specific frequency, which shows high transmission (Fig. 8c). This kind of symmetric geometry can be designed easily not only in two dimensions but also in three dimensions through 3D printing.
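The path-elongation argument can be put in numbers with a toy geometry; the cell size and fold count below are assumptions for illustration, not the dimensions used in [43]. The effective index is roughly the ratio of the in-channel path length to the straight-line size of the unit cell.

```python
# Minimal sketch of the path-elongation picture behind space coiling.
# Geometry values are assumed for illustration only.
cell = 0.02             # unit-cell size in m (assumed)
n_folds = 6             # number of zigzag folds per cell (assumed)
path = n_folds * cell   # total path length travelled inside the cell

n_eff = path / cell     # phase-delay ratio relative to a straight path
print(f"effective index from path elongation: n_eff ~ {n_eff:.1f}")
```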
Acoustic cloaking
The advent of transformation optics has offered great versatility in designing acoustic metamaterials for deep manipulation of acoustic waves [56][57][58][59][60][61][62][63][64]. Not only macroscopic parameters such as transmitted, absorbed or reflected energy but also sub-wavelength spatial features of the waves can be controlled. The idea is based on the coordinate invariance of Maxwell's equations, whereby the space of light can be squeezed and stretched by producing a desired spatial distribution of the permittivity and permeability through conformal transformations [58]. Metamaterial structures are designed thanks to the powerful ability of transformation optics to establish relationships between seemingly unrelated structures, particularly between complicated and simpler ones. For example, periodic plasmonic gratings can be generated from a simple slab through two conformal transformations [65], and high Q-factor whispering gallery modes are designed via transformation optics by linking them to the fundamental whispering-gallery structure [66]. Figure 9 presents an example of the space distortion in the (x, y) plane of the Cartesian coordinates generated by conformal mapping. (Fig. 8: b pressure field of the space-coiling metamaterial (left) and of the effective medium (right) under the same conditions without coiling, showing that the two match well and a negative refractive index is obtained; c pressure field for a hard solid plate (above) and for coiling metamaterials surrounding a hard plate (below), showing high transmission with no reflection; a-c adapted from [43]. Fig. 9: a a field line in free space with the background Cartesian coordinate grid shown; b the distorted field line with the background coordinates distorted in the same fashion; a, b adapted from [58].) By merging this powerful tool into acoustic wave science [67][68][69], acoustic applications for cloaking and super-resolution that require metamaterials with complicated, hard-to-implement properties are now possible. The term "acoustic cloaking" refers to a phenomenon in which a shell makes the enclosed object invisible to incoming sound waves from any direction. In fact, the idea of acoustic cloaking was inspired by electromagnetics and optics, where experimental cloaking has been realized at radio [59,70] and optical [71] frequencies.
The harmonic acoustic wave equation without a wave source can be written as $\nabla \cdot (\rho^{-1} \nabla p) + (\omega^2/B)\,p = 0$ [72]; that work described how to apply the cloaking phenomenon from electromagnetic waves to other types of waves, especially acoustic waves. Numerical studies of acoustic cloaking in two dimensions [13,14] and three dimensions [15] have also been conducted. The first experiment on acoustic cloaking was realized by Zhang et al. [16] with a design of a 2D array of sub-wavelength cavities filled with water and connected by channels with spatially tailored geometry (Fig. 10a). The design of the cavities draws on the concept of lumped acoustic elements, which are analogous to electronic circuit elements (Fig. 10b). As a result, 2D acoustic cloaking with a proper array of unit cells composed of cavities and connecting channels was achieved, with almost no scattering in front of or behind the steel cylinder, as shown in Fig. 10c.
Another approach to acoustic cloaking was inspired by the carpet cloaking suggested by Li and Pendry for electromagnetic fields [73]. With this concept, the first experimental 2D acoustic carpet cloak was demonstrated by Popa et al. [74]. Subsequently, a 3D carpet cloak, an extension of the 2D one, was demonstrated by Zigoneanu et al. [75]. The 2D and 3D carpet cloaks are made of arrays of perforated plastic plates with sub-wavelength holes that allow airborne sound to penetrate. Metamaterials with highly anisotropic mass density are required for this approach so that high-loss scattering at the perforated plastic plates can be overcome. For example, a 3D omnidirectional acoustic carpet cloak was designed with a pyramid-shaped structure (Fig. 10d). The scheme of the experimental setup is illustrated in Fig. 10e and the resulting instantaneous scattered pressure fields are shown in Fig. 10f. (Fig. 10: a design of the acoustic cloak of Zhang et al. [16], consisting of concentric layers with proper cavities and channels; b a unit cell of the cloaking structure, a large cavity with four narrow channels acting like a shunt capacitor and serial inductors; c measured pressure field at 64 kHz, with the cloaking structure at the center of the water tank and the steel cylinder inside, showing almost no scattering behind the structure, so the cylinder is well cloaked; d scheme of the designed structure (above) and a photograph of the actual pyramid-shaped structure with perforated plastic plates (below); e scheme of the experimental setup, with a scanning microphone and measurement points A, B and C; f instantaneous scattered pressure fields for each case, where the "Cloak" case matches the "Ground" case well compared with "Object", so the inner space of the structure is perceived as empty; a-c adapted from [16], d-f adapted from [75].) Besides cloaking devices based on transformation acoustics, acoustic cloaking can also be realized by using the scattering cancellation method to eliminate the scattered acoustic field between the background and the system [76][77][78][79][80][81][82].
Acoustic lenses
Concepts of optical or electromagnetic lenses can also be applied to acoustics. In this sub-section, we will review multiple designs of acoustic metamaterials for the realization of acoustic lenses, including the superlens and hyperlens for sub-diffraction imaging, the Luneburg lens for focusing acoustic waves without aberration, and the Eaton lens for controlling and manipulating acoustic waves with arbitrary refraction angles in spherical geometry.
Superlens and hyperlens
Superlenses, hyperlenses or, more generally, super-resolution lenses are devices able to image beyond the diffraction limit in both the near- and far-field. In general, the superlens is for the near-field and the hyperlens for the far-field. The superlens concept, based on a negative refractive index, was first proposed and demonstrated in the Veselago-Pendry line of work [1,83] and has been the subject of intensive research due to a wide variety of applications in biology, pathology, medical science and nanotechnology. In his work, Pendry showed that a negative-index superlens medium cannot converge diverging evanescent waves to a focal point in the far-field but can enhance their amplitude in the near-field (Fig. 11). Figure 12a represents the experimental setup of the first demonstration of an acoustic superlens with a negative refractive index [84]. The idea was based on Helmholtz resonators and originated from the 2D transmission-line method in electromagnetic metamaterials [85][86][87], relating the effective mass density and bulk modulus of the acoustic lumped-circuit network to the inductor and capacitor of an L-C circuit. The acoustic inductor (neck) and capacitor (cavity) are simply modeled as an open-ended and a rigid-ended pipe, respectively (Fig. 12b). By alternately positioning acoustic inductors and capacitors, a negative refractive index can be achieved. Finally, the perfect-lens phenomenon, understood here as the focusing of ultrasound, was realized by arranging differently designed Helmholtz resonators to form a PI-NI (positive index-negative index) interface.
Inherited from studies in optics [88][89][90], the hyperlens, an artificial metamaterial with hyperbolic dispersion, has also been applied to acoustics as an alternative way to overcome the diffraction limit of a given imaging system in the far-field regime. The principle of the acoustic hyperlens can be explained through the acoustic dispersion relation $k_r^2/\rho_r + k_\theta^2/\rho_\theta = \omega^2/B$ (Eq. 11), where $k_r$ and $k_\theta$ are the wavevector components in the radial and azimuthal directions, respectively. In a conventional medium, since both the radial and tangential mass densities are positive, the dispersion profile representing $k_r$ as a function of $k_\theta$ is circular according to Eq. (11), leading to the existence of a cutoff wavevector that limits the tangential spatial frequency, resulting in the diffraction limit. In the case of the hyperlens, since $\rho_r$ is negative, the dispersion described in Eq. (11) takes a hyperbolic form in which the radial wavevector $k_r$ can still be real for very large values of the tangential wavevector $k_\theta$. In other words, the high-frequency information of objects which cannot be resolved in a conventional system is transformed into propagating waves and brought to the far-field. Consequently, magnified fine-feature information can be acquired by using the hyperlens. Li et al. [9] first demonstrated an acoustic hyperlens that operates over a broad frequency band with low loss. The hyperlens consists of alternating brass and air strips along the θ direction (Fig. 13a). Because of the huge difference in mass density between brass and air, a highly anisotropic dispersion relation is obtained, leading to enhanced imaging as shown in Fig. 13b. A negative refractive index and enhanced imaging were also achieved by arranging proper layers of perforated plates with hyperbolic dispersion [10,11]. More recently, Shen et al. [12] realized a hyperlens utilizing multiple arrays of clamped thin plates, similar to membranes with negative mass density, yielding a hyperbolic dispersion.
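The contrast between the circular and hyperbolic regimes of Eq. (11) can be checked numerically; the normalized values below are assumed for illustration.

```python
import numpy as np

# Minimal sketch of isofrequency contours from
# k_r^2/rho_r + k_theta^2/rho_theta = omega^2/B (normalized units).
w2_over_B = 1.0
rho_theta = 1.0
k_theta = np.linspace(0.0, 5.0, 6)

for rho_r, label in [(1.0, "conventional"), (-1.0, "hyperbolic ")]:
    kr2 = rho_r * (w2_over_B - k_theta**2 / rho_theta)
    kr = np.where(kr2 >= 0, np.sqrt(np.abs(kr2)), np.nan)  # NaN = evanescent
    print(label, np.round(kr, 2))
```

In the conventional case $k_r$ becomes evanescent beyond the cutoff, while in the hyperbolic case it stays real for arbitrarily large $k_\theta$, which is exactly the mechanism that carries sub-wavelength detail to the far-field.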
Luneburg and Eaton lens
The Luneburg lens is based on the concept of a gradient index (GRIN) lens, in which the refractive index decreases radially from the center to the outer surface [91][92][93][94][95][96]. For certain index profiles, the lens forms perfect geometrical images of two given concentric spheres onto each other, making it possible to guide and manipulate incoming waves without aberration (see Fig. 14; panel a adapted from [93] and panel b from [97]). In an ideal Luneburg lens (Fig. 14a), light trajectories (red rays) arriving from various positions focus perfectly at a single point without aberration.
Luneburg proposed this concept for the first time in the 1940s [91], and it was further studied by Gutman [98] and Morgan [99] in the 1950s and Boyles [100] in the 1960s. Zentgraf et al. [101] realized the Luneburg lens in plasmonics. In acoustics, sound focusing based on GRIN lenses has been reported [102,103], and diverse GRIN lenses for flexural waves have also been demonstrated numerically [104]. The first two-dimensional acoustic Luneburg lens was reported by Kim [97,105]. Such a Luneburg lens satisfies the refractive index profile $n(r) = \sqrt{2 - (r/R)^2}$, where R is the radius of the lens and 0 ≤ r ≤ R. The wave equation of the acoustic Luneburg lens is governed by the mass density and bulk modulus, but the bulk modulus inside and outside the lens is assumed constant. The spatially varying mass density inside the lens is therefore the main handle for grading the refractive index. Recently, a three-dimensional Luneburg lens was demonstrated at optical frequencies [106]. Such acoustic lenses could be considered candidates for energy harvesting or sonar systems in practical use.
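Since the index is graded through the mass density alone at constant bulk modulus, one may take $n = \sqrt{\rho/\rho_0}$, so $\rho(r) = \rho_0\,n(r)^2$; this normalization is an assumption of the sketch below, which simply tabulates the Luneburg profile.

```python
import numpy as np

# Minimal sketch of the Luneburg profile n(r) = sqrt(2 - (r/R)^2) and the
# density realizing it at constant bulk modulus (assumed n^2 = rho/rho0).
R = 1.0
rho0 = 1.2  # ambient air density, kg/m^3

r = np.linspace(0.0, R, 5)
n = np.sqrt(2.0 - (r / R) ** 2)
rho = rho0 * n ** 2  # required mass density at constant bulk modulus

for ri, ni, rhoi in zip(r, n, rho):
    print(f"r/R = {ri/R:.2f}: n = {ni:.3f}, rho = {rhoi:.3f} kg/m^3")
```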
The Eaton lens, an extension of the GRIN lens to arbitrary refraction angles in spherical geometry, can also be realized in acoustics by controlling the mass density inside the lens at constant bulk modulus. A 180° acoustic Eaton lens has recently been reported, but a complete demonstration still appears to be outstanding [107]. Further metamaterial engineering efforts are necessary to realize Eaton lenses able to work at various refraction angles.
Conclusion
Together with the advent of electromagnetic and optical metamaterials, the field of acoustic metamaterials has expanded remarkably over the past 15 years. Although theoretical studies, including analytical models and numerical tools, have been well explored, many significant challenges remain in the practical implementation of acoustic metamaterials. With the aim of providing a unified overview of progress in the field, we have described research highlights in this review, with particular attention given to sound waves. The acoustic parameters mass density and bulk modulus, which are analogous to the permittivity and permeability of electromagnetic waves, were identified as the key parameters of acoustic wave science. We now know that various values of effective mass density and bulk modulus, including negative values, can be achieved by engineering mass-spring systems (or membranes) and Helmholtz resonators, respectively. Implementations of these structures as metamaterials with a single negative parameter, double negative parameters, and near-zero or approaching-infinity mass density were then reviewed. In addition, space-coiling metamaterials were presented as a way to realize negative, higher and zero refractive index without utilizing local resonance systems. We also reviewed applications of acoustic cloaking with different approaches, such as transformation acoustics, highly anisotropic parameters and the scattering cancellation method. Superlenses and hyperlenses for breaking the diffraction limit were then explained. Lastly, Luneburg and Eaton lenses based on gradient index profiles for the manipulation of sound waves were introduced, in terms of focusing and arbitrary refraction angles, respectively. Acoustic metamaterials, themselves inspired by electromagnetic and optical metamaterials, have recently started influencing not only elasticity but also seismology and even thermodynamics. Although our review did not include these other fields of metamaterials, we hope that all research areas of metamaterials will lead to advanced science and technology. | 2017-09-19T04:39:57.310Z | 2017-02-07T00:00:00.000 | {
"year": 2017,
"sha1": "bc810f09372c83fcce8f9ac9f0180b353ff1c628",
"oa_license": "CCBY",
"oa_url": "https://nanoconvergencejournal.springeropen.com/track/pdf/10.1186/s40580-017-0097-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc810f09372c83fcce8f9ac9f0180b353ff1c628",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
246296482 | pes2o/s2orc | v3-fos-license | Is social media, as a main source of information on COVID-19, associated with perceived effectiveness of face mask use? Findings from six sub-Saharan African countries
Background: The use of face masks as a public health approach to limit the spread of coronavirus disease 2019 (COVID-19) has been the subject of debate. One major concern has been the spread of misinformation via social media channels about the implications of the use of face masks. We assessed the association between social media as the main COVID-19 information source and perceived effectiveness of face mask use. Methods: In this survey in six sub-Saharan African countries (Botswana, Kenya, Malawi, Nigeria, Zambia and Zimbabwe), respondents were asked how much they agreed that face masks are effective in limiting COVID-19. Responses were dichotomised as ‘agree’ and ‘does not agree’. Respondents also indicated their main information source including social media, television, newspapers, etc. We assessed perceived effectiveness of face masks, and used multivariable logistic models to estimate the association between social media use and perceived effectiveness of face mask use. Propensity score (PS) matched analysis was used to assess the robustness of the main study findings. Results Among 1988 respondents, 1169 (58.8%) used social media as their main source of information, while 1689 (85.0%) agreed that face masks were effective against COVID-19. In crude analysis, respondents who used social media were more likely to agree that face masks were effective compared with those who did not [odds ratio (OR) 1.29, 95% confidence interval (CI): 1.01–1.65]. This association remained significant when adjusted for age, sex, country, level of education, confidence in government response, attitude towards COVID-19 and alternative main sources of information on COVID-19 (OR 1.33, 95%CI: 1.01–1.77). Findings were also similar in the PS-matched analysis. Conclusion: Social media remains a viable risk communication channel during the COVID-19 pandemic in sub-Saharan Africa. Despite concerns about misinformation, social media may be associated with favourable perception of the effectiveness of face masks.
Introduction
The novel coronavirus disease 2019 (COVID-19) pandemic continues to pose significant challenges for health systems around the world (1). Despite the development of new vaccines, the emergence of new viral strains of concern, delays and logistic challenges inherent in large scale immunisation campaigns across countries of the world reinforce the need to strengthen existing nonpharmaceutical interventions (NPI) to limit disease spread (2,3). One such NPI that has gained public interest is the use of face masks by individuals in the community as a way to prevent disease spread, especially from infected persons who are asymptomatic (3)(4)(5)(6). The World Health Organisation (WHO) and other health authorities in various jurisdictions have made evolving and sometimes confusing recommendations about this issue (6,7).
There is growing concern about the role of social media in spreading misinformation about the effectiveness of face masks and other NPIs in preventing the spread of COVID-19 (8,9). While concerns about health misinformation via social media are not new, the COVID-19 pandemic has amplified these concerns (9)(10)(11)(12). Suboptimal regulation of information sources and the propensity for social media algorithms to prioritise the most popular posts make it inherently difficult for the public to verify health information via modern media channels like Twitter, Facebook and Instagram, and messaging platforms like WhatsApp (9,13,14). Yet these remain major channels for risk communication and health promotion, especially in health emergencies like COVID-19 (10,11,15).
In resource-limited settings like sub-Saharan Africa, the importance of social media in health prevention and promotion, especially during COVID-19, cannot be overstated (16). However, social media has been seen as a medium for misinformation, especially about reduced vulnerability to COVID-19 and the availability of untested therapies (17,18). Concerted efforts at misinformation have been shown to be often politically motivated, especially in a health emergency like COVID-19, resulting in the development of an 'infodemic' -a situation defined by the uncontrolled spread of low-credibility, false, misleading and unverified information (11,12,17). Misinformation via social media is also suggested to be fuelling untoward perceptions of the effectiveness of NPIs, particularly the use of face masks (19)(20)(21). Despite these concerns, evidence is limited on the relationship between the use of social media as the main COVID-19 information source and perceived effectiveness of face masks as a public health strategy.
The limited and emerging evidence suggests that social media may play a role in informing people's perception of the effectiveness of face mask use (22). Yet, no study has specifically assessed this relationship in the sub-Saharan African region. This region may have escaped the first and second waves of COVID-19 with relatively less morbidity and mortality than the rest of the world, but emerging data from the third wave are raising concerns as morbidity and mortality rates are on the increase (23)(24)(25). More evidence is required to inform ongoing public health engagement strategies that will continue to protect the health of Africans in subsequent waves. In this context, this study seeks to assess the association between use of social media as the main COVID-19 information source and perceived effectiveness of face mask use in six sub-Saharan countries.
Methods
Study design, setting and population
This cross-sectional study was conducted in six sub-Saharan African countries: Botswana, Kenya, Malawi, Nigeria, Zambia and Zimbabwe. These countries, although largely diverse, share similarities. In terms of the variations, population sizes range from 2.2 million in Botswana to about 200 million in Nigeria (26). However, there is a shared growth in the adoption of mobile and internet technologies that facilitate access to social media platforms. For example, between January 2019 and January 2020, the number of internet users increased by 2.2 million (2.6%), 3.2 million (16%) and 595,000 (16%) in Nigeria, Kenya and Zambia, respectively (27). Large variations in education have been noted for the selected countries. For instance, less than 1% of Zimbabwean children of primary school age are out of school. The same applies to Malawi, where only 2% of children are out of school (26). However, 15%, 19% and 34% of children were reported out of school in Zambia, Kenya and Nigeria, respectively (26).
Sample size and sampling
We selected a sample of respondents from six countries in West (1), East/Central (1) and Southern (4) Africa. These countries were selected to give geographic representation across the different sub-Saharan African blocs, which typically differ in national culture and context. For each country, since the population was greater than 20,000, we determined, at a 95% confidence level, that a sample of 384 respondents would have sufficient power to provide generalisable results in each country, for a total sample size of 2304 (28).
Data collection
The survey was administered online, between 17 May 2020 and 15 June 2020, using structured questionnaires on Google Forms (Alphabet Inc., Mountain View, CA, USA), with appropriate skip logic and patterns as indicated. Respondents were recruited via email listservs, Facebook, Twitter, Telegram and WhatsApp. Enrolment in the study occurred on a first-come, first-served basis. As part of the survey, we assessed respondents' perceived effectiveness of face mask use in limiting COVID-19, and their main source of information, including social media, television, newspapers, employers, family, friends, and online/web channels. Further, data on respondents' sociodemographic characteristics, COVID-19 risk perception and attitude to COVID-19 were collected.
Analytic sample and study variables
Our study sample included all respondents who had valid responses to our outcome question, which assessed how much they agreed that the use of face masks was effective in limiting COVID-19 in their countries, on a 5-point Likert scale ranging from 'strongly disagree' to 'strongly agree'. Responses were dichotomised as 'agree' and 'does not agree'. Responses such as 'don't know' or 'does not apply to my country' were excluded from the analysis. For our exposure variable, respondents were asked to indicate their main source of information on COVID-19. Participants were allowed to provide up to three main sources of information on COVID-19. Potential confounders and predictors of the outcome were included based on an a priori framework informed by the literature (9,29,30) (Figure 1). The following variables were included in our analysis: alternate sources of COVID-19 information (including television, radio, newspapers, family/relatives, employers, and other online/web channels), COVID-19 risk perception, confidence in government COVID-19 response and attitude to COVID-19. Sociodemographic variables like age, sex, level of education and occupation were also included. Where potentially important sociodemographic variables like socioeconomic status were unmeasured, we ensured that we included proxy variables that could potentially account for these variables (Figure 1).
Data analysis
Simple descriptive analysis was used to summarise the characteristics of study respondents using frequencies and proportions. Unadjusted odds of our outcome given the exposure and covariates were generated using logistic regression models. Thereafter, multivariate logistic regression models were used to estimate the adjusted effect of social media as a main COVID-19 information source on the perceived effectiveness of face masks, using odds ratios (ORs) and 95% confidence intervals (CIs). After retaining confounders and predictors identified in the literature (9,29,30), an automated backward elimination method based on the Akaike information criterion (AIC) was used to select the final model (31). We also assessed possible effect modifiers and covariate interactions, including age, sex and country of residence. No significant interactions were identified, therefore the simpler model was retained as the final model. In terms of model diagnostics, we assessed the model using the area under the receiver operating characteristic curve (AUC) (32) and the Hosmer-Lemeshow goodness-of-fit test (33). Collinearity was assessed using a variance inflation factor cut-off of < 10.
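As a concrete sketch of this modelling step, the snippet below fits a logistic model with statsmodels and converts coefficients to ORs with 95% CIs. All variable names and the data are synthetic illustrations, not the study's dataset or its exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch of a multivariable logistic model on synthetic data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "social_media": rng.integers(0, 2, n),   # exposure (illustrative)
    "age": rng.normal(40, 12, n),            # covariate (illustrative)
    "female": rng.integers(0, 2, n),         # covariate (illustrative)
})
logit_p = -0.5 + 0.3 * df["social_media"] + 0.01 * df["age"]
df["agrees_masks_effective"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["social_media", "age", "female"]])
fit = sm.Logit(df["agrees_masks_effective"], X).fit(disp=0)

# Exponentiate coefficients and CI bounds to get ORs with 95% CIs.
or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.round(2))
```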
To assess the robustness of our findings and our multivariate model specification, we conducted a propensity score (PS) matched analysis to balance covariates between the exposure and control groups (34). Covariate balance was assessed using a standardised mean difference (SMD < 0.2) with 1:2 nearest neighbor matching without replacement. All covariates from the main analysis were included in the PS logistic model. All analyses were tested at the 5% significance level and were conducted using R-4.0.2 (35).
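The matching step can be sketched as follows. This simplified version estimates the propensity score with scikit-learn and matches each exposed respondent to its two nearest controls; unlike the study, it matches with replacement, and the data are again synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Minimal sketch of 1:2 nearest-neighbour propensity-score matching.
rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                   # covariates (illustrative)
treated = rng.integers(0, 2, n).astype(bool)  # exposure indicator

# Propensity score: P(exposure | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# For each treated unit, find its 2 nearest controls by PS.
controls = np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=2).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = controls[idx].ravel()

# Standardised mean difference of the first covariate after matching.
t_mean, c_mean = X[treated, 0].mean(), X[matched, 0].mean()
pooled_sd = np.sqrt((X[treated, 0].var() + X[matched, 0].var()) / 2)
print(f"SMD (covariate 1) after matching: {abs(t_mean - c_mean) / pooled_sd:.3f}")
```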
Ethical approval
The survey protocol was approved by the Health Research Development Committee (HRDC) of the Ministry of Health and Wellness, the local institutional review board of Botswana (REF Number HPDME 13/18/1). Informed consent was collected electronically from respondents completing the survey. Participation was voluntary and those who consented were allowed to exit the survey at any time by simply closing the browser page.
Association between social media and perceived effectiveness of face masks
Table 2 illustrates the unadjusted and adjusted relationship between social media as the main COVID-19 information source and perceived effectiveness of face masks. In unadjusted analysis, respondents who used social media as their main COVID-19 information source had greater odds of agreeing that face masks were effective compared with those who did not (OR 1.29, 95% CI: 1.01-1.65). This association remained the same when adjusted for age, sex, country, level of education, confidence in government response, attitude towards COVID-19 and alternative main sources of information on COVID-19 (aOR 1.33, 95% CI: 1.01-1.77).
PS matching analysis
In sensitivity analysis using PS matching, we achieved considerable improvements in the balance of covariates between exposed and unexposed in the PS matched sample (all SMD < 0.2) compared with the main sample. Table 3 describes the PS-adjusted relationship between using social media as the main source of COVID-19 information and perceived effectiveness of face masks. Findings were similar to those obtained in the main analysis (aOR: 1.44, 95% CI: 1.04, 2.00).
Discussion
In this study, we found that over half of respondents used social media as their main source of information on COVID-19 and most respondents perceived face masks to be effective as an NPI for preventing COVID-19. We also found that respondents using social media as their main source of information on COVID-19 had 33% (95% CI: 1-77%) greater odds of perceiving face masks as being effective in preventing COVID-19. This association was significant in the main analysis, and remained significant in sensitivity analysis using PS matching methods to ensure covariate balance between the exposed and control groups.
Findings from this study agree with emerging findings from Africa on the perceived effectiveness of face mask use in preventing COVID-19. For example, a study in Uganda found that over 80% of people perceived face masks to be effective in preventing COVID-19 infections (30). Our findings also support studies suggesting positive associations between information seeking on social media and various aspects of face mask use, including perceived effectiveness. A study in China linked information seeking on social media with perceived effectiveness of and compliance with face mask use (36). Another study assessing content from Twitter related to face masks revealed that clusters of conversations were facilitated by influential accounts run by citizens, politicians and popular culture figures (22). These conversations commonly encouraged the public to wear masks. Further, a study in the United States (US) described personal stories of loss from COVID-19 reported on social media as a motivation to support community use of face masks to prevent COVID-19 (5). Our study provides evidence of the association between the use of social media as the main COVID-19 information source and perceived effectiveness of face masks in preventing disease spread, especially in the sub-Saharan context. Despite the obvious limitations in available evidence, plausible causal explanations for these associations have been proffered. It has been suggested that the personalisation and catchiness of information-sharing experiences may explain the association (5). The emotional nature of the messaging in such contexts as exist on social media may also elicit feelings of worry, which have been described as a mediating factor for preventive behaviours such as compliance with face masks (36). However, this mechanism has been disputed, as beliefs about the consequences and benefits of face masks may be more important than exposure to and belief in misinformation (37).
Our findings support the role of social media as an effective COVID-19 risk communication channel. As successive COVID-19 waves exert their toll on already vulnerable health systems in sub-Saharan Africa, public health interventions leveraging social media may be useful, especially in urban centres where crowding and reliance on subsistence earnings may mean that lockdown measures and stay-at-home orders are not feasible for extended periods (38). However, health authorities must be aware of the debate about ongoing misinformation via the same channels (13). As has been described, suboptimal regulation, propagation of misinformation based on popularity metrics by social media algorithms, and unwitting social media users often spread harmful messages that are often politically motivated (8,17,18,39). Concerted efforts by media, scientific organisations and government institutions are therefore needed to leverage the availability of social media in disseminating important information on the effectiveness of NPIs for COVID-19, including face masks (39), and the benefits of compliance (37).
Future research will be necessary to explore whether perceived effectiveness of face masks ultimately results in compliance with mask use. Research will also be necessary to fully understand the mechanisms by which using social media as the main source of COVID-19 information results in perceived effectiveness of face mask use in preventing infections. Efforts should also seek to understand differences in this relationship between various social media platforms. Such information will be useful to inform replicable public health promotion strategies via the social media platforms best positioned to influence people's behaviour and achieve improved health outcomes.
The strengths of our study findings are inherent in the consistency of the observed association in sensitivity analysis using PS matching methods. The association remained significant in both analyses. Moreover, to the best of our knowledge, this is the first study assessing the relationship between social media as a main source of COVID-19 information and perceived effectiveness of face masks in sub-Saharan Africa. This is despite widespread debate about the role of social media misinformation, especially in the context of COVID-19 risk communication. However, our study must also be viewed in light of its limitations. First, our route of participant recruitment implies that the study respondents may not necessarily be representative of the study population of interest. For example, with our online recruitment strategy, respondents included were more likely to be those who regularly access online services like social media, and 91% of our sample had tertiary-level education, whereas Nigeria, for instance, had an adult literacy rate of only 62% in 2018 (26). However, given that our findings remained consistent in PS analyses, where we attempted to account for potential selection bias, we remain confident in our findings. Further, we only recruited 86.3% of our intended sample size, and this may have limited the power of our study. We posit that existing fears about government involvement with such types of research may have discouraged participation. Finally, while we considered it expedient to dichotomise our outcome variable for ease of interpretation and applicability to policy discourse, we realise that this may result in loss of statistical information (40).
Conclusion
In this study of respondents in six sub-Saharan African countries, we found that people who used social media as their main COVID-19 information source were more likely to perceive face masks as effective in preventing COVID-19 spread, and this association was statistically significant. With current fears of more deadly waves of infection in the subcontinent, health ministries and agencies may leverage social media to strengthen health promotion messaging on the effectiveness of face masks, with a view to promoting widespread mask use.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors for this research. I.I is supported by the | 2022-01-28T06:17:11.100Z | 2022-01-27T00:00:00.000 | {
"year": 2022,
"sha1": "a453639e27b09b9f75847234c18064739235be1b",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/17579759211065489",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "9cc447c65d27f623203e16fa58f42b67d05bba55",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265321462 | pes2o/s2orc | v3-fos-license | Transition and continuity of care after hospital discharge for COVID-19 survivors
ABSTRACT Objective: To assess care transition quality and compare it with the clinical characteristics and continuity of care after hospital discharge of COVID-19 survivors. Method: This is a descriptive, observational and cross-sectional study, carried out with 300 patients with COVID-19 who were discharged from a hospital in southern Brazil. The Care Transitions Measure (CTM-15) and a question guide about symptoms, difficulties and use of health services after discharge were used. Student's t-test and Pearson and Spearman correlations were used. Results: The mean score for care transition quality was 74.2 (±18.2). Factors associated with higher quality were receiving care in intensive care (p = 0.001), using non-invasive mechanical ventilation (p = 0.05), using vasopressors (p = 0.027) and having an appointment at the hospital after discharge (p = 0.014). Positively correlated factors were length of stay (p = 0.017), and negatively correlated factors were post-discharge symptoms of fatigue (p = 0.001), weakness (p = 0.008), difficulty doing moderate activities (p = 0.003) and how difficult recovery was (p = 0.003). Conclusion: Most participants had a satisfactory perception of care transition. However, aspects such as care plans, referrals and follow-up after hospital discharge require improvements.
INTRODUCTION
The COVID-19 pandemic, caused by the SARS-CoV-2 virus, impacted humanity on a massive scale, as it advanced quickly and lethally and highlighted the fragility of health systems in several countries, with epidemiological, social, economic, political and cultural impacts (1). Brazil is one of the countries with the highest number of infected people in the world; as of January 2023, there were 36,768,677 confirmed cases and 696,603 deaths (2).
Although the majority of patients infected with SARS-CoV-2 are asymptomatic or present mild symptoms, such as fever, rhinorrhea and cough, and recover without the need for hospital admission, some may progress to serious clinical complications, involving the pulmonary, neurological, cardiovascular and urinary systems, among others (3). In Brazil, a study identified a COVID-19 hospital admission rate of around 6%, with significant variation across the different phases of the pandemic. Among patients admitted to hospital, 20% require care in the Intensive Care Unit (ICU) (4).
While efforts are expended in hospitals to save lives, little attention has been paid to the care needs of survivors returning home (5). These patients are potential candidates for developing post-intensive care syndrome (PICS) and decreased health-related quality of life (6). Many need to deal with their comorbidities (4) and may present, during recovery at home, complications related to the disease itself, the decompensation of previous morbidities and the treatment instituted (7). Furthermore, several consequences persist after discharge, such as fatigue, weakness, dyspnea, neuropathy/myopathy, anxiety and depression (3,8).
Discharge from hospital to home is a period of risk for patients, who must deal with new health problems and changes in the care plan, and who face adverse events, medication errors, difficulties in scheduling post-discharge appointments and examinations, readmissions and use of emergency services (9). In the context of a pandemic, with social isolation, overcrowded health services and service restrictions, patients may present different post-discharge needs (10).
Therefore, care transition actions are important to ensure continuity of care for COVID-19 survivors, in order to contribute to the physical, cognitive and psychological recovery and quality of life of affected patients, avoiding readmission in periods when hospitals are overcrowded (7). Nurses are central professionals in conducting care transition and managing hospital discharge, as they can enable continuity of care and contribute to comprehensive care (5,11).
However, carrying out care transition actions is a complex process, even in the best of circumstances in hospital institutions (5). In the context of a pandemic, the challenges are exacerbated, as services are forced to review dehospitalization processes to reduce hospital stay time, increase bed turnover and reduce hospital overcrowding.
Although care transition is an internationally explored topic, the literature is still emerging (12), with a lack of studies that specifically deal with patients with COVID-19 in Brazil. Therefore, this study aims to assess care transition quality and compare it with clinical characteristics and continuity of care after hospital discharge of COVID-19 survivors.
Study design
This is a cross-sectional study, in which STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) guidelines were followed.
Place
The study was carried out from February to November 2021 at a large general university hospital, a reference centre for highly complex care for patients with COVID-19 in southern Brazil.
Population and selection criteria
Patients aged 18 or over, who remained admitted to hospital in an inpatient unit for a minimum period of 48 hours, with a confirmed diagnosis of COVID-19 and discharged from the hospital to their home, were included. Patients who did not live in the city of Porto Alegre and the metropolitan region and those who remained admitted to hospital only in the emergency department were excluded. During data collection, if patients had cognitive or communication deficits that prevented them from responding to the survey, caregivers who accompanied the discharge process and recovery at home could be interviewed as a substitute respondent (proxy informant), as carried out in other studies (13,14).
Sample definition
The sample calculation was performed using WinPEPI (Programs for Epidemiologists for Windows) version 11.43. Considering an estimated population of 1,250 patients, obtained from the weekly mean of hospital admissions for COVID-19 at the hospital studied, a 95% confidence level and a 5% margin of error, a minimum total of 295 participants was obtained. During the data collection period, 729 patients who met the inclusion and exclusion criteria were identified based on weekly reports from the computerized hospital management system. Of these, 353 (48.4%) did not respond to telephone contact after three attempts on different days and shifts of the same week; 20 (2.7%) did not agree to participate in the study; 54 (7.4%) were readmitted at the time of telephone contact; one (0.1%) was institutionalized; and one (0.1%) died after discharge. In total, 250 (83.3%) patients and 50 (16.7%) caregivers responded to the survey.
Data collection
Data collection was carried out in two stages. The first stage took place from February to October 2021 through telephone contacts, 7 to 14 days after patients were discharged from the hospital. The Care Transitions Measure (CTM-15) was used, an instrument developed in the United States to assess care transition quality from patients' and caregivers' perspective (15), which was adapted and validated for use in Brazil (14). It consists of 15 items, organized into four factors: Health management preparation; Medication understanding; Important preferences; and Care plan (14). Answer options are arranged on a Likert-type scale, in which a score is assigned according to participants' response, as follows: totally disagree = 1 point; disagree = 2 points; agree = 3 points; totally agree = 4 points. There is also an option, "do not know/do not remember/not applicable", which does not receive a score, as it is not included in the calculation of the final score. To calculate the mean score, according to the authors of the instrument, a formula is applied that transforms the results into scores from 0 to 100; the higher the score obtained, the better the care transition (15). CTM-15 has been extensively tested and has proven to be reliable, accurate and valid for its purpose (9,12,14). In addition, questions were asked about symptoms, difficulties and use of health services after hospital discharge, following a structured script drawn up based on the literature (1,5,7) and the authors' experience. It is noteworthy that, as this is an emerging topic, there is no validated questionnaire to identify symptoms and continuity of care for COVID-19 patients post-discharge. Therefore, a pilot study was carried out with 10 patients, who were not included in the sample. The script is structured and organized as follows: 13 questions about COVID-19 symptoms and difficulties after hospital discharge on a frequency scale varying from "all the time" to "none of the time"; a question about the perceived difficulty of recovery at home on a scale with answer options ranging from "not at all" to "extremely"; and seven questions about use of health services after discharge, with answer options of "yes", "no" and "do not know/do not remember". The items dealing with symptoms were scored on a scale of 0 to 4, as follows: none of the time = 0 points; a small part of the time = 1 point; some of the time = 2 points; most of the time = 3 points; and all the time = 4 points. The item asking how difficult recovery was scored as follows: not at all = 0; a little = 1; moderately = 2; enough = 3; extremely = 4. The remaining items about use of health services were coded: yes = 1; no = 2; and do not know/do not remember = 99.
The second stage of data collection was carried out in November 2021. Patients' electronic medical records were consulted to identify sample characterization data, such as sex, age, marital status, race/color, education, comorbidities (according to ICD-10), length of hospital stay, ICU admission, use of mechanical ventilation, vasopressors and/or dialysis, as well as the occurrence of emergency service use or readmission within 30 days after hospital discharge. Collecting these data after the telephone contact is justified by the need for a 30-day interval, so that readmission data could be obtained for the last participant interviewed.
Data analysis and treatment
Data were exported to an Excel spreadsheet and analyzed using the Statistical Package for the Social Sciences (SPSS) version 21.0. Regarding data analysis referring to CTM-15, the simple mean response for each item was calculated, as well as the mean of the total scale and by factor, using the formula indicated by the authors that transforms the means into scores from 0 to 100 (15).
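For readers unfamiliar with CTM scoring, a minimal sketch is given below. It assumes the commonly published linear transformation, score = ((mean of answered items - 1) / 3) x 100, with unanswered or "not applicable" items excluded from the mean; the example responses are invented.

```python
# Minimal sketch of CTM-15 scoring, assuming the published linear
# transformation of the item mean onto a 0-100 scale.
def ctm_score(responses):
    """responses: list of item scores 1-4, with None for 'do not know'."""
    answered = [r for r in responses if r is not None]
    mean = sum(answered) / len(answered)
    return (mean - 1) / 3 * 100  # 1 -> 0 (worst), 4 -> 100 (best)

example = [4, 3, 3, 4, 2, None, 3, 4, 4, 3, 2, 1, 3, 4, 3]
print(f"CTM-15 score: {ctm_score(example):.1f}")
```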
Regarding sample characterization, symptoms and continuity of care after hospital discharge, quantitative variables were described by mean and standard deviation or median and interquartile range. Categorical variables were described by absolute and relative frequencies.
Variable normality was assessed using the Kolmogorov-Smirnov test. Student's t-test was used to compare means, and Pearson or Spearman correlation was used for the association between numerical and ordinal variables. The significance level adopted was 5% (p < 0.05).
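The tests named above can be sketched in a few lines with scipy; the data below are synthetic and only mimic the shape of the study's variables.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the tests described above on synthetic data.
rng = np.random.default_rng(2)
ctm_patients = rng.normal(75, 18, 250)    # CTM-15 scores, patients
ctm_caregivers = rng.normal(68, 18, 50)   # CTM-15 scores, caregivers
fatigue = rng.integers(0, 5, 300)         # ordinal symptom score, 0-4
ctm_all = np.concatenate([ctm_patients, ctm_caregivers])

# Normality, mean comparison, and rank correlation, respectively.
print(stats.kstest(ctm_all, "norm", args=(ctm_all.mean(), ctm_all.std())))
print(stats.ttest_ind(ctm_patients, ctm_caregivers))
print(stats.spearmanr(ctm_all, fatigue))
```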
Ethical aspects
The research was approved by the institution's Research Ethics Committee, under Opinion 4,462,671/2020, in accordance with Resolution 466/2012. An Informed Consent Form (ICF) was used, with verbal consent from participants at the time of telephone contact for data collection and an electronic copy of the ICF sent by text message.
RESULTS
In this study, it was observed that 168 (56%) patients were men, 135 (45%) were married, 247 (82.3%) were white, 99 (33%) had completed high school, and the mean age was 51.93 years. Regarding care transition quality at discharge, the mean CTM-15 score was 74.2 (±18.2). Factor 1 (Health management preparation) obtained a mean score of 77.3 (±19.0); factor 2 (Medication understanding), 76.8 (±21.0); factor 3 (Important preferences), 76.1 (±18.3); and factor 4 (Care plan), 58.4 (±30.9). The item with the highest score was 14 (Understands how to take medications) and the lowest was 12 (Had written list of appointments and tests) (Table 1). Regarding continuity of care, it was identified that 192 (64%) patients had some contact (via phone, message, email or home visit) with a health professional after discharge. However, only 16 (5.3%) received a visit from a community health worker at home; 46 (15.3%) received care at the primary care reference unit; 5 (1.7%) needed emergency care; 36 (12.1%) had an appointment at the hospital's outpatient clinic; and 30 (10%) had an appointment at a clinic or private office.
Furthermore, regarding recovery at home after hospital discharge, it was found to be a little difficult for 85 (28.4%) patients, moderately difficult for 91 (30.4%), quite difficult for 24 (8%) and extremely difficult for 8 (2.7%). Table 2 shows the occurrence of symptoms after discharge, the most common being fear of reinfection, difficulty climbing several flights of stairs, and difficulty carrying out moderate activities such as moving a table, using a vacuum cleaner or sweeping the house.
Regarding the bivariate analysis of clinical variables in relation to the mean CTM-15 score, it was identified that caregivers had a worse perception of quality in care transition, with a mean CTM-15 score lower than that of patients (Table 3). Furthermore, receiving care in the ICU, using non-invasive mechanical ventilation and using vasopressors were associated with a higher mean CTM-15 score. It was also observed that the longer the length of stay, the higher the CTM-15 score.
In the bivariate analysis of variables relating to continuity of care, there was a weak negative correlation between the CTM-15 score and the variables fatigue, weakness, difficulty in carrying out moderate activities and how difficult post-discharge recovery was (Table 4). Furthermore, having a hospital appointment after discharge was associated with a higher CTM score.
DISCUSSION
This study is a pioneer in assessing care transition quality at hospital discharge for COVID-19 survivors and comparing it with clinical and continuity of care characteristics. The results reflect patients' and caregivers' opinion about the transition made in the hospital to return home and the difficulties in following up care in health services in the context of a pandemic.
Findings regarding sex, age and comorbidities are also consistent with other studies with COVID-19 survivors described in the literature (3,16). ICU admission, use of mechanical ventilation and length of hospital stay rates were higher than the values found in São Paulo (3) and the United States (10). However, in addition to the differences in the social, economic and health resources of these locations, the studies mentioned collected their data in 2020, while this research was developed in 2021, a period in which the Gamma variant, associated with a higher hospital admission rate, and the Delta variant, associated with a higher rate of ICU care, circulated (4).
The findings presented in this study showed that the main symptoms after hospital discharge were difficulty climbing several flights of stairs and difficulty performing moderate activities. Another study also highlighted persistent post-COVID-19 complications, such as physical exhaustion, dyspnea and fatigue, joint pain, muscle pain or weakness, headache, sleep disturbances, dizziness, anxiety and depression (1). Therefore, it is essential to be alert to the health situation of COVID-19 survivors so that conditions harmful to their recovery do not go unaddressed.
Furthermore, the percentage of patients who had hospital readmission is lower than that found in investigations with COVID-19 survivors (3,10). However, it is important to highlight that 7.4% of the individuals contacted in this study were readmitted to the hospital during the data collection period and were excluded, which may underestimate the readmission rate identified here.
It was also evident that the majority of participants had a positive perception of care transition at hospital discharge, according to the CTM-15. Although the instrument does not have a predefined cut-off point, values above 70 are considered satisfactory (13). Therefore, the results of this research indicate a satisfactory care transition quality for COVID-19 survivors, corroborating Brazilian studies with cancer patients (12) that found similar CTM-15 scores. Higher values were found with pediatric patients (17) and lower values with older adults (18). The factors that deal with health management preparation, medication understanding and important preferences obtained satisfactory scores, with little difference between the means. The literature points out that these aspects are fundamental for a safe and effective care transition (5,9,11). An evidence-based model was recommended for use with COVID-19 patients upon discharge from hospital to home, including several actions that ensure patient-centered care, with preferences and goals taken into account in the care plan, as well as health education of patients and caregivers for symptom management and treatment compliance (5). Despite the limitations imposed by the pandemic, such as restricted visits, which prevented the provision of guidance on care to caregivers throughout hospital admission, health teams used communication and information technologies to develop discharge education actions, such as online appointments, educational videos and podcasts, and video calls, among others (19).
On the other hand, the factor that deals with the care plan and referrals after hospital discharge was assessed as unsatisfactory by participants. Items 12 (Had written list of appointments and tests) and 7 (Had written care plan) received the lowest means. A similar result was found in a study with older adults (18). Lack of discharge planning, absence of protocols or systematized counter-referral instruments, and little coordination and communication between services are weaknesses reported in Brazil (20,21). Furthermore, many hospitals do not have a care transition program or institutional documents that guide discharge plan preparation (20,22). Discharge planning activities therefore depend on the individual efforts of nurses and do not happen in the context of a systematized plan (21). The need for strategies to overcome this gap and provide continuity of care after discharge is thus reinforced.
It is noteworthy that, in this study, caregivers had a lower CTM-15 score than patients, indicating a worse perception of care transition quality. In another care context, caregivers of patients with stroke sequelae had difficulties with post-discharge demands, which were related to weaknesses in care transition (23). Weak transitions are associated with greater burden on caregivers (24). Therefore, it is essential to include family members as early as possible in discharge planning, in order to improve care transition quality.
The literature is clear in stating that people with compromised health status may have a worse care transition quality (18), considering that patients admitted to the ICU and with a longer hospital stay have worsened health status and quality of life three months after discharge (3). However, this study identified that a better CTM-15 score was associated with being treated in the ICU, using non-invasive mechanical ventilation and using vasopressors, and was correlated with length of stay, indicating that critical patients with longer hospital admissions have a better perception of care transition quality. This may be explained by the greater time dedicated by health professionals to preparing the discharge of patients with long hospital admissions, as they require greater care and attention.
On the other hand, it was found that patients with more symptoms of fatigue, weakness and difficulty performing moderate activities had lower quality scores. Furthermore, the more difficult the self-reported post-discharge recovery, the lower the CTM-15 score. These data suggest that patients with post-discharge difficulties have a worse care transition quality. Therefore, outpatient follow-up after discharge is important to identify difficulties and monitor treatment and home care (25).
It is noteworthy that in this study few patients received care at the primary care reference unit or consulted at the hospital outpatient clinic, a clinic or a private office after discharge. This demonstrates the need to improve elements of care transition, such as articulation and communication between the hospital and other services in the Health Care Network, in order to promote continuity of patient care. In a study in the United States, it was identified that only 26.8% and 1.6% of COVID-19 survivors had a scheduled appointment in primary care and with a specialist, respectively, at the time of discharge (10).
In this study, it was observed that those with a hospital appointment after discharge had a better perception of care transition quality. Another investigation found that higher care transition scores were associated with higher rates of follow-up appointments in primary care (26). Therefore, despite numerous difficulties in carrying out post-discharge follow-up, it is recognized that follow-up through telephone contact, home visits and/or appointments in primary care can avoid hospital readmission and emergency care (27,28).
However, this study has some limitations. First, it was carried out in a single hospital in southern Brazil and therefore cannot represent the Brazilian reality as a whole. Second, 48.4% of eligible patients did not respond to telephone contact, a problem reported to be frequent in COVID-19 patients and inherently associated with selection bias (3). Third, it is important to consider that the results of care transition assessment using the CTM-15 may have been influenced by participants' feelings of gratitude toward the health service (17). Finally, the questions about symptoms, difficulties and use of health services after hospital discharge are not part of a validated instrument.
In relation to advances in nursing, this study presents findings that point out gaps in the care transition process, such as discharge plan elaboration and primary and secondary care follow-up, which can be strategically worked on by researchers, managers and nurses, in order to advance care transition in Brazil.
It is recommended to develop strategies for implementing discharge planning, using a systematized protocol or instrument to prepare an individualized, written discharge plan. The healthcare team should involve patients and their caregivers in developing the plan, providing written instructions and guiding them through necessary referrals for home care. Furthermore, follow-up by telephone or visit is suggested within two weeks after discharge to clarify doubts and monitor difficulties in recovery at home, contributing to care transition.
Table 3 - Bivariate analysis of clinical variables in relation to the mean Care Transitions Measure (CTM-15) score - Porto Alegre, RS, Brazil, 2021. | 2023-11-22T16:07:03.000Z | 2023-11-20T00:00:00.000 | {
"year": 2023,
"sha1": "ee3c1b8b543cbe56153f25c5fb046dab627c9c48",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/reeusp/a/KphDqNNBH6zYY3nKYSpjgbk/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a330898dbdb9f5f2a7e8f2aa343c7f4ffdc2e91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244550568 | pes2o/s2orc | v3-fos-license | Predictors of Death from Complicated Severe Acute Malnutrition in East Ethiopia: Survival Analysis
Background: Severe acute malnutrition (SAM) is still the leading cause of global child morbidity and mortality, with a greater burden in sub-Saharan Africa. Facility-based treatment of SAM demands critical care for improved outcomes and survival of children. However, predictors of time to death among children with SAM need to be understood for effective interventions. Objective: To assess the predictors of death from complicated severe acute malnutrition among admitted children treated in East Ethiopia. Methods: A 31-month retrospective cohort study was conducted among a total of 665 under-five children admitted with complicated SAM to Dilchora hospital, eastern Ethiopia. The data were extracted from the patient register and medical charts using the Kobo tool. Life tables and survival and hazard curves were plotted. Kaplan-Meier estimates with log-rank tests were used to estimate and compare mean survival times. Bivariable and multivariable Cox proportional hazards models were used to identify predictors of time to death. Crude and adjusted hazard ratios with 95% confidence intervals and p-values were reported. Results: A total of 665 full medical charts were reviewed, and a total of 60 deaths (9%; 95% CI: 6.8-11.2%) was observed, most occurring during the first two weeks of admission, while 74 (11%) children were cured and 449 (68%) recovered (stabilized and transferred to outpatient care). Having a good appetite (AHR=0.15; 95% CI: 0.64-0.33), pneumonia (AHR=2.46; 95% CI: 1.436-4.22), diarrhea (AHR=2.16; 95% CI: 1.16-4.06), tuberculosis (AHR=2.86; 95% CI: 1.08-7.63), and having a nasogastric tube inserted (AHR=2.33; 95% CI: 1.15-4.72) were significant predictors of time to death among SAM children. Conclusion: There is unacceptably high under-five mortality due to SAM, predicted by co-morbidities (pneumonia, diarrhea, and tuberculosis), medical complications, and nasogastric tube feeding.
Introduction
Acute malnutrition is characterized by a recent failure to receive adequate nutrition due to episodes of diarrhea and other acute illnesses. 1,2 It can be classified as moderate acute malnutrition (MAM) or severe acute malnutrition (SAM) using weight-for-height (WFH) or mid-upper arm circumference (MUAC) cutoff points. 3,4 SAM is diagnosed by a low MUAC (<11.5 cm) and/or a WFH Z-score below −3. It is the most extreme and visible form of undernutrition, characterized by profound wasting or edema, loss of appetite, comorbidities, and complications. It requires timely and appropriate management for the child to survive. 1,3,5 Globally, an estimated 49 million (7.3%) children had acute malnutrition, with 16 million children being victims of SAM, accounting for an estimated three million child deaths. The continent of Africa bears 28% of the global malnutrition burden. 6 An estimated 3 million child deaths are attributable to SAM, and over 50% of child deaths occurring in developing countries are attributable to undernutrition in general. 7 A forecast in 2014 showed that an estimated 28.8 million children were victims of SAM, projected to decrease to 21.7 million by 2030. Despite a decline in the burden, an additional 8 million new cases occur in sub-Saharan Africa each year. 8 In sub-Saharan Africa, SAM affects about 3% of under-five children, with more than 400,000 child deaths each year. 9 In addition, an estimated 6.9 million child deaths are attributable to malnutrition in low- and middle-income countries. 10 In Ethiopia, where the under-five child mortality rate is high (57 child deaths per 1000), malnutrition is a basic underlying cause, accounting for an estimated 57% of child mortality. 11 Malnutrition, primarily due to SAM, costs approximately 16.5% of the national GDP. 12 An estimated 9-40% of children are affected by acute malnutrition. [13][14][15][16][17][18] SAM is the third leading cause of mortality, accounting for 8.1% of under-five deaths. 19 It was also estimated that 4.8 million children needed emergency nutrition support in 2019, which could aggravate the situation in Ethiopia. 20 Children with complicated SAM need to be treated as inpatients to manage complications and improve child survival, since severe infections including pneumonia, diarrhea, sepsis, or HIV carry a case fatality of up to 40%. Children with complicated SAM have a high ongoing risk of mortality throughout treatment. 21 The United Nations Children's Fund (UNICEF) reported that in Ethiopia, 152,413 cases of SAM were treated from January to May 2021. 22 SAM could threaten the futures of millions of children worldwide. 23,24 In Dire Dawa, an estimated 4.2% of children are affected by SAM, which is above the national average of 3%. 13 In addition, the border with neighboring countries in the Horn of Africa, sporadic conflicts, and population displacement make the area more prone to the adverse consequences of SAM.
The current strategy for Community-based Management of Acute Malnutrition (CMAM) is a focused and holistic approach for better SAM case management. It aims to increase the capacity to manage SAM children properly for a better treatment outcome.
The proper implementation of SAM management has the potential to reduce under-five mortality from 55% to under 20% in Ethiopia. 1,40 However, substantial death rates of 29%, 14 8.4%, 25 14.5%, 26 and 5.8% 27 have been observed in Ethiopia, mainly related to facility readiness, staff capacity, and inter-regional differences in clinical characteristics and severity of malnutrition. 28 Inpatient therapeutic feeding units face many challenges in handling cases of SAM, including limited inpatient capacity, a shortage of skilled hospital staff, late presentation of children, high default rates among children, and the serious risk of cross-infection for immune-suppressed children, all of which increase the mortality rate. 29 A mortality rate below 10% is considered acceptable in humanitarian situations. 30 However, this target is still not achieved in many developing countries, for a variety of reasons. 3 In Ethiopia, the majority of children with SAM who present to therapeutic feeding centers arrive with many medical complications, and many die despite treatment. 31 However, the major factors affecting time to death are not well understood, particularly in Dire Dawa, and need to be identified for targeted interventions to improve child survival and treatment success. 38 The purpose of this study was therefore to identify the predictors of time to death from complicated SAM among under-five children managed at stabilization centers in Dire Dawa.
Study area and design
This retrospective cohort study was conducted in the Dire Dawa city administration in Eastern Ethiopia. In the city, there were two public hospitals providing care for complicated SAM patients. However, due to the COVID-19 pandemic, one of the hospitals providing SAM care was converted into a COVID-19 treatment center. Thus, the current study was conducted in Dilchora Hospital's SAM unit (stabilization center), where the majority of SAM cases are treated. Dire Dawa is one of the two city administrations in Ethiopia and is located 515 km from Addis Ababa, the capital city. The city had an estimated total population of more than 506,936 as of 2019/20, with the majority of residents living in the rural part. There are two governmental hospitals, four private hospitals, five higher clinics, twelve medium (private) clinics, 15 health centers, and 34 health posts, with 100% health service coverage. 32 Dilchora hospital gives inpatient management for complicated SAM children (13 beds) in a stabilization center, in accordance with the national SAM management guideline. The data were retrieved from records from July 1, 2020, to August 30, 2020.
Study population and eligibility criteria
The results of this study are applicable to all children aged 0-59 months with complicated SAM admitted to the Dilchora Hospital stabilization center. All eligible records of children aged 0-59 months with SAM admitted to the center from September 2017 to March 2020 formed the study population (reflecting the most recent burden of the problem). Records of SAM children with missing treatment outcomes or missing admission and discharge dates were not included in the study, as these are needed to determine the primary outcomes.
Sample size determination
The minimum sample size for detecting predictors of time to death from complicated SAM was calculated using Stata version 14 (StataCorp, Stata 14.0 for Windows) with its power and sample-size calculation module, based on a comparison of survival in a Cox model against a reference slope. A sample size was calculated for each predictor, and the maximum calculated sample was taken. Taking anemia as a predictor of time to death (AHR = 1.36), a ratio of exposed to non-exposed of 1, a probability of death of 0.29, 31 a significance level of 0.05, and 80% power, the minimum sample size to identify predictors of time to death among SAM children was 666.
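To make the calculation concrete, below is a minimal Python sketch of Schoenfeld's approximation for a binary covariate in a two-group Cox model, the formula family underlying Stata's power analysis for Cox regression. The exact Stata options and any additional adjustments the authors used are not reported in this passage, so the sketch is illustrative only and need not reproduce the figure of 666.

```python
import math
from scipy.stats import norm

def cox_sample_size(hr, p_event, alpha=0.05, power=0.80, p_exposed=0.5):
    """Total N for a binary covariate in a Cox model (Schoenfeld approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    # Required number of events, then scale up by the overall event probability.
    events = (z_a + z_b) ** 2 / (p_exposed * (1 - p_exposed) * math.log(hr) ** 2)
    return math.ceil(events / p_event)

# Inputs quoted above: AHR = 1.36 for anemia, P(death) = 0.29, 1:1 exposure ratio.
print(cox_sample_size(hr=1.36, p_event=0.29))
```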
Sampling technique
Simple random sampling was used: the serial numbers of SAM children in the SAM registry were used to generate a table of random numbers. The unique medical record number corresponding to each selected random serial number was then identified, and the medical charts of the randomly selected children were retrieved from the card room for data collection.
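A minimal Python sketch of this sampling step follows; the registry size and the serial-to-MRN lookup are assumptions made for illustration only.

```python
import random

random.seed(2020)                    # fixed seed so the draw is reproducible
REGISTRY_SIZE = 1200                 # assumed registry size (not reported)
SAMPLE_SIZE = 666

serials = random.sample(range(1, REGISTRY_SIZE + 1), k=SAMPLE_SIZE)
# The registry would map serial numbers to medical record numbers (MRNs);
# this dict is a hypothetical stand-in for that lookup.
serial_to_mrn = {s: f"MRN-{s:05d}" for s in range(1, REGISTRY_SIZE + 1)}
charts_to_pull = sorted(serial_to_mrn[s] for s in serials)
print(charts_to_pull[:3])
```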
Data collection method
A cross-checked data abstraction format prepared in line with the SAM registry and the medical charts of children was used to collect data. The data were collected from the medical records and the SAM treatment registry through cross-validation. Graduate nurses and health officers collected the data.
Data quality control
To assure data quality, the checklist was cross-checked against the SAM register and medical charts. Data collectors were trained for one day on how to extract the data from the patient registry. During training, data collectors practiced data extraction on at least five medical charts under the supervision of the researchers. Daily checkups and feedback on the appropriateness, completeness, and consistency of the collected data were given by the investigators and supervisors, based on a random sample of the collected data. The data were entered into the Kobo tool for controlled, quality data collection.
Variables of the study
The dependent variable was time to death from SAM among children. Meanwhile, the independent variables were demographic characteristics (age, sex of the child, place of residence), clinical conditions (vomiting, dehydration, loss of appetite, and hypothermia), presence of nutritional edema, co-morbidities (pneumonia, HIV status, diarrhea, anemia, malaria, tuberculosis, and hypothermia), routine medication intake (intravenous (IV) fluid intake, IV antibiotic treatment, blood transfusion, folic acid, vitamin A supplementation, deworming, and presence of nasogastric tube), and level of anthropometric deficits at admission.
Operational definitions
In this study, cure from SAM was achieved when the SAM child reached the discharge criteria: weight-for-height/length ≥ −2 Z-scores with no edema for at least 2 weeks, or MUAC ≥ 12.5 cm with no edema for at least 2 weeks, without any acute medical complications. Recovered (stabilized) was defined as a child treated for acute medical complications at a stabilization center and transferred to OTP (not yet cured) for continued SAM treatment. 30,33 Censored observations were defined as those SAM children who defaulted, transferred, disappeared, recovered, or did not respond, in whom the primary outcome of interest (death) was not observed. Anemia was defined as a hemoglobin level below 11 g/dl (hematocrit below 33%) at admission, 33 while hypothermia was defined as a body temperature below 35.5°C.
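These definitions can be encoded directly. The following is a minimal Python sketch with hypothetical field names, not the authors' actual data pipeline.

```python
def discharge_outcome(whz, muac_cm, days_without_edema,
                      has_complication, transferred_to_otp):
    """Classify discharge per the definitions above; field names are hypothetical."""
    anthropometry_ok = (whz >= -2) or (muac_cm >= 12.5)
    if anthropometry_ok and days_without_edema >= 14 and not has_complication:
        return "cured"
    if transferred_to_otp and not has_complication:
        return "recovered (stabilized)"
    return "still in treatment"

print(discharge_outcome(whz=-1.8, muac_cm=12.0, days_without_edema=15,
                        has_complication=False, transferred_to_otp=False))  # cured
```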
Ethical considerations
An ethical support letter was obtained from the Dire Dawa University, College of Medicine and Health Science research and ethics review committee. The support letters were subsequently taken to the city administration health bureau and the hospital. Written informed consent was obtained from the hospital administration on behalf of the clients, with the assurance that the information collected would be kept confidential and used only for the proposed study. Specific personal identifiers of children were not collected, to protect clients' private health information such as HIV status.
Socio-demographic characteristics of SAM children
In this study, a total of 665 full medical records of children were reviewed, of which more than half, 363 (54.6%), were of males. A total of 210 (31.6%) SAM children were aged below twelve months, and the mean age was 20 (±15) months. Regarding residence, the majority of SAM children treated in the SAM unit, 385 (57.9%), were from rural areas.
Survival patterns of SAM children
Concerning the treatment outcomes of SAM children, the majority (67.5%) recovered. About 11.1%, 2%, and 10.4% of SAM children were cured, self-discharged without clinical improvement, and defaulted in the course of SAM treatment, respectively. A total of 60 deaths (9%; 95% CI: 6.8-11.2%) were observed among SAM children during treatment at the stabilization center within the hospital (Figure 1). The overall cumulative incidence density for mortality was 0.022. A life table, survival chart, and Kaplan-Meier tests were used to explore and understand the survival patterns of SAM children. The vast majority of deaths, 78.3%, occurred within ten days of hospital admission, a period in which medical complications and mismanagement are common (Table 2). SAM children with a failed appetite test at admission had a significantly lower mean survival time (26.9 days) than those with a passed appetite test (43.6 days) (log-rank p = 0.0001). In addition, children with a diagnosis of tuberculosis had a significantly lower mean survival time than their counterparts. Children with pneumonia and diarrhea at admission had significantly shorter mean survival times than the others. Children who received routine interventions (deworming and NG tube) had significantly shorter mean survival times than their counterparts.
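As an illustration of this analysis step, here is a minimal sketch using the Python lifelines package (an assumption; the software used for the survival curves is not restated in this passage). The DataFrame and its column names (time, died, appetite) are illustrative stand-ins, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative stand-in records: days to death/censoring, event flag, appetite test.
df = pd.DataFrame({
    "time":     [3, 10, 45, 7, 60, 12, 30, 5, 21, 50],
    "died":     [1, 1, 0, 1, 0, 0, 0, 1, 1, 0],
    "appetite": ["failed", "failed", "passed", "failed", "passed",
                 "passed", "passed", "failed", "failed", "passed"],
})
passed = df[df["appetite"] == "passed"]
failed = df[df["appetite"] == "failed"]

kmf = KaplanMeierFitter()
kmf.fit(passed["time"], event_observed=passed["died"], label="appetite passed")
ax = kmf.plot_survival_function()
kmf.fit(failed["time"], event_observed=failed["died"], label="appetite failed")
kmf.plot_survival_function(ax=ax)

result = logrank_test(passed["time"], failed["time"],
                      event_observed_A=passed["died"],
                      event_observed_B=failed["died"])
print(result.p_value)  # the study reports log-rank p = 0.0001 for this contrast
```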
Predictors of time to death among under-five children with SAM
Bivariable Cox proportional hazards regression was done to identify potential predictors of time to death from complicated SAM among children. The risks of death were lower among those over the age of two, males, and children of city dwellers. SAM children diagnosed with dehydration (CHR=2.0; 95% CI: 1.18-3.38) or pneumonia (CHR=2.68; 95% CI: 1.58-4.56) had a significantly increased risk of death from SAM. In addition, children with TB in the course of SAM treatment (CHR=2.85; 95% CI: 1.14-7.14) and those on IV fluid treatment (CHR=3.23; 95% CI: 1.94-5.36) had a significantly increased risk of death. Notes (table): + refers to pedal edema, ++ to edema of the leg, and +++ to generalized body edema.
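A minimal sketch of the Cox proportional hazards step using the Python lifelines package (an assumption; the authors' software is not restated here). The 0/1 predictor columns and the toy data are illustrative, and a small ridge penalty is added only to keep the toy fit stable.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative stand-in data; 0/1 indicator columns mirror predictors named above.
df = pd.DataFrame({
    "time":          [3, 10, 45, 7, 60, 12, 30, 5, 21, 50, 18, 40],
    "died":          [1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0],
    "good_appetite": [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1],
    "pneumonia":     [1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})
cph = CoxPHFitter(penalizer=0.1)  # small ridge penalty for the tiny toy dataset
cph.fit(df, duration_col="time", event_col="died")
print(cph.hazard_ratios_)         # (adjusted) hazard ratios
print(cph.confidence_intervals_)  # 95% CIs on the log-hazard scale
```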
Discussion
The purpose of this study was to identify the predictors of time to death from SAM among children treated in a stabilization center within a hospital in Ethiopia. The findings showed that a high proportion of deaths (9%; 95% CI: 6.8%-11.2%) occurred among admitted children. Although Ethiopia is implementing a standard management protocol for complicated SAM in children, mortality was found to be high. Other similar studies reported 7.6% in eastern Ethiopia 34 and 8.8% in the Wolaita Zone, 35 while our figure was higher than the result from North Ethiopia (3.8%). 36 Higher mortality rates were observed in Yaoundé, Cameroon (15%), 5 Zambia (46%), 37 northern Ethiopia (29%), and southwest Ethiopia (14.5%). 26 The SPHERE minimum standard for humanitarian response sets a target death rate below 10%. 30 However, as this study was conducted in a relatively stable rather than an emergency setting, the mortality incidence found here warrants due attention. This may be related to the frequent occurrence of
medical complications, such as diarrhea, 38 pneumonia, and other conditions that impair nutritional recovery 3,39 and prolong hospitalization, which negatively affects nutrient intake. In addition, since cure is expected at outpatient therapeutic feeding, to which children are transferred from hospitals, the current in-hospital mortality rate may appear increased. This means a significant number of children are dying in the early phase of hospital admission, which should be addressed through improved health care and effective treatment. 40 In this study, the mean survival time was 40.43 days, with most deaths occurring within the first two weeks of admission. This might be because children in the early stages of treatment are more prone to fatal medical complications and to mismanagement that may increase the risk of death. [41][42][43][44] In addition, respiratory complications commonly observed in children, like pneumonia and TB, and loss of appetite, accompanied by comorbidities, decrease child survival and shorten the mean survival time. 33,45 This warrants careful identification, diagnosis, and management of such medical comorbidities in accordance with the national protocol.
A number of factors were found to be important predictors of child survival. A 2.4-fold increased hazard of death was observed among children with a diagnosis of pneumonia, and a fourfold increased risk of death from pneumonia has also been reported. 46 Other studies conducted in Northwest and Northern Ethiopia showed that children with pneumonia had a 29% and 56% lower likelihood of recovering from SAM, respectively, and an increased risk of death. 47,48,51 Furthermore, TB has the potential to increase the risk of death almost threefold (AHR=2.86; 1.08-7.63). Similarly, studies conducted at Sekota Hospital, northern Ethiopia (HR=2.88) 49 and elsewhere (AHR: 1.6) 50 showed threefold and twofold increased risks of death. Frequent infections, particularly respiratory diseases, can limit dietary intake and child oxygenation, which may increase the risk of early death.
It is known that reduced metabolic adaptation in SAM children often masks the typical symptoms of comorbidities. This might lead to atypical presentations in children who have medical complications that are not clinically evident. Such children may, however, present with loss of appetite, an important clinical indicator of underlying medical complications. This study also found that children with loss of appetite had higher hazards of death; one study found that it increases the risk of death by almost threefold (AHR=2.7). 34 Thus, clinicians need to be alert to the possibility of a hidden serious infection and should routinely give presumptive antibiotics as per the guideline. 33,41 In addition, diarrhea was shown to increase the risk of death 2.16-fold (AHR: 2.16; 1.16-4.06) compared to those without diarrhea; previous studies reported a 2.52-fold increase. 34 Furthermore, taking a deworming medication reduced the risk of death by 64% compared to those who did not. 26 The presence of a nasogastric tube indicates that the child is unable to feed consciously. Children with NG tubes are more likely to suffer medical complications and infections that warrant close medical follow-up for a better clinical outcome. In addition, NG tube feeding in the presence of other complications increases a child's risk of death by 26%. 48 This study indicated that a high number of child deaths were observed and identified important factors that determine child survival. However, the findings should be considered in light of some inherent limitations. Since the data are secondary, information for some children was lost and may be related to the outcome, so the current estimate may be underestimated. On the other hand, this study pinpoints important predictors of high mortality using appropriate statistical methods, and this evidence will help the hospital and others focus on the most important areas for improving child survival. Further, evidence on facility readiness and quality of care that may contribute to an increased risk of mortality is needed, in addition to the clinical presentation of SAM children.
Conclusions and recommendations
In general, a high child mortality rate and low child survival were found among SAM children admitted to the hospital. Loss of appetite at admission, the presence of medical complications (pneumonia, TB, and diarrhea), not receiving deworming tablets, and being on NG tube feeding were important predictors of early death among SAM children. Implementation research on the level of implementation, quality of care, health professionals' readiness and skills, and other infrastructure should be considered for a better understanding of the causes of higher mortality, beyond the medical conditions of the child. In addition, there should be an enhanced outreach program for malnutrition screening to identify acute malnutrition and SAM earlier, before medical complications happen, reducing late presentations. There should be targeted intervention in accordance with the national protocol for the management of SAM within hospitals.
Data Sharing Statement
All data generated or analyzed during this study are included in this published article.
Ethics Approval and Consent to Participate
The research was reviewed and ethically approved by the Dire Dawa University, College of Medicine and Health Science research and ethics review committee. Written informed consent to collect data was obtained from the hospital manager on behalf of the SAM children's records. All applicable ethical safeguards were respected throughout the conduct of the research project, and all relevant ethical principles under the Declaration of Helsinki were followed. | 2021-11-25T16:15:13.221Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "e5825e65e270c3d58424019eb029408651a515e8",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=76119",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2342c65f61b6479cf2bc6feb0e85eda5a87a00e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2118243 | pes2o/s2orc | v3-fos-license | Medical Care Experiences of the 30th Chinese Antarctic Research Expedition: A Retrospective Study
China has conducted Antarctic scientific research expeditions since 1984. The 30th Chinese Antarctic expedition summer team boarded the Xuelong vessel at Shanghai, China, on November 7, 2013. This expedition team spent more than three months in Antarctica until the offloading of supplies and scientific activities were completed. Effective medical care plays an important role in ensuring that scientific research expedition teams complete their missions.[1] However, medical care for Antarctic expeditions is complex because the characteristics of medical conditions in Antarctic expeditions are largely unknown.[2] The present study describes the characteristics of medical conditions encountered in the 30th Chinese Antarctic summer scientific research expedition. Given these results, better and more effective medical care may be provided in future Antarctic scientific research expeditions.
Clinical Practice
The 30th CHINARE summer team included 162 men and seven women between 21 and 59 years old, with a mean age of 35.2 ± 10.0 years. Five team members had underlying diseases; the others were healthy. Table 1 shows all the underlying diseases, treatments, and results in this expedition.
The team doctor provided medical care for all team members during the expedition. The total number of medical room visits was 195. The most common reason for medical visits was injuries (31.3%). Skin problems (22.6%) were the second most common medical condition. Psychiatric disturbances accounted for 20% of visits. Digestive system diseases and ophthalmic diseases constituted 15.4% and 5.6% of visits, respectively. Respiratory system diseases accounted for 5.1% of visits. Table 2 shows all medical visits in the expedition.
Traumatic injuries occurred in 61 (31.3%) team members during the expedition. These injuries were mild, were treated by debridement and oral antibiotics, and healed uneventfully. One team member suffered a superficial second-degree burn (1% of the body surface area) from sprayed machine oil. The burn wound healed with dressings and topical burn ointment. There were no cold injuries or ultraviolet exposure injuries in this expedition.
Skin problems were the second most common conditions and occurred in 44 (22.6%) team members, including fungal infections (14.9%) and seborrheic dermatitis (7.7%). All affected team members' symptoms were completely relieved by antifungal ointment or symptomatic treatment.
Psychiatric disturbances occurred in 39 (20.0%) team members during the expedition. Sleeping difficulty was the most common symptom (38/39), followed by depression (1/39). The team members with psychiatric disturbances did not need medication; they were counseled to adopt a balanced lifestyle including regular physical exercise and a schedule of work and recreation.
There were 24 (12.3%) team members diagnosed with functional gastrointestinal disorders. The symptoms included abdominal distension (16/24) and constipation (8/24). Six (3.1%) team members with acute hemorrhoid attacks were treated with musk suppositories. The clinical severity of bleeding, anal discomfort, pain, and anal discharge diminished after treatment.
There were 11 (5.6%) team members diagnosed with conjunctivitis. Compound chondroitin sulfate eye drops were very effective in treating this condition.
Epistaxis occurred in nine (4.6%) team members during the expedition. The team doctor added humidity with a room humidifier, which solved this problem easily. Only one (0.5%) case of upper respiratory infection was diagnosed in the expedition. Banlangen granules were administered with good response.
This study demonstrated the characteristics of medical conditions in the 30th Chinese Antarctic scientific research expedition. Traumatic injuries were the most common emergencies, although there were no fractures in the 30th CHINARE. Historical data compiled from multiple nations' Antarctic expeditions demonstrated that injuries comprised 16%-40% of clinical visits and occurred most commonly during offloading and antagonistic sports like basketball. [3] For these reasons, antagonistic sports were not encouraged in the 30th CHINARE. Furthermore, health education on injury and illness prevention was conducted before logistic or scientific tasks.
Cold and ultraviolet ray injuries in modern Antarctic expeditions are almost entirely preventable. [4] In the 30th CHINARE, no case of such injuries occurred, although team members worked outdoors regularly. This was because all team members wore full polar gear, including snow goggles, gloves, and protective masks, and used sunblock lotion (SPF 50).
Although psychiatric disturbances occurred in 39 team members, no severe psychiatric disturbance was found in our study. There were many good recreation facilities on the Xuelong vessel, including a book and video library, a gymnasium, and regular small parties. Our results suggested that the social environment in an Antarctic expedition is more important than the physical environment, consistent with a previous study. [5] There was a relatively high incidence of functional gastrointestinal disorders in the 30th CHINARE. Abdominal distension and constipation were noticed mainly in January (the polar day and the most stressful period in Antarctica). The incidence of this condition was significantly higher in the ocean expedition team than in other teams, which may be related to sleep disturbance, the intensity of stress, and long hours of working outdoors. [6,7]
Our results also suggested that team doctors must receive extensive medical training before expedition because of the characteristics of the disease spectrum and limited medical resources. The training program should include general medicine, surgery, emergency medicine, nursing, and medical psychology.
In conclusion, traumatic injury remains the most common medical condition during the Chinese Antarctic expedition. Preventive care is the most cost-effective strategy of medical care. The team doctors should be well-trained to manage injuries and diseases with limited medical resources. | 2018-04-03T03:14:53.322Z | 2015-02-05T00:00:00.000 | {
"year": 2015,
"sha1": "34d72609ed365e0004d6e5fc3129467b6fa8120b",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.150116",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34d72609ed365e0004d6e5fc3129467b6fa8120b",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"History",
"Medicine"
]
} |
235203071 | pes2o/s2orc | v3-fos-license
Association between musculoskeletal function and postural balance in patients with long-lasting dizziness. A cross-sectional study
Background and purpose: Reduced balance and musculoskeletal pain are frequently reported among patients with long-lasting dizziness. However, the association between musculoskeletal function and postural sway among these patients has not been examined. The objective of this study was to examine whether there is an association between aspects of musculoskeletal function and postural balance in patients with long-lasting dizziness. Methods: This was a cross-sectional study using data from 105 outpatients with long-lasting dizziness. Aspects of musculoskeletal function were assessed by examining body flexibility, grip strength, and preferred and fast walking speed, in addition to musculoskeletal pain. Musculoskeletal pain was evaluated using the Subjective Health Complaints questionnaire. Postural balance was assessed by the path length of postural sway using a balance platform on both firm and soft surfaces, with eyes open and closed. The association between musculoskeletal function and postural sway was assessed using linear regression analyses. Results: When adjusting for age and gender, we found an association on a firm surface between increased musculoskeletal pain and increased postural sway measured with eyes open (p = 0.038).
| INTRODUCTION
Dizziness is one of the most frequently reported symptoms in primary care and affects adults of all ages (Neuhauser et al., 2008).
Dizziness is often accompanied by other symptoms such as psychiatric comorbidity and musculoskeletal pain (Kvåle et al., 2008; Lahmann et al., 2015; Malmström et al., 2020) and by reduced function such as balance deficits (Herssens et al., 2020; Söhsten et al., 2016; Wilhelmsen & Kvåle, 2014) and decreased gait speed (Marchetti et al., 2008). However, little is known about whether or how aspects of musculoskeletal function and pain affect balance in this patient group. Previously, gait function has been linked to falls and balance in patients with vestibular disorders (Whitney et al., 2000), and grip strength has shown an association with postural control in older adults (Strandkvist et al., 2021; Wiśniowska-Szurlej et al., 2019).
Persons with persistent dizziness may be prone to musculoskeletal pain and reduced function (Kvåle et al., 2008) as dizziness often is aggravated by head movements. The patients may attempt to minimize head motion during movement leading to avoidance behavior and "en bloc" movement of the head and trunk (locking the head on the trunk) causing a more rigid movement pattern.
This may lead to muscular stiffness, pain, and reduced musculoskeletal function. To keep and maintain balance, we depend on a fine-tuned interaction between the visual, vestibular, and somatosensory systems (Kristjansson & Treleaven, 2009). Studies have implied that neck pain may disrupt afferent information from the cervical spine and thus influence balance (Knapstad et al., 2019; Treleaven, 2017). However, this relationship is not well explored in patients with musculoskeletal pain in general. One may theorize that musculoskeletal ailments would influence balance, either through pain, which may disturb the afferent proprioceptive information from the somatosensory system (Kristjansson & Treleaven, 2009), or through loss of function due to avoidance behavior. Compared to healthy controls, increased postural sway, a commonly used measure of postural balance, has been documented in persons with vestibular disorders (Allum et al., 2001), general musculoskeletal pain (Lihavainen et al., 2010), non-specific low back pain (Ruhe et al., 2011), and idiopathic neck pain and whiplash-associated disorders (Silva & Cruz, 2013). Maintaining balance is critical to independence in daily life (Shumway-Cook & Woollacott, 2017, p. 153), and both dizziness and increased postural sway are strong predictors of falls, which can lead to further disability (Kollén et al., 2017). Knowledge about the potential associations between aspects of musculoskeletal function, pain, and balance in a population prone to balance disturbance is therefore valuable for practitioners treating this group. The aim of this study was to investigate possible associations between aspects of musculoskeletal function, such as gait, strength, flexibility, and pain, and postural sway in persons with long-lasting dizziness.
| Study design
This cross-sectional study was carried out as part of the LODIP study (Kristiansen et al., 2019), a project examining the effect of vestibular rehabilitation combined with cognitive therapy in persons with longlasting dizziness in primary care. The study was conducted at the Western Norway University of Applied Sciences (HVL), Bergen, Norway. The present study analyzed data from baseline assessments.
| Recruitment and sampling
Participants were recruited through general practitioners, physiotherapists, an otorhinolaryngology clinic, newspaper ads, and social media in the region in and around the municipality of Bergen, Norway, between 2016 and 2019. In total, 229 potential participants were screened by telephone interview, and 86 of these were excluded. After the initial screening, 143 persons were invited for further assessment, and another 36 persons were excluded. In total, 107 participants who fulfilled the following criteria were included: age 18-70 years, acute onset of dizziness, symptoms lasting for more than three months, and dizziness aggravated by head movements.
Exclusion criteria were non-vestibular causes of dizziness (e.g., neurological disorders), diseases with fluctuating vestibular function, patients who had had treatment for benign paroxysmal positional vertigo within one month of testing, diseases where fast head movements were contraindicated, severe/terminal pathology, inadequate verbal and written Norwegian language proficiency, or inability to attend test locations. The participants signed an informed consent and completed baseline assessments at HVL. Three experienced physiotherapists familiar with the testing procedures conducted the assessment.
| Sample size
This study uses the same study population as the main study (Kristiansen et al., 2019). The main study was designed as a randomized controlled trial comparing two intervention groups, with a power calculation estimating that at least 96 participants were needed. To account for possible dropouts, the goal was to include 125 participants; however, 107 participants were finally included, still within the sample size calculation (Kristiansen et al., 2019). As the sample size of this study was based on already collected data, a post hoc sample size calculation was not recommended (Lydersen, 2015).
| Musculoskeletal pain
To assess musculoskeletal pain during the last 30 days, the subscale musculoskeletal pain (SHCmusc) from the Subjective Health Complaints (SHC) questionnaire was used. SHCmusc contains eight items related to muscle pain (headache, pain in the neck, upper and lower back, shoulders, arms, migraine, and legs). Severity of pain was scored on an ordinal scale ranging from zero (no complaints) to three (serious complaints), giving a sum score from 0 to 24 points. The structural validity and reliability of the scale were found to be acceptable for use among the general population (Eriksen et al., 1999).
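A minimal Python sketch of this sum score follows; the item keys are illustrative, not the questionnaire's official labels.

```python
def shc_musc_score(items):
    """Sum the eight SHCmusc items (each scored 0-3) into a 0-24 score."""
    keys = ["headache", "neck", "upper_back", "low_back",
            "shoulders", "arms", "migraine", "legs"]
    assert all(0 <= items[k] <= 3 for k in keys), "each item is scored 0-3"
    return sum(items[k] for k in keys)

example = {"headache": 1, "neck": 2, "upper_back": 0, "low_back": 1,
           "shoulders": 2, "arms": 0, "migraine": 0, "legs": 1}
print(shc_musc_score(example))  # 7, the sample mean reported later in the text
```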
| Body flexibility
The Global Physiotherapy Examination (GPE) was used to evaluate bodily aberrations, and the subdomain bodily flexibility (GPEflex), which includes testing of lumbo-sacral flexion, head-nod flexion, shoulder retraction, and elbow drop, was used. Each item was scored on a 15-step numbered scale ranging from −2.3 to +2.3. Optimal flexibility is defined as zero, while a higher score indicates less flexibility. The total sum score of the four tests was reported and included in the analysis. The GPEflex has demonstrated to be reliable and valid in patients with long-lasting musculoskeletal pain, and also, to discriminate between those with and without musculoskeletal pain (Kvåle et al., 2003).
| Grip strength
To assess musculoskeletal strength, the grip strength test was administered using a handheld dynamometer (MIE Medical Research Myometer). The mean grip strength of two attempts with each hand was measured in kg. Grip strength is a reliable and valid test of musculoskeletal strength (Abizanda et al., 2012).
| Preferred and fast walking speed
The participants walked a 6-m pathway and were asked to walk at their preferred speed. To avoid the acceleration and deceleration phases, 1-m start-up and slow-down zones were set at the start and end of the pathway. A stopwatch was used to measure the time in seconds. The mean preferred walking speed of two attempts was calculated in meters per second (m/s). The same protocol was used for fast walking, except that the participants were instructed to walk as fast as possible. The walking test protocols used in this study are considered valid and reliable (Hall & Herdman, 2006; Heitzman, 2013; Muhaidat et al., 2014).
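The speed computation itself is simple arithmetic; a minimal sketch with illustrative trial times:

```python
def walking_speed(times_s, distance_m=6.0):
    """Mean speed (m/s) over repeated timed passes of the measured segment."""
    return sum(distance_m / t for t in times_s) / len(times_s)

print(round(walking_speed([4.8, 5.1]), 2))  # e.g. two timed trials of the 6-m walk
```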
| Dizziness symptoms
The Vertigo Symptom Scale-short form (VSS-sf) was used to assess the self-perceived severity of dizziness. The scale consists of 15 items assessing symptoms during the past month. Each item is scored according to the frequency of symptoms on a five-point ordinal scale from 0 (never) to 4 (very often/almost every day), providing a total score ranging from 0 to 60 (Wilhelmsen & Kvåle, 2014).
| Statistical analysis
Distribution of variables was examined using histograms and scatter plots. Descriptive data are presented as frequencies (percentages), means and standard deviations (SD), or medians and quartiles, as appropriate. Linear regression models were used to test the associations between postural sway (dependent variable) and aspects of musculoskeletal function and pain (independent variables). The different measures of musculoskeletal function (walking speed, grip strength, body flexibility, and pain) were analyzed separately in each condition of sway (EOfirm, ECfirm, EOsoft, and ECsoft), in univariate and adjusted models. Age and gender were used as adjustment variables, as balance and muscular function can be influenced by both (Gribble et al., 2012). Since the four postural sway variables were significantly skewed, they were log-transformed so that the residuals approached normality. To facilitate interpretation of the regression coefficients, they were back-transformed after the regression. Analyses were two-tailed with a p-level of less than 0.05. Statistical analyses were performed using IBM SPSS Statistics version 26 (IBM Corp.).
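The log-transform-and-back-transform step can be sketched as follows. This is a minimal illustration in Python/statsmodels rather than the authors' SPSS workflow, with synthetic stand-in data and illustrative column names. The key point is that for a log-transformed outcome, exp(b) − 1 reads as the percent change in sway per unit increase in the predictor.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 105
# Synthetic stand-in data carrying the variables named in this section.
df = pd.DataFrame({
    "sway_eo_firm": rng.lognormal(mean=5.0, sigma=0.4, size=n),  # path length, mm
    "shc_musc": rng.integers(0, 20, size=n),
    "age": rng.integers(18, 70, size=n),
    "gender": rng.choice(["female", "male"], size=n),
})
df["log_sway"] = np.log(df["sway_eo_firm"])
model = smf.ols("log_sway ~ shc_musc + age + C(gender)", data=df).fit()
pct = (np.exp(model.params["shc_musc"]) - 1) * 100  # back-transform the coefficient
print(f"{pct:+.1f}% change in sway per SHCmusc point")  # the paper reports about +2%
```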
| RESULTS
Two participants were excluded from further analyses due to extreme scores in the four postural sway tests. Thus, the analysis included 105 participants, of whom 79 (75%) were women; mean age was 49 years (SD = 12.9). The mean VSS-sf score was 21. Duration of dizziness ranged from 3 to 482 months, with a median of 72 months.
Postural sway EOfirm showed the lowest values of sway with a median path length of 137 mm, while ECsoft showed the highest values of sway with a median path length of 844 mm (Table 1).
The adjusted models on a firm surface showed a statistically significant association between SHCmusc and postural sway in the EO condition, where a 1-point increase on the SHCmusc corresponded to a 2% increase in sway. An inverse significant association was found between GPEflex and postural sway in the EO test (Table 2): a 1-point increase in the GPEflex corresponded to a 6% reduction in sway with EO and a 5% reduction in sway with EC in the adjusted model.
In the adjusted models on a soft surface, there was a significant association between grip strength and postural sway in the EC condition, where a 1-point (kg) increase in grip strength corresponded to a 2% reduction in sway, and a significant association between fast walking speed and postural sway in the EO condition, where a 1-point (m/s) increase in walking speed was associated with a 52% reduction in sway (Table 3).
| Main findings
The present study has demonstrated associations between musculoskeletal function, pain and postural sway in persons with longlasting dizziness. Analyses with adjustments for age and gender showed that increased musculoskeletal pain, decreased fast walking speed, and decreased grip strength all were associated with increased sway. Surprisingly, decreased body flexibility was associated with decreased sway.
We found that increased self-reported musculoskeletal pain was associated with increased postural sway (EOfirm). These findings are in line with studies reporting that geriatric patients with general pain have increased postural sway (Lihavainen et al., 2010), and similar associations were found with localized pain in patients with low back pain (Ruhe et al., 2011). Since balance is dependent on somatosensory input from the musculoskeletal system (Lihavainen et al., 2010), pain may cause disturbances (Brumagne et al., 2000), leading to a greater reliance on visual and vestibular information to maintain postural control.
Postural sway may therefore increase when sensory information is reduced. Surprisingly, no association was found between musculoskeletal pain and postural sway when visual input was removed and somatosensory information was disturbed (ECsoft). Previous studies have found associations between pain intensity in the neck and postural sway (Knapstad et al., 2019; Ruhe et al., 2013), and moderate to severe musculoskeletal pain has been found to be associated with increased postural sway compared to mild or no pain (Lihavainen et al., 2010). The mean SHCmusc score in our participants was 7 (the score ranges from 0 to 24), while the mean SHCmusc in the general population is reported to be 4.7 (Ihlebaek et al., 2002). This indicates that the participants in our study were only mildly affected by pain, and higher levels of pain might demonstrate associations also when vision and sensory input are disturbed. Even though visual dependency is common in persons with vestibular disturbances (Maire et al., 2017), the same is not established in patients with somatosensory disturbances, making the interpretation challenging. Another factor is that pain level is a difficult construct to measure (Fujimoto et al., 2009). In addition, grip strength has been associated with reduced balance (Alonso et al., 2018; Singh et al., 2015), supporting the results from our study.
Among patients with dizziness, restricted body flexibility has been described compared to healthy persons (Kvåle et al., 2008).
Therefore, an association between reduced body flexibility and increased postural sway was expected, as flexibility is deemed beneficial for balance strategies. On the contrary, we found that reduced body flexibility was associated with decreased postural sway (EOfirm). An explanation for this unexpected finding may be that reduced flexibility acts as a compensatory mechanism to reduce sway on firm surfaces. Also, the total GPEflex score in this study was 3.5, which is somewhat higher than what Kvåle et al. (2003) found among healthy participants (3), but lower compared to a previous study in patients with dizziness (4.4) and patients with dizziness and additional neck pain (4.8) (Knapstad et al., 2020).
In the fast walking condition, we found an association between walking speed and postural sway (EOsoft), indicating that a reduced ability to walk at a more rapid speed was associated with increased postural sway. This association is interesting, as walking speed can be used to detect persons at a higher risk of major health-related events (Cesari et al., 2005). It has been stated that persons reduce their walking speed to cope with increased risk of falling (England & Granata, 2007). In addition, patients with dizziness may be more reluctant to move at higher speeds due to a pre-existing sensory deficit, which may explain the association with increased postural sway.
Last, it could be argued that the included population was only mildly affected by musculoskeletal pain, with scores in the lower part of the normal range for their age group in, for instance, walking speed (Fritz & Lusardi, 2009) and grip strength (Massy-Westropp et al., 2011). This may have influenced the results, and perhaps higher disability would have resulted in a stronger relationship with postural sway. However, the aim of this paper was merely to examine whether an association existed between the different aspects of musculoskeletal function and postural sway, not to evaluate participants' physical function as such.
| Strengths and limitations
A strength of our study is the use of reliable and valid measurement instruments, in addition to the fact that the participants are likely to be representative of persons with long-lasting dizziness in terms of age and gender (Kvåle et al., 2008). In addition, the VSS-sf score was high (mean score 21), indicating a high severity of dizziness at inclusion.
This, together with the large number of participants (n = 105), increases the external validity of the study. Despite this, there may be a selection bias, as only the most motivated and least affected patients tend to volunteer for research studies (Rothwell, 2005). A participant pool of persons with reduced function and higher pain levels might have resulted in a stronger association between musculoskeletal function and postural sway.
It could be argued that walking speed is not a representative measure of musculoskeletal function. However, walking speed is a vital sign for overall health, for which a well-functioning musculoskeletal system is indeed important (Fritz & Lusardi, 2009). It is also considered a strong predictor of ADL disability in community-dwelling older adults (Vermeulen et al., 2011). As neurological disorders were excluded from this study, we believe that gait speed is an important aspect of musculoskeletal function in the study population.
Four tests on a balance platform were used to examine postural sway, as we wanted to include aspects of excessive reliance on vision and a possible reduction in proprioceptive input (Ruhe et al., 2011). In individuals with vestibular deficits, the Balance Evaluation Systems Test (Horak et al., 2009) has been described as valid for assessing balance. There are disadvantages to using postural sway alone as a measure of balance, as it does not capture the myriad aspects of balance, and a combination of several balance tests may be preferable (Horak et al., 2009). Finally, the associations were weak, and the low explanatory power (R²) underscores that musculoskeletal function may have a limited effect on postural sway in persons with long-lasting dizziness.
However, the aim of this study was merely to examine whether an association existed.
| IMPLICATIONS FOR PHYSIOTHERAPY PRACTICE
We found small but significant associations between postural sway and musculoskeletal function among patients with long-lasting dizziness. The results imply that reduced musculoskeletal function is to a certain degree associated with reduced balance in persons with long-lasting dizziness. For clinicians treating these patients, this relationship may be of importance, since balance is affected in patients with dizziness, and musculoskeletal impairments may add to these balance problems. However, the detected associations were small, and their clinical significance is uncertain. Since altered postural control has a multitude of possible causes, these findings need to be corroborated in future studies.
"year": 2021,
"sha1": "138f6e02c1d773dacea0144e1fd540db39ca439d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/pri.1916",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "33d5bd272bea37f70d0dccd3135ecc471935eac1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Convolution operators defined by singular measures on the motion group
This paper contains an $L^{p}$ improving result for convolution operators defined by singular measures associated to hypersurfaces on the motion group. This needs only mild geometric properties of the surfaces, and it extends earlier results on Radon type transforms on $\mathbb{R}^{n}$. The proof relies on the harmonic analysis on the motion group.
Introduction
The classical Radon transforms satisfy $L^p$ improving properties (see [7]) and they are closely related to certain convolution operators associated to singular measures (see e.g. [13]). The above results have been extended in many ways, not necessarily related to convolution structures; see e.g. [5], [12], [15] and the references therein.
Our starting point is the following result, proved in [10].
Theorem 1 Let Γ be a convex compact curve in the plane and let γ be the arc-length measure of Γ. We identify θ ∈ [0, 2π] with $e^{i\theta} \in S^1$ (the unit circle). Let $\gamma_\theta$ be the rotated measure, i.e.
$$\int_{\mathbb{R}^2} f(x)\, d\gamma_\theta(x) = \int_{\mathbb{R}^2} f\big(e^{i\theta} x\big)\, d\gamma(x).$$
Consider the operator T defined by
$$Tf(x, \theta) = f *_{\mathbb{R}^2} \gamma_\theta\,(x),$$
where $x \in \mathbb{R}^2$ and $*_{\mathbb{R}^2}$ denotes the convolution in $\mathbb{R}^2$. Then
$$\|Tf\|_{L^{3}(\mathbb{R}^2 \times S^1)} \le c\, \|f\|_{L^{3/2}(\mathbb{R}^2)}.$$
The proof of this theorem relies on an estimate for the average decay of the Fourier transform $\widehat{\gamma}$ proved by A.N. Podkorytov in [8] (see also [3]), which has been extended to several variables in [2]. The following statement is different from the one in [2], but it can be proved by a mild variation of the original argument.
Theorem 2 Let Γ be a compact convex submanifold of codimension 1 in $\mathbb{R}^n$ (i.e. Γ can be seen as the graph of a convex function defined in a convex domain in $\mathbb{R}^{n-1}$). Let γ = χσ where σ is the surface measure on Γ and χ is a smooth cutoff supported in the interior of Γ. Then
$$\left( \int_{S^{n-1}} \big| \widehat{\gamma}(\rho\,\omega) \big|^2\, d\omega \right)^{1/2} \le c\, \rho^{-(n-1)/2},$$
where dω is the normalized measure on the unit sphere $S^{n-1}$. Moreover the constant c depends only on χ and the diameter of Γ.
The above theorem easily implies the following extension of Theorem 1 (see [1]). For $k \in SO(n)$ and γ a measure on $\mathbb{R}^n$, let $\gamma_k$ be defined by
$$\int_{\mathbb{R}^n} f(x)\, d\gamma_k(x) = \int_{\mathbb{R}^n} f(kx)\, d\gamma(x).$$

Theorem 3 Let Γ be a compact convex submanifold of codimension 1 in $\mathbb{R}^n$ and let γ = χσ where σ is the surface measure on Γ and χ is a smooth cutoff function supported in the interior of Γ. Consider the operator T defined by
$$Tf(x, k) = f *_{\mathbb{R}^n} \gamma_k\,(x),$$
where $x \in \mathbb{R}^n$, $k \in SO(n)$ and $*_{\mathbb{R}^n}$ denotes the convolution in $\mathbb{R}^n$. Then
$$\|Tf\|_{L^{n+1}(\mathbb{R}^n \times SO(n))} \le c\, \|f\|_{L^{(n+1)/n}(\mathbb{R}^n)}.$$

The operator T in Theorem 3 can be seen as a convolution operator on the motion group $M_n$, which is $\mathbb{R}^n \times SO(n)$ equipped with the group product $(x, k)(y, h) = (x + ky, kh)$ and unit (0, e). Indeed the convolution of two functions F and G on $M_n$ is defined by
$$F * G\,(x, k) = \int_{SO(n)} \int_{\mathbb{R}^n} F(y, h)\, G\big(h^{-1}(x - y),\, h^{-1}k\big)\, dy\, dh,$$
where dh is the Haar measure on SO(n).
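As a sanity check on the group structure just introduced, the short LaTeX sketch below works out the inverse element and the convolution formula directly from the stated product law; it is a routine verification added here for the reader, not material from the original paper.

```latex
% Sketch: inverse and convolution on M_n, assuming only the product law
% (x,k)(y,h) = (x + ky, kh) with unit (0,e) stated above.
% Solving (y,h)(a,b) = (y + ha, hb) = (0,e) gives b = h^{-1}, a = -h^{-1}y:
\[
(y,h)^{-1} = \big(-h^{-1}y,\; h^{-1}\big).
\]
% Hence, with g = (x,k) and g' = (y,h),
\[
g'^{-1}g = \big(-h^{-1}y,\, h^{-1}\big)(x,k)
         = \big(h^{-1}(x-y),\; h^{-1}k\big),
\]
% so the group convolution (F*G)(g) = \int F(g')\, G(g'^{-1}g)\, dg' becomes
\[
F * G\,(x,k) = \int_{SO(n)} \int_{\mathbb{R}^n}
  F(y,h)\, G\big(h^{-1}(x-y),\, h^{-1}k\big)\, dy\, dh .
\]
```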
Note that if F(x, k) = f(x) and µ denotes the measure on $M_n$ defined by
$$\int_{M_n} F\, d\mu = \int_{\mathbb{R}^n} F(x, e)\, d\gamma(x),$$
then
$$F * \mu\,(x, k) = f *_{\mathbb{R}^n} \gamma_k\,(x) = Tf(x, k). \qquad (1)$$
The above family $\{\gamma_k\}$ of hypersurfaces in $\mathbb{R}^n$ turns out to be a manifold in $\mathbb{R}^n \times SO(n)$. Indeed for any $k_0 \in SO(n)$ the coset $\{(x, k_0) : x \in \mathbb{R}^n\}$ contains the n − 1 dimensional manifold $\Gamma_{k_0}$, i.e. the manifold Γ rotated by $k_0$. The union of the manifolds $\Gamma_{k_0}$ is a hypersurface X in $\mathbb{R}^n \times SO(n)$.
When n = 2, the Γ's are convex curves and their union can be seen as a 2-dimensional surface in $\mathbb{R}^2 \times S^1$; the picture shows this surface in the particular case $\Gamma(t) = (t, t^2 + 1)$, together with the plane θ = π.

In this paper we want to replace the above manifold X with a more general manifold Y in $\mathbb{R}^n \times SO(n)$, so that the action of Γ as a convolution operator on $\mathbb{R}^n$ is averaged not only on rotations, but on a wider family of transformations. In order to deal with this more general setting it is natural to work in the Euclidean motion group $M_n$ rather than in $\mathbb{R}^n \times SO(n)$ and take advantage of the representation theory of $M_n$.
Main result
The following is our main result. By (1) it is an extension of Theorem 3.
Theorem 4 Let Y be a hypersurface in $M_n$ as above, let $\chi \in C_c(M_n)$ and let µ be the measure on $M_n$ given by integration over Y against χ. Then
$$\|F * \mu\|_{L^{n+1}(M_n)} \le c\, \|F\|_{L^{(n+1)/n}(M_n)}. \qquad (2)$$

Proof. Without loss of generality, we may assume that Y is the graph of a function, and we write ν for the product of χ by a Jacobian term. For every z ∈ C, let $i_z$ be the standard family of distributions on $\mathbb{R}$, and define the family of distributions $\mu_z$ in terms of the associated distribution $I_z$. For any $k \in SO(n)$ define the measure $\mu_k$ on $\mathbb{R}^n$ by slicing µ at k; then define the distribution $E_z$ on $\mathbb{R}^n$ accordingly and let $\mu_k^z = \mu_k *_{\mathbb{R}^n} E_z$. It can then be easily shown that these objects fit together into the analytic family of operators $T_z F = F * \mu_z$. The proof then follows from Stein's complex interpolation theorem and the following result.
The unitary dual of $M_n$ (n ≥ 2) can be described in the following way (here [11] is a reference for the representation theory of $M_n$, see also [14]). Let L = SO(n − 1), considered as a subgroup of SO(n). For each $\sigma \in \widehat{L}$, realised on a Hilbert space $V_\sigma$ of dimension $d_\sigma$, consider the space $L^2(SO(n), \sigma)$ consisting of functions ϕ on SO(n) taking values in $\mathbb{C}^{d_\sigma \times d_\sigma}$, the space of $d_\sigma \times d_\sigma$ complex matrices, satisfying the condition
$$\varphi(\ell k) = \sigma(\ell)\, \varphi(k), \qquad \ell \in L,\; k \in SO(n),$$
which are also square integrable on SO(n):
$$\int_{SO(n)} \|\varphi(k)\|^2\, dk < \infty.$$
Note that $L^2(SO(n), \sigma)$ is a Hilbert space under the inner product
$$\langle \varphi, \psi \rangle = \int_{SO(n)} \operatorname{tr}\big(\psi(k)^* \varphi(k)\big)\, dk.$$
For each λ > 0 and $\sigma \in \widehat{L}$ we define a representation $\pi_{\lambda,\sigma}$ of $M_n$ on $L^2(SO(n), \sigma)$ as follows. For $\varphi \in L^2(SO(n), \sigma)$ and $(x, k) \in M_n$ let
$$\pi_{\lambda,\sigma}(x, k)\, \varphi\,(\ell) = \exp\big(2\pi i \lambda\, \ell^{-1} e_1 \cdot x\big)\, \varphi(\ell k),$$
where $e_1 = (1, 0, \ldots, 0)$ and $\ell \in SO(n)$. If $\varphi_j(k)$ are the column vectors of $\varphi \in L^2(SO(n), \sigma)$ then $\varphi_j(\ell k) = \sigma(\ell)\, \varphi_j(k)$ for all $\ell \in L$. Therefore $L^2(SO(n), \sigma)$ can be written as a direct sum of $d_\sigma$ copies of $H(SO(n), \sigma)$, which is defined to be the space of square integrable $\varphi : SO(n) \to \mathbb{C}^{d_\sigma}$ satisfying $\varphi(\ell k) = \sigma(\ell)\, \varphi(k)$. It can be shown that $\pi_{\lambda,\sigma}$ restricted to $H(SO(n), \sigma)$ is an irreducible representation of $M_n$. Moreover, any infinite dimensional irreducible unitary representation of $M_n$ is unitarily equivalent to one and only one $\pi_{\lambda,\sigma}$. Finite dimensional irreducible unitary representations of SO(n) also yield irreducible unitary representations of $M_n$. As they do not appear in the Plancherel formula we neglect them. We remark that when n = 2 the unitary dual $\widehat{L}$ contains only the trivial representation. Given $f \in L^1(M_n) \cap L^2(M_n)$ we define the group Fourier transform of f by
$$\pi_{\lambda,\sigma}(f) = \int_{M_n} f(g)\, \pi_{\lambda,\sigma}(g)\, dg.$$
It can be shown that $\pi_{\lambda,\sigma}(f)$ is a Hilbert-Schmidt operator on $H(SO(n), \sigma)$ and we have the Plancherel formula
$$\|f\|_{L^2(M_n)}^2 = c_n \sum_{\sigma \in \widehat{L}} d_\sigma \int_0^\infty \|\pi_{\lambda,\sigma}(f)\|_{HS}^2\, \lambda^{n-1}\, d\lambda,$$
where $\|\cdot\|_{HS}$ denotes the Hilbert-Schmidt norm.
Applying the Plancherel formula to $T_{-(n-1)/2+is} f$ we get an expression controlled by $\|\pi_{\lambda,\sigma}(\mu_{-(n-1)/2+is})\|_{OP}$, where $\|\cdot\|_{OP}$ is the operator norm on $H(SO(n), \sigma)$. We shall show below that
$$\|\pi_{\lambda,\sigma}(\mu_{-(n-1)/2+is})\|_{OP} \le c_n \qquad (5)$$
uniformly in λ and σ, so that the required $L^2$ bound follows. We now prove (5). For $\varphi, \psi \in H(SO(n), \sigma)$ we compute the matrix coefficient $\langle \pi_{\lambda,\sigma}(\mu_z)\varphi, \psi \rangle$. Assume for a moment Re z > 0; then $\mu_z$ is a measure, and by analytic continuation the resulting equality extends, where $d\zeta_k$ is the surface measure of the convex hypersurface in $\mathbb{R}^n$ given by the intersection $Y \cap \{(x, k) : x \in \mathbb{R}^n\}$. By Theorem 2 we get the required average decay. To end the proof we observe that, by Fubini's theorem and the invariance of the Haar measure on SO(n), the remaining integrals can be bounded. This ends the proof of the Lemma. Hence Theorem 4 is proved.
Remark 6 For functions on $M_n$ which are independent of the rotational variable, i.e. for functions F such that F(x, k) = f(x), Theorem 4 can be obtained from Theorem 3. Indeed $F * \mu$ admits a representation in terms of the measures $\mu_{\tau,\sigma}$, where $\mu_{\tau,\sigma}$ denotes the measure $\mu_\tau$ rotated by σ. This yields the following weaker version of Theorem 4. For a general F, let $\tilde F(x, k) = \sup_{\tau \in SO(n)} |F(x, k\tau)|$; then
$$\|F * \mu\|_{L^{n+1}(M_n)} \le \|\tilde F * \mu\|_{L^{n+1}(M_n)}.$$
The above seems to be the best we can get by using earlier results such as the ones in [1].
Remark 7 A familiar example (the characteristic function of a small ball) and the previous remark can be used to show that the indices in (2) cannot be improved.
Remark 8 It is interesting to compare Theorem 4 with Theorem 1.1 in [9], where it is shown that the $L^p$ improving property of a measure is related to the fact that the supporting manifold generates the full group.
Remark 9 The techniques in our paper are $L^2$ in nature and they seem to provide only $L^p - L^{p'}$ results. We do not know how to get mixed norm estimates similar to the ones which have been proved in [10] through certain $L^r$ estimates for the average decay of Fourier transforms (note that in general these $L^r$ estimates cannot be obtained by interpolation between $L^2$ and $L^\infty$, see e.g. [4]).
"year": 2010,
"sha1": "a879c050660a0a8e408355c08f35fe3a6e4698d4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1001.0560",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a879c050660a0a8e408355c08f35fe3a6e4698d4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Highly Luminescent Ternary Nanocomposite of Polyaniline, Silver Nanoparticles and Graphene Oxide Quantum Dots
Quantum dots (QDs) with photostability show a potential application in optical sensing and biological imaging. In this work, a ternary nanocomposite (NC) of highly fluorescent polyaniline (PANI)/2-acrylamido-2-methylpropanesulfonic acid (AMPSA)-capped silver nanoparticles (NPs)/graphene oxide quantum dots (PANI/Ag (AMPSA)/GO QDs) has been synthesized by in situ chemical oxidative polymerization of aniline in the presence of Ag (AMPSA) NPs and GO QDs. Ag (AMPSA) NPs and GO QDs were prepared by AgNO3 chemical reduction and glucose carbonization methods, respectively. The prepared materials were characterized using UV-visible, Fourier transform infrared (FTIR), photoluminescence and Raman spectroscopies, X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM). HRTEM micrographs confirmed the preparation of GO QDs with an average size of 15 nm and Ag (AMPSA) NPs with an average size of 20 nm. The PANI/Ag (AMPSA)/GO QDs NC showed a high and stable emission peak at 348 nm. This PANI/Ag (AMPSA)/GO QDs NC can emerge as a new class of fluorescence materials that could be suitable for practical sensing applications.
Preparation of graphene oxide quantum dots. GO QDs were prepared by direct glucose pyrolysis.
Two grams of glucose were placed into a beaker and heated to 250 °C on a hot plate. After 5 min, the glucose liquefied. Subsequently, the color of the liquid changed from colorless to yellow, and then to orange, over 20 min. This orange liquid was added drop by drop into 100 mL of 12.5% ammonia solution under vigorous stirring. The solution was then heated at 70 °C for 3 hours until the odor of ammonia vanished and the pH of the solution became 7. The volume of the GO QDs solution was maintained at 50 mL. The GO QDs powder was separated by heating and evaporation of the GO QDs solution at high temperature for about 2 hours.

Preparation of AMPSA-capped Ag (Ag (AMPSA)) NPs. Ag (AMPSA) NPs were synthesized by the chemical reduction of silver nitrate using sodium borohydride as a reducing agent. 1.2 mL of freshly prepared 10 mM sodium borohydride was added to 36.8 mL of deionized water in an ice bath under continuous stirring. Then, 0.4 mL of 10 mM AgNO3 solution was added dropwise. The color of the solution gradually changed to yellow, indicating the formation of the Ag NPs. Finally, 0.3 mL of 10 mM AMPSA as a stabilizing agent was added dropwise to the mixture with continuous stirring for 10 min. Ag (AMPSA) NPs were separated by centrifugation (Focus serial No: 1107, Spain) at 8000 rpm for 10 min. The NPs were washed several times using ethanol and deionized water. The collected Ag NPs were dried in a vacuum oven (GCA/Precision Scientific, model 10, Thelco) at 40 °C.

Preparation of DBSA-doped PANI (PANI). DBSA-doped PANI solution was prepared by chemical oxidative polymerization of aniline. Aniline monomer (0.03 mL) was dissolved in 10 mL deionized water. Ten milliliters of an acidic solution of DBSA (0.3 g) and APS (0.1 g) were then slowly added over 1 h to the aniline solution with continuous stirring at room temperature until the dark green color of the colloidal solution was obtained.
Preparation of PANI/AMPSA-capped Ag (PANI/Ag (AMPSA)) NC. PANI/Ag (AMPSA) NC was prepared by in situ oxidative polymerization of aniline in the presence of Ag (AMPSA) NPs. Aniline monomer (0.03 mL) was dissolved in 10 mL of the previously prepared Ag (AMPSA) NPs. Ten milliliters of an acidic solution of DBSA (0.3 g) and APS (0.1 g) were then slowly added to the aniline solution with continuous stirring at room temperature until the dark green color of the colloidal solution was obtained. The prepared PANI/Ag (AMPSA) NC powder was collected by centrifugation at 7000 rpm for 8 min and washed consecutively with ethanol and deionized water. The collected NC was dried in a vacuum oven at 40 °C.
Preparation of PANI/AMPSA capped Ag/GO QDs (PANI/Ag (AMPSA)/GO QDs) NC. PANI/Ag
(AMPSA)/GO QDs NC was prepared following the same procedure as for PANI/Ag (AMPSA) NC above. The ternary NC was prepared by mixing 10 mL of AMPSA-capped Ag NPs and 1 mL of the previously prepared GO QDs solution under magnetic stirring for 10 min. Aniline monomer (0.03 mL) was added to the above mixture under continuous stirring for 10 min. Ten milliliters of DBSA (0.3 g) and APS (0.1 g) aqueous solution was added dropwise with stirring at room temperature until the dark green color of the nanocomposite colloid was obtained.
Characterization of fluorescent Ag (AMPSA) NPs, GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC. The absorption spectra were recorded with a UV-visible spectrophotometer (Evolution 300, Thermo Scientific, USA). The aqueous colloidal solutions of the samples were used for obtaining the UV-Vis spectra in the range from 200 to 900 nm to determine their characteristic peaks. To examine the emission properties, a photoluminescence (PL) study of the colloidal solutions was carried out. PL measurements were carried out at room temperature with a fluorescence spectrophotometer (Perkin Elmer LS-55). The excitation and emission slits were both set at 10 nm. The structural identification and the surface modification of the samples were confirmed by FTIR spectroscopy using a Fourier transform infrared spectrophotometer (Spectrum BX 11-LX 18-5255, Perkin Elmer). The spectra were recorded in the wavenumber range of 4000-350 cm−1. The crystalline structures of the prepared materials were evaluated by XRD analysis (Bruker-AXS D8 Discover) at room temperature. The Bragg angle (2θ) ranged from 5 to 90 degrees to determine the degree of crystallinity of the prepared samples. The X-ray source was a Cu target operated at 30 kV and 30 mA with a scan speed of 4 deg/min. Raman spectra of GO QDs and PANI/Ag (AMPSA)/GO QDs NC were measured using a triple monochromator combined with a Peltier-cooled charge-coupled device detector system (Senterra, Bruker). The spectra were acquired in the back-scattering geometry while the 514.5 nm line of an Ar laser was focused on the samples for excitation at a power of 2 mW, measured directly before the samples. Morphology, particle size, and selected area electron diffraction (SAED) were investigated using high-resolution transmission electron microscopy (HRTEM) (JEOL, JEM-2100 LaB6). The charge of Ag (AMPSA) NPs was measured using a Zetasizer Malvern Nano-ZS. The suspension was placed in a universal folded capillary cell attached to platinum electrodes.
The particle size distribution and average particle size of Ag (AMPSA) NPs were determined using a particle size analyzer (Submicron Particle Size Analyzer, Beckman Coulter N5) at 20 °C with a 10.9-degree detection angle.
Results and Discussion
Optical properties of Ag (AMPSA) NPs, GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC. The UV-Vis spectra of Ag (AMPSA) NPs, GO QDs, PANI/Ag (AMPSA) NC, and PANI/Ag (AMPSA)/GO QDs NC are illustrated in Fig. 1. The UV-Vis absorption spectrum of Ag (AMPSA) NPs presented in Fig. 1(a) demonstrates a strong absorption peak at 390 nm, which is ascribed to the surface plasmon resonance (SPR) of Ag NPs 11,12. The shape of the plasmon band is symmetrical and narrow, confirming that the Ag (AMPSA) NPs have a narrow size distribution 13. The stability of Ag (AMPSA) NPs was assessed from the intensity of this absorption peak and from the zeta potential. The peak intensity of Ag (AMPSA) NPs declined only slightly, by ~12%, after 4 weeks. The zeta potential of the as-prepared Ag (AMPSA) NPs is −26.4 mV (Fig. 1b). Particles with potential values greater than +25 mV or less than −25 mV typically have higher stability 14. Moreover, Mau et al. 15 reported that their Ag NPs remained stable over one month despite a decrease in absorbance of almost 16%. The AMPSA capping agent acts as a stabilizer and protects the Ag NPs from photodegradation.
The UV-Vis absorption spectrum of the GO QDs suspension exhibits two absorption peaks centered at 218 and 270 nm, as shown in Fig. 1(c). These peaks are attributed to π electron transitions in the C=O and C=C groups of the GO QDs. More specifically, the strong peak at 218 nm results from the π → π* transition of C=C and the small peak at 270 nm is due to the n → π* transition of the C=O bond 4,16.
The UV-Vis spectra of PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC are presented and compared in Fig. 1(d). The characteristic bands of doped PANI at about 270, 334, 414 and 790 nm are observed in the PANI/Ag (AMPSA) NC spectrum. The absorption peak located around 270 nm arises from the chain of aromatic nuclei and corresponds to the π-π* transitions 17. The small peak around 334 nm can also be attributed to the π-π* transition of benzenoid rings 18,19. The small shoulder around 414 nm is due to the polaronic transition (polaron-π*) of protonated polyaniline 18. In addition, the broad peak located around 790 nm is attributed to the polaron band transition (π-polaron) on the PANI backbone 18. These characteristic peaks also appear in the PANI/Ag (AMPSA)/GO QDs NC spectrum, with a small red shift in their positions after the addition of GO QDs. Moreover, the peak of Ag NPs (390 nm) overlaps with the characteristic peaks of PANI (270-280 nm) in PANI/Ag (AMPSA) and PANI/Ag (AMPSA)/GO QDs NC. The absorption peak of GO QDs (270 nm) also overlaps with the peak of PANI (420 nm) in PANI/Ag (AMPSA)/GO QDs NC.
To examine the emission properties, PL spectra of a fixed volume (100 μL of stock solutions in 3 mL deionized water) of the prepared Ag (AMPSA) NPs, GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC were recorded at room temperature (Fig. 2). The emission spectra can be used to explain the recombination process of photogenerated electrons and holes through the fluorescence intensity. A high emission intensity corresponds to recombination of photogenerated charge carriers with a short lifetime. When the separation of the photogenerated carriers, electrons (e−) and holes (h+), is high, their lifetime is longer, which diminishes the intensity in the PL spectra 19.
The PL spectra of Ag (AMPSA) NPs aqueous solutions at different excitation wavelengths are shown in Fig. 2(a). Ag NPs with sizes larger than 2 nm exhibit a localized surface plasmon resonance and are normally non-luminescent 20. In this work, however, a fluorescence spectrum with a broad peak was recorded for the Ag (AMPSA) NPs (20 nm). HRTEM imaging and SAED were used to further probe the fine structures of the luminescent Ag NPs. The luminescent Ag (AMPSA) NPs have polycrystalline structures, as will be shown later from the HRTEM images, and contain small domains. These small domains result in discrete energy states that lead to the luminescence 20,21. In contrast, the average domain sizes of the non-luminescent large Ag (AMPSA) NPs are greater than 2 nm.
By exciting a sample of Ag (AMPSA) NPs at several wavelengths varied from 270 nm to 330 nm, broad emission bands from about 350 to 490 nm and a small sharp peak at 426 nm are observed. This sharp peak is attributed to the SPR of Ag NPs 20,22. The peak positions of the PL emission of Ag (AMPSA) NPs remain fixed as the excitation wavelength changes. Additionally, the intensities of the PL peaks decrease with progressively longer excitation wavelengths 23. However, this excitation-wavelength-independent PL behavior of Ag (AMPSA) NPs contrasts with other published data on Ag NPs, in which the PL emission peak positions shift and depend on the excitation wavelengths 20,22. The maximum emission intensity of Ag (AMPSA) is found at a λex of 270 nm. Figure 2(b) shows the PL spectra of GO QDs at different excitation wavelengths. A broad PL bandwidth can appear when the sample is excited at different wavelengths. The PL mechanism of GO QDs is a combination of PL components from four types of electron transitions: σ*-n and π*-n transitions dominated by the functional groups, π*-π transitions of the aromatic cores, and π*-midgap states-π transitions 24. The strongest signal, at 413 nm, is observed with an excitation wavelength of 340 nm, where this shorter wavelength with higher photon energy is more effective for photon excitation. The PL peaks shift from 413 to 466 nm and their intensities decrease as the excitation wavelength increases from 340 to 400 nm. This excitation-dependent PL behavior has been extensively reported in fluorescent carbon-based nanomaterials 25,26 and is caused by the electronic conjugate structures, free zigzag sites and the wide distributions of differently sized dots 26,27.
PL spectra of PANI, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC aqueous solutions at an excitation wavelength of 270 nm are demonstrated in Fig. 2(c). The PANI aqueous solution exhibits high PL. The PL of PANI originates from the delocalized π-conjugated electrons and the π*-π transition of the benzenoid unit of polyaniline 28.
The presence of Ag (AMPSA) NPs during the PANI polymerization reduces the PL intensity of pristine PANI due to destructive spectral overlapping 29. However, the PL intensity of the PANI/Ag (AMPSA)/GO QDs NC is improved, since the radiative recombination rate is increased by the coupling of the surface plasmon in the Ag NPs and the GO QDs 24,30,31, as shown in Fig. 2(c). Matching the plasmon resonance of Ag NPs to the emission spectrum of GO QDs is essential for achieving efficient enhancement of PL 24. These complex nanostructures composed of Ag NPs concentrate the photon energy in a small region, which significantly enhances the local electromagnetic field. The area affected by the enhanced electromagnetic field, called a "hot spot", contributes to amplifying the weak emission signal 32.
The most obvious mechanism for the PL enhancement of PANI/Ag (AMPSA)/GO QDs NC upon GO QDs adsorption onto PANI/Ag (AMPSA) NC is based on electrostatic interactions and van der Waals forces. The large number of negatively charged groups such as carboxyl, aldehyde and hydroxyl on the GO QDs and the positively charged amine groups of PANI/Ag (AMPSA) NC allow a relatively strong electrostatic interaction. Such electrostatic effects can be considered the main reason for the interaction between GO QDs and PANI/Ag (AMPSA) NC. Besides, functional groups like −OH and −NH2 could act as donors or acceptors of hydrogen bonds. This leads to aggregation, which passivates the surface defect states of GO QDs, and the PL intensity of the NC is enhanced 33,34. It can be concluded that the PANI/Ag (AMPSA)/GO QDs NC has a high PL intensity thanks to the synergistic effect of the constituents of the ternary composite: PANI, GO QDs and Ag (AMPSA) NPs.
The room-temperature PL quantum yield (QY) of PANI/Ag (AMPSA)/GO QDs NC was determined by comparing the integrated emissions of the NC samples in aqueous solution with those of the standard fluorophore L-tryptophan at an identical optical density. The QY of PANI/Ag (AMPSA)/GO QDs NC is 0.138 ≈ 14%. This value is similar to the QY of standard L-tryptophan reported in the literature 35. For the QY estimation, Eq. (1) is used 36,37:

QY = QY_std × (F / F_std) × (A_std / A) × (n / n_std)²   (1)

where F and F_std are the PL areas of the sample and the standard amino acid (L-tryptophan), respectively; A and A_std are the absorbances of the NC and L-tryptophan; and n and n_std are the refractive indices of the NC and L-tryptophan. The QY of the standard L-tryptophan is 0.14 35.
The refractive indices of PANI/Ag (AMPSA)/GO QDs NC and L-tryptophan were measured using an Abbe refractometer.
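As an illustration of Eq. (1), the short Python sketch below computes a relative QY from integrated PL areas, absorbances, and refractive indices; the numerical inputs are placeholders chosen only to reproduce a QY near the reported 0.14, not measured values from this work.

```python
def relative_quantum_yield(f_sample, f_std, a_sample, a_std,
                           n_sample, n_std, qy_std=0.14):
    """Relative PL quantum yield via the comparative method (Eq. 1).

    f_*    : integrated PL emission areas
    a_*    : absorbances at the excitation wavelength
    n_*    : refractive indices of the solutions
    qy_std : quantum yield of the reference (L-tryptophan, 0.14)
    """
    return qy_std * (f_sample / f_std) * (a_std / a_sample) * (n_sample / n_std) ** 2

# Placeholder inputs, not data from the paper
qy = relative_quantum_yield(f_sample=1.0e5, f_std=1.0e5,
                            a_sample=0.050, a_std=0.050,
                            n_sample=1.334, n_std=1.333)
print(f"QY = {qy:.3f}")  # ~0.140 for these placeholder inputs
```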
Stability of PANI/Ag (AMPSA)/GO QDs NC. Stability of the fluorescent material is a vital parameter for the application of PANI/Ag (AMPSA)/GO QDs NC as a fluorescent probe sensor. To ensure the stability of PANI/Ag (AMPSA)/GO QDs, the effect of ionic strength on the fluorescence of PANI/Ag (AMPSA)/GO QDs NC was examined in the presence of various concentrations of NaCl. Also, the fluorescence spectra of PANI/Ag (AMPSA)/GO QDs NC were measured after storage for different periods at room temperature (~30 °C).
Effect of ionic strength. The influence of ionic strength on the fluorescence intensity of the synthesized PANI/Ag (AMPSA)/GO QDs was studied using various concentrations of NaCl from 100 to 500 mM, as presented in Fig. 3. It is observed that ionic strength has no significant effect on the fluorescence intensity, which is evidence that there is no interaction between the nanocomposite and NaCl. The results demonstrate that PANI/Ag (AMPSA)/GO QDs NC has a stable fluorescence intensity under different ionic strengths and is a good candidate for fluorescent sensor applications.
Effect of time.
To investigate the fluorescence stability of the prepared PANI/Ag (AMPSA)/GO QDs with time, the PL intensity of the PANI/Ag (AMPSA)/GO QDs solution was measured weekly for 5 weeks. The results reveal that the PANI/Ag (AMPSA)/GO QDs solution exhibits high resistance to photobleaching; the fluorescence intensity dropped only slightly, by 7.3% and 16.3% after three and five weeks, respectively, as depicted in Fig. 4. The high stability of the PL of PANI/Ag (AMPSA)/GO QDs may be due to the presence of the AMPSA capping agent or to the highly stable PL of the GO QDs itself. Also, the PANI/Ag (AMPSA)/GO QDs NC solution remains homogeneous and dispersed without any aggregation or color change.

The XRD pattern of GO QDs illustrated in Fig. 5(b) shows a characteristic broad diffraction peak (002) centered at 2θ = 17.62°. This broad peak also indicates that the prepared GO QDs have a small particle size 16,41. It is also mainly due to the presence of oxygenated groups, which increase the d-spacing between graphene sheets 16,25,42-44.
XRD patterns of the PANI/Ag (AMPSA) and PANI/Ag (AMPSA)/GO QDs nanocomposites, shown in Fig. 5(c,d), respectively, depict the dominant characteristic peaks of PANI in the form of emeraldine salt. There are two broad peaks at 17.30° and 19.72° and a smaller, sharper peak at 25.22°, corresponding to the (011), (020) and (200) lattice planes of the PANI chains, respectively. The first peak at 17.30° is attributed to parallel repeat units of PANI. The other two peaks at 19.72° and 25.22° are attributed to the periodicity parallel and perpendicular to the polymer chains of PANI, as well as to a periodicity caused by H-bonding between PANI chains 40,45. There are also some small peaks characteristic of both Ag2O and Ag NPs, with an obvious decrease in their intensity in comparison with those of pristine Ag (AMPSA) NPs. This may be due to the amorphous polymer coating and shielding the Ag (AMPSA) NPs 46. Another interesting aspect is that the peak of Ag NPs at 43.92° in the Ag (AMPSA) NPs XRD pattern shifts to a higher 2θ in the nanocomposites. According to Blanton and Majumdar 47, the 2θ peak can shift due to the oxygen functional groups on the Ag (AMPSA) NPs surface, which facilitate the interaction between PANI and Ag (AMPSA) NPs. Another reason for the peak shift is the slight stretching of the unit cell of the Ag NPs due to the adsorption of PANI molecular chains on the surface of the Ag (AMPSA) NPs 48. In the XRD pattern of PANI/Ag (AMPSA)/GO QDs NC, the diffraction peak of GO QDs at 17.62° overlaps with the peak of PANI at 17.30° (Fig. 5d).
FTIR spectroscopy was used to determine the functional groups of GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC, as illustrated in Fig. 6. The FTIR spectrum of GO QDs (Fig. 6a) shows a band at about 1632 cm−1 corresponding to the aromatic C=C stretching vibration. The infrared spectrum of PANI/Ag (AMPSA) NC presented in Fig. 6(b) demonstrates the presence of the bands of PANI. The small broad band at 3442 cm−1 represents the N-H stretching mode 45,49. The two peaks appearing around 1560 cm−1 and 1488 cm−1 are assigned to the C=C stretching vibration of the quinoid ring and the C=C stretching vibration of the benzenoid ring, respectively 49-51. The band at 1314 cm−1 is assigned to the C-N single bond stretching in the benzenoid ring 50,51. The peak at 1120 cm−1 corresponds to the vibration of the (−NH+=) group resulting from the DBSA doping process of polyaniline 45,49,51, while the peak at 783 cm−1 is associated with C-H out-of-plane bending vibrations of the para-substituted benzene ring 50-52. The spectrum of PANI/Ag (AMPSA) NC also shows the C=O stretching peak of the AMPSA monomer at 1658 cm−1 53.
The FTIR spectrum of PANI/Ag (AMPSA)/GO QDs NC (Fig. 6c) confirms the presence of PANI in the nanocomposite; no obvious characteristic bands of GO QDs appear in the PANI/Ag (AMPSA)/GO QDs spectrum. This may be because the vibrational bands of PANI shield or interfere with the bands of GO QDs. It is notable that the spectrum of PANI/Ag (AMPSA)/GO QDs NC is similar to the spectrum of PANI/Ag (AMPSA) NC. These results confirm the successful preparation of GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC. The degree of oxidation can be predicted from the relative intensities of the FTIR absorption peaks of the benzenoid and quinoid stretching vibrations. These peaks have a ratio of about 1:1 in the doped PANI of the PANI/Ag (AMPSA) and PANI/Ag (AMPSA)/GO QDs NC, as shown in Fig. 6(b,c), respectively. This shows that the doping level of the PANI is 50% 54.
Raman spectroscopy was used to analyze information related to the electronic and structural properties of GO QDs and PANI/Ag (AMPSA)/GO QDs NC; it is a powerful tool for the characterization of carbonaceous materials. Figure 7(a) depicts the Raman spectrum of GO QDs. The major Raman features of GO QDs are the D band at around 1324 cm−1, the strong G band at 1589 cm−1 and the small broad 2D band at around 2236 cm−1. The D band represents the defects in the graphitic structure 55-57 and the G band represents the symmetric vibration of carbon atoms in the graphite structure 57. The intensity ratio of the D band and G band (ID/IG) represents the defect density in the carbon structure 58. It is found that ID/IG for the prepared GO QDs is only around 0.6, similar to that of high-quality few-layer graphene nanoribbons, which indicates the high quality of the prepared GO QDs 41. In the Raman spectrum of PANI/Ag (AMPSA)/GO QDs NC shown in Fig. 7(b), the intensity ratio of the two bands (ID/IG), at 1348 cm−1 and 1585 cm−1, is higher than that of GO QDs (about 0.7). This suggests that the defect density in the PANI/Ag (AMPSA)/GO QDs NC is increased. The D band is slightly shifted to a higher wavenumber in PANI/Ag (AMPSA)/GO QDs NC, which is attributed to the interaction between GO QDs, PANI and Ag NPs in the PANI/Ag (AMPSA)/GO QDs NC. Other vibrational peaks of PANI do not appear because of overlapping with the GO peaks 59. The morphologies of the prepared materials are shown in Figs. 8 and 9. The HRTEM image of Ag (AMPSA) NPs shown in Fig. 8(a) indicates that there are aggregations of Ag NPs despite the presence of AMPSA as a capping agent. The shapes of the nanoparticles are nearly oval with an average size of 20 nm. Figure 8(b) demonstrates polycrystalline domains of Ag (AMPSA) NPs that contain luminescent crystals (Fig. 8c) and non-luminescent Ag (AMPSA) NPs, as illustrated in Fig. 8d. Also, the SAED of Ag (AMPSA) NPs presented in Fig. 8(e) confirms that the Ag (AMPSA) NPs are polycrystalline with a d-spacing of ~0.33 nm. To confirm the size distribution of Ag (AMPSA) NPs, the particle size analyzer was used. The size distribution (Fig. 8f) is narrow and the average particle size (estimated by fitting the distribution spectrum using the Gaussian distribution function) is ~27.5 nm.
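The Gaussian fitting step used to extract the average particle size can be sketched as follows in Python; the synthetic data below merely stand in for the particle size analyzer output and are not the actual measurement.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, amplitude, mean, sigma):
    """Gaussian model for a particle size distribution."""
    return amplitude * np.exp(-((d - mean) ** 2) / (2.0 * sigma ** 2))

# Synthetic stand-in for the analyzer output (diameter in nm, arbitrary counts)
rng = np.random.default_rng(0)
diameters = np.linspace(5.0, 60.0, 120)
counts = gaussian(diameters, 100.0, 27.5, 6.0) + rng.normal(0.0, 2.0, diameters.size)

# Fit and report the mean diameter (initial guesses: amplitude, mean, sigma)
popt, _ = curve_fit(gaussian, diameters, counts, p0=[90.0, 25.0, 5.0])
print(f"average particle size ~ {popt[1]:.1f} nm")  # ~27.5 nm for this synthetic data
```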
Morphological features of the as-synthesized GO QDs, PANI/Ag (AMPSA) NC and PANI/Ag (AMPSA)/GO QDs NC were verified by HRTEM, as shown in Fig. 9. The HRTEM image of GO QDs (Fig. 9a) displays spherical GO nanoparticles with an average size of 15 nm, and the clearly resolved crystal lattice, with a d-spacing of 0.23 nm, indicates the high crystallinity of the GO QDs, as depicted in Fig. 9(b). The observed SAED of GO QDs shown in Fig. 9(c) consists of concentric rings that reveal the polycrystalline structure of the GO QDs 60.
HRTEM images of PANI/Ag (AMPSA) NC presented in Fig. 9(e,f) confirm the existence of Ag (AMPSA) NPs in the PANI matrix, and the well-resolved lattice fringes with a d-spacing of ~0.27 nm clarify the crystallinity of the Ag (AMPSA) NPs. Moreover, the SAED of the PANI/Ag (AMPSA) nanocomposite is displayed in Fig. 9(g). The hollow circles in the pattern confirm the amorphous structure of PANI. HRTEM images of PANI/Ag (AMPSA)/GO QDs NC at different magnifications, shown in Fig. 9(h,i), reveal that it is composed of sheets of PANI as a matrix including Ag (AMPSA) NPs (red rectangles) and GO QDs (yellow spheres), respectively.
"year": 2019,
"sha1": "4827455c638fd70c977348817957a1fcf7236abd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-53584-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4827455c638fd70c977348817957a1fcf7236abd",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Purification and Characterization of 2-Methyl-branched Chain Acyl Coenzyme A Dehydrogenase, an Enzyme Involved in the Isoleucine and Valine Metabolism, from Rat Liver Mitochondria*
2-Methyl-branched chain acyl-CoA dehydrogenase was purified to homogeneity from rat liver mitochondria. The native molecular weight of the enzyme was estimated to be 170,000 by gel filtration. On sodium dodecyl sulfate-polyacrylamide gel electrophoresis both with and without 2-mercaptoethanol, the enzyme showed a single protein band with Mr = 41,500, suggesting that this enzyme is composed of four subunits of equal size. Its isoelectric point was 5.50 ± 0.2, and its A1% at 280 nm was 12.5. This enzyme contained protein-bound FAD. The purified enzyme dehydrogenated S-2-methylbutyryl-CoA and isobutyryl-CoA with equal activity. The activities with each of these compounds were co-purified throughout the entire purification procedure. This enzyme also dehydrogenated R-2-methylbutyryl-CoA, but the specific activity was considerably lower (22%) than that for the S-enantiomer. The enzyme did not dehydrogenate other acyl-CoAs, including isovaleryl-CoA, propionyl-CoA, butyryl-CoA, octanoyl-CoA, and palmitoyl-CoA, at any significant rate. Apparent Km and Vmax values for S-2-methylbutyryl-CoA were 20 µM and 2.2 µmol min−1 mg−1, respectively, while those for isobutyryl-CoA were 89 µM and 2.0 µmol min−1 mg−1. Immunochemical experiments were carried out using the purified 2-methyl-branched chain acyl-CoA dehydrogenase preparation as the antigen and individual antibodies to the four other acyl-CoA dehydrogenases.
Three acyl-CoA dehydrogenases have been known to catalyze the first dehydrogenation step of the β-oxidation cycle. These are butyryl-CoA- (EC 1.3.99.2), general acyl-CoA- (EC 1.3.99.3), and long chain acyl-CoA dehydrogenases; they are most active with butyryl-CoA, octanoyl-CoA, and palmitoyl-CoA, respectively, as substrate. These enzymes are localized in the mitochondria of various tissues in mammals. However, the enzymes which dehydrogenate branched chain acyl-CoAs such as isovaleryl-CoA, 2-methylbutyryl-CoA, and isobutyryl-CoA had not been extensively studied until recently. These branched chain acyl-CoAs are produced as intermediates in the metabolism of the branched chain amino acids, leucine, isoleucine, and valine, respectively. Previously, butyryl-CoA dehydrogenase (short chain acyl-CoA dehydrogenase) had been thought to catalyze the dehydrogenation of all short branched chain acyl-CoAs (1,2). However, several years ago, our biochemical observations on patients with isovaleric acidemia, an inborn error of leucine metabolism, suggested the existence of a dehydrogenase which is specific for isovaleryl-CoA (3-5). Using rat liver mitochondria as an enzyme source, we subsequently demonstrated that a specific isovaleryl-CoA dehydrogenase indeed exists and that it is distinct from butyryl-CoA dehydrogenase (6-8). Furthermore, we have recently reported the purification and characterization of isovaleryl-CoA dehydrogenase (8,9). This enzyme is biochemically and immunologically distinct from butyryl-CoA dehydrogenase (9).
In the course of these studies, we demonstrated for the first time that 2-methylbutyryl-CoA and isobutyryl-CoA were not dehydrogenated by either isovaleryl-CoA dehydrogenase or butyryl-CoA dehydrogenase. Instead, these two 2-methyl-substituted acyl-CoAs were dehydrogenated by an enzyme which was distinct from the other four acyl-CoA dehydrogenases (8). We have previously reported partial purification of this enzyme from rat liver mitochondria by a sequence of DEAE-Sephadex and hydroxyapatite column chromatographies and isoelectric focusing, and designated it 2-methyl-branched chain acyl-CoA dehydrogenase (8). In the present paper, we report the purification to homogeneity of 2-methyl-branched chain acyl-CoA dehydrogenase from rat liver mitochondria. We also describe here the molecular characteristics, kinetic parameters, requirement for ETF, susceptibility to various types of inhibitors, and immunochemical properties of this enzyme.
Methods
Synthesis of Coenzyme A Thioesters of S- and R-2-Methylbutyric Acids, and Isovaleric Acid—S- and R-2-methylbutyric acids were prepared from L-isoleucine and L-allo-isoleucine, respectively, according to the method described previously (8). The purities of both carboxylic acids were 99% as determined by gas chromatographic analysis and mass spectrometry.
Coenzyme A thioesters of these carboxylic acids were synthesized by the mixed anhydride synthesis (10). These acyl-CoAs were purified by paper chromatography to remove unreacted coenzyme A and other reagents, using ethanol/0.1 M potassium acetate, pH 4.5 (1:1), as the developing solvent. The coenzyme A thioester of isovaleric acid was also synthesized by the same method because the commercial isovaleryl-CoA from P-L Biochemicals contained 15% 2-methylbutyryl-CoA as determined by gas chromatography and mass spectrometry of the acyl group.
Assay of Acyl-CoA Dehydrogenases—Assays for the various acyl-CoA dehydrogenase activities were performed spectrophotometrically using PMS and DCIP as intermediate and terminal electron acceptors, respectively, and an appropriate acyl-CoA as substrate, according to the method described previously (8,9). The incubation medium was composed of 0.1 M potassium phosphate buffer (pH 8.0), either 1.5 mM or 3 mM PMS, 0.048 mM DCIP, 0.1 mM FAD, and 0.1 mM acyl-CoA unless otherwise mentioned. The final volume was 1 ml.
The enzyme reaction was carried out at 32 °C and was started with the addition of substrate. Bleaching of DCIP was followed at 600 nm for at least 2 min using a Beckman model 3600 double-beam spectrophotometer. Enzyme activity was expressed as micromoles or nanomoles of DCIP reduced/ml of enzyme solution/min. The extinction coefficient of DCIP (21 mM−1 cm−1) at 600 nm was used as the basis for computation of the amount of DCIP reduced.
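To make the dye reduction arithmetic concrete, here is a small Python sketch converting an observed DCIP bleaching rate into enzyme activity; the absorbance slope and enzyme volume used below are made-up example values, and the 1-cm cuvette path length is an assumption.

```python
# Convert a DCIP bleaching rate into acyl-CoA dehydrogenase activity.
# Assumes a 1-cm cuvette path length; the slope below is an example value.
EXTINCTION_DCIP = 21.0   # mM^-1 cm^-1 at 600 nm (value given in the text)
ASSAY_VOLUME_ML = 1.0    # final assay volume stated in the methods

def dcip_activity(delta_a600_per_min, enzyme_volume_ml, path_cm=1.0):
    """Activity in nmol DCIP reduced / ml enzyme solution / min."""
    # dA/min divided by epsilon (mM^-1 cm^-1) and path gives mM/min = umol/ml/min;
    # scale by the assay volume, normalize to enzyme volume, convert to nmol.
    umol_per_min = (delta_a600_per_min / (EXTINCTION_DCIP * path_cm)) * ASSAY_VOLUME_ML
    return umol_per_min / enzyme_volume_ml * 1000.0

print(f"{dcip_activity(0.042, enzyme_volume_ml=0.01):.0f} nmol/min/ml")  # 200 nmol/min/ml
```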
The suitability of ETF for supporting the purified 2-methyl-branched chain acyl-CoA dehydrogenase activity was tested using the ETF preparation purified from rat liver mitochondria as described previously (8,9). An appropriate amount of ETF (100-200 µg) was added to the above assay mixture, replacing PMS. The enzyme activity was again assayed by monitoring DCIP bleaching.
Preparation of Rat Liver Mitochondria and the First Four Steps of Purification—In a typical experiment, 100 adult male Charles River CD rats, weighing 200 to 280 g, were killed by decapitation and the liver mitochondria were isolated by the method of de Duve et al. (11). The first four steps of the purification are the same as those utilized to isolate crude 2-methyl-branched chain acyl-CoA dehydrogenase as described previously (8). The mitochondria (52.0 g of protein) were sonicated and centrifuged at 105,000 × g for 60 min (step 1). The supernatant was fractionated by ammonium sulfate (40-80%) precipitation (step 2).
The precipitate was redissolved in 10 mM KPO4 buffer, pH 8.0, containing 0.5 mM EDTA and dialyzed against the same buffer. The dialyzed solution (14.4 g of protein) was applied to four DEAE-Sephadex A-50 columns (4.6 × 20 cm) equilibrated with 10 mM KPO4 buffer, pH 8.0, 0.5 mM EDTA, and the adsorbed fraction on each column was eluted with a linear NaCl gradient (0-0.6 M) in 2 liters of the same buffer (step 3). The fractionation on this column was essentially identical with that described in our previous paper (8). Fractions from the column chromatography were separated into two major preparations. Preparation B (tubes 62-85) contained activities which dehydrogenated isobutyryl-CoA, S-2-methylbutyryl-CoA, n-butyryl-CoA, n-octanoyl-CoA, and palmitoyl-CoA.
Preparation B (1500 mg of protein) was applied to two hydroxyapatite columns (4.6 × 18 cm) equilibrated with 10 mM KPO4 (pH 7.5). The adsorbed fraction on each column was eluted with a linear gradient of 2 liters of KPO4 buffer, pH 7.5 (0.01-0.33 M) (step 4). This column pattern was also similar to that described in our previous paper (8).

² 1.6 mM PMS was used for assay of 2-methyl-branched chain acyl-CoA-, isovaleryl-CoA-, and butyryl-CoA dehydrogenase activities (8).
Fractions from the hydroxyapatite column chromatography were pooled into four major preparations (preparations D, E, F, and G). Preparation E (tubes 63-90) contained most of the S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities, and it was used for further purification.
Matrex Gel Blue A Chromatography (Step 5)—Preparation E (108 mg of protein in 25 ml) was applied to a Matrex Gel Blue A column (1.5 × 8 cm) equilibrated with 10 mM KPO4 buffer, pH 8.0, containing 10% glycerol and 0.5 mM EDTA. The column was washed with the same buffer until the absorbance at 280 nm returned to the base-line, and then was washed with the same buffer containing 0.35 M NaCl. The adsorbed proteins were eluted with a 140-ml linear gradient formed from the buffer containing 0.35 M NaCl and the same buffer containing 0.4 M NaCl and 7 mM FAD. Elution was done at a flow rate of 0.42 ml/min.
Agarose-Hexane-CoA Chromatography (Step 6)—The sample solution (5 mg of protein in 7 ml) from step 5 was applied to an agarose-hexane-CoA (type I) column (1 × 7 cm) equilibrated with 10 mM KPO4 buffer (pH 8.0) containing 10% glycerol and 0.5 mM EDTA. The column was washed with the same buffer until the absorbance at 280 nm returned to the base-line. The adsorbed proteins were eluted with an 80-ml linear gradient from 0-0.5 M NaCl in equilibration buffer (pH 8.0). Elution was done at a flow rate of 0.42 ml/min.
Purification of Electron Transfer Flavoprotein—ETF was purified to homogeneity from rat liver mitochondria according to the method described in our previous papers (8,9). The final preparation gave two protein bands on SDS-PAGE in both the absence and presence of 2-mercaptoethanol. The subunit Mr values were 30,000 and 35,500, in close agreement with those reported by Furuta et al. (12). The ratios of absorbance at the maxima, 270:375:435:460 nm, were 6.9:0.04:1.0:0.8, as described previously (9).
Protein Determinations-Protein concentrations were determined by the method of Lowry et al. (13) unless otherwise indicated. Because the fractions from the Matrex Gel Blue A column contained a large amount of FAD, protein concentrations were determined by the method of Bradford (14). Determination of the protein concentrations of the pure enzyme preparation was done by the microbiuret method according to Itzhaki and Gill (15). Bovine serum albumin was used as standard in all of the assay methods.
Identification of Reaction Product—The reaction conditions for this purpose were similar to those utilized in the dye reduction assay except for the following modifications: the reaction mixture contained 100 mM phosphate buffer (pH 8.0), 3 mM PMS, 0.1 mM FAD, and 0.4 mM isobutyryl-CoA or 0.2 mM S- or R-2-methylbutyryl-CoA. No DCIP was added. The total volume was 0.5 ml. The mixture was incubated at 37 °C for 2, 5, 10, and 20 min. After termination of the reaction by the addition of 0.05 ml of 3 M perchloric acid, the reaction product was hydrolyzed and steam-distilled according to the method previously described (9). The evaporated residues of the alkalinized distillate were redissolved in 50 µl of 10% aqueous formic acid. One µl of the solution was injected into a Hewlett-Packard 5840A gas chromatograph equipped with flame ionization detectors and an 18850A terminal/data system. A coiled glass column (2 mm × 1.8 m) packed with SP-1200 (Supelco, Bellefonte, PA) was utilized for analysis. The temperature of the column oven was 110 °C, and nitrogen gas was used as the carrier gas. The recovery of acyl groups throughout these procedures was approximately 75%.
For mass spectral identification, the evaporated residues of the alkalinized distillate were dissolved in 1 ml of H2O, acidified with 6 N HCl, and extracted four times with 1 ml of diethyl ether. The ether extracts were combined, dried over anhydrous MgSO4, carefully concentrated to 0.5 ml on ice under a gentle nitrogen stream, and methylated with gaseous diazomethane according to the method previously described (16). A Finnigan 4510 automated gas chromatography/mass spectrometer/computer was used for analysis in the electron impact ionization mode. A 10% OV-17 column (1.8 m × 2 mm) was used as the inlet gas chromatographic column. The initial oven temperature was 40 °C, and it was raised at a rate of 6 °C/min. The ionizing voltage was 70 eV.
Electrophoretic Procedures—PAGE without SDS was performed in a 5.0% gel using Tris-glycine buffer (pH 8.9) according to the method of Davis (17). SDS-PAGE was carried out in 7.5% gels according to the method of Weber and Osborn (18). Gels were stained with 0.25% Coomassie brilliant blue and were destained in 7.5% acetic acid and 5% methanol solution. Standard proteins were used for calibration.

TABLE I. Summary of purification steps for 2-methyl-branched chain acyl-CoA dehydrogenase from rat liver mitochondria. Rat liver mitochondria (52 g of protein) were fractionated to purify the enzyme. Both isobutyryl-CoA (iC4CoA)- and S-2-methylbutyryl-CoA (S-2-MeC4CoA)-dehydrogenating activities were determined by the dye reduction assay. The activity of each preparation was assayed after it was concentrated and dialyzed. At some steps, activities could not be determined due to interference of the dye reduction assay by nonspecific reductants.

Amino Acid Analysis—The amino acid composition of the purified enzyme preparation was determined after HCl hydrolysis by the method of Stein and Moore (19) using a Beckman 121M amino acid analyzer. Total half-cystine content was determined as cysteic acid after performic acid oxidation according to the method of Hirs (20). Tryptophan content was determined after hydrolysis with 3 N mercaptoethanesulfonic acid according to the method of Penke et al. (21).
Immunoreactions—Antibody raised against the purified isovaleryl-CoA dehydrogenase (9) and those raised against short chain acyl-CoA-, medium chain acyl-CoA- (22), and long chain acyl-CoA dehydrogenases⁴ purified from rat liver mitochondria were used in immunoreactivity experiments. From the immune titration curve for purified short chain acyl-CoA-, medium chain acyl-CoA-, or long chain acyl-CoA dehydrogenase, the corresponding antibody (100 µl) precipitated 18 µg of the pure short chain acyl-CoA-, 14 µg of the pure medium chain acyl-CoA-, or 7 µg of the pure long chain acyl-CoA dehydrogenases under the conditions described in our previous paper (9). Immunotitrations and Ouchterlony double diffusion experiments were carried out using the purified 2-methyl-branched chain acyl-CoA dehydrogenase preparation as the antigen and the four individual antibodies to other acyl-CoA dehydrogenases mentioned above.
RESULTS
Purification of 2-Methyl-branched Chain Acyl-CoA Dehydrogenase—The entire purification procedure is summarized in Table I. Technical details are described under "Experimental Procedures." The first four steps of the purification are the same as previously described (8). Rat liver mitochondria were solubilized by sonication (step 1) and the supernatant was fractionated by the sequence of ammonium sulfate precipitation (40-80%) (step 2), DEAE-Sephadex A-50 (step 3), and hydroxyapatite (step 4) chromatography. Fractionation patterns at steps 3 and 4 are shown in our previous publication (8). Preparation E, which was obtained from hydroxyapatite chromatography, contained other acyl-CoA dehydrogenase activities in addition to those dehydrogenating S-2-methylbutyryl-CoA and isobutyryl-CoA. In particular, butyryl-CoA dehydrogenase activity was very high while isovaleryl-CoA dehydrogenase activity was undetectable.

⁴ Details of purification of short chain acyl-CoA-, medium chain acyl-CoA-, and long chain acyl-CoA dehydrogenases will be published elsewhere.
Relative specific activities using S-2-methylbutyryl-CoA, isobutyryl-CoA, isovaleryl-CoA, n-butyryl-CoA, n-octanoyl-CoA, and palmitoyl-CoA as substrates in preparation E were 1.0, 0.95, 0, 9.4, 0.68, and 1.5, respectively. The octanoyl-CoA-dehydrogenating activity in preparation E was due to the co-existing long chain acyl-CoA dehydrogenase; medium chain acyl-CoA dehydrogenase was not present in this preparation. In order to separate 2-methyl-branched chain acyl-CoA dehydrogenase from the other acyl-CoA dehydrogenases, preparation E (108 mg of protein in 25 ml) was applied to a Matrex Gel Blue A column (step 5). When elution was done with a linear FAD gradient (0-7 mM), the S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities were co-eluted as a sharp single peak at 1-2 mM FAD, while only a small amount of n-butyryl-CoA-dehydrogenating activity was eluted in this region; no significant activities for n-octanoyl-CoA and palmitoyl-CoA were detectable. Most of the butyryl-CoA- and long chain acyl-CoA dehydrogenases were eluted as very broad peaks at FAD concentrations higher than 3 mM. When the column was further eluted with 10 mM KPO4 buffer (pH 8.0) containing 0.8 M NaCl and 3 mM FAD, the butyryl-CoA- and long chain acyl-CoA dehydrogenases which still remained in the column were eluted as a sharp peak (Fig. 1). Fractions 18 to 35, containing both S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities, were pooled together. Relative specific activities with S-2-methylbutyryl-CoA, isobutyryl-CoA, n-butyryl-CoA, n-octanoyl-CoA, and palmitoyl-CoA in the pooled fraction were 1.0, 0.92, 0.23, 0, and 0, respectively. After concentration, the sample preparation (5 mg of protein in 7 ml) was applied to an agarose-hexane-CoA column (step 6).
When the agarose-hexane-CoA column was eluted with a linear NaCl gradient (0-0.4 M), the S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities were again co-eluted as a single peak in fractions 25 to 45; these activities were well separated from butyryl-CoA dehydrogenase (Fig. 2). After concentration, the sample solution (0.8 mg of protein in 1.5 ml) from step 6 was applied to a Bio-Gel A-0.5m column (step 7). As shown in Fig. 3, the S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities were still co-eluted as a single peak in fractions 50 to 70. These fractions were combined and then concentrated; this sample represents the final preparation. The specific activities of the final preparation for S-2-methylbutyryl-CoA and isobutyryl-CoA were 2.4 and 2.3 µmol min−1 mg−1, respectively. The overall yield of the enzyme was 2.5%. An identical value was obtained when the recovery of either activity was used for this computation. The purified enzyme was stable for at least 30 days when stored in 50% glycerol at −20 °C.
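A small Python sketch of the bookkeeping behind a purification table like Table I follows, computing specific activity, fold purification, and yield; apart from the values quoted in the text (52 g of starting protein, a final specific activity of 2.4 µmol min−1 mg−1, and a 2.5% yield), the total-activity and protein numbers below are illustrative assumptions.

```python
def purification_metrics(total_activity_umol_min, total_protein_mg,
                         start_activity_umol_min, start_specific_activity):
    """Specific activity, fold purification, and yield for one step."""
    specific = total_activity_umol_min / total_protein_mg
    fold = specific / start_specific_activity
    yield_pct = 100.0 * total_activity_umol_min / start_activity_umol_min
    return specific, fold, yield_pct

# Illustrative numbers only: assume 800 umol/min total activity in the
# sonicated extract (52,000 mg protein) and 20 umol/min in the final step.
start_specific = 800.0 / 52000.0
spec, fold, yld = purification_metrics(20.0, 8.3, 800.0, start_specific)
print(f"{spec:.1f} umol/min/mg, {fold:.0f}-fold, {yld:.1f}% yield")
```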
Purity, Molecular Weight, and Subunit Structure—The purity of the final 2-methyl-branched chain acyl-CoA dehydrogenase preparation was determined by PAGE with and without SDS. When boiled, the purified enzyme gave a single protein band with Mr = 41,500 on SDS-PAGE in both the absence and presence of 2-mercaptoethanol (Fig. 4, B and C). When the sample was subjected to SDS-PAGE without boiling, the enzyme gave an apparently single protein band with Mr = 85,000, both in the presence and absence of 2-mercaptoethanol, indicating a dimeric form of the protein (Fig. 4A). In PAGE without SDS, the purified enzyme also gave a single protein band in a 5.0% gel (RF value of 0.55) (Fig. 4E). The native molecular weight of the enzyme was estimated to be 170,000 by gel filtration on Bio-Gel A-0.5m chromatography (Fig. 3). These data indicated that the enzyme is composed of four subunits of identical size.

Isoelectric Point—The isoelectric point of the purified enzyme was 5.5 ± 0.2 as determined by sucrose discontinuous isoelectric focusing (LKB) using a 1:4 mixture of pH 3.5-10 and 4-6 ampholytes. The pH of each fraction was determined using a pH meter at 0 °C. This pI value was almost identical with our previous results using a crude 2-methyl-branched chain acyl-CoA dehydrogenase preparation from the hydroxyapatite chromatography step (8). Chromatofocusing (Pharmacia) was also carried out using PBE 94 and Polybuffer 74.
This enzyme was eluted from the chromatofocusing at pH 5.1 t 0.2.
Amino Acid Composition-The amino acid composition is shown in Table II. The number of cysteine residues was estimated to be 5/subunit. The subunit molecular weight of the enzyme was calculated to be 42,400 from the amino acid composition, in close agreement with the value (41,500) determined by SDS-PAGE. The specific volume of the enzyme was 0.72 ml/g as computed from the amino acid composition.
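The arithmetic behind that subunit-weight figure is simple enough to sketch. The Python example below sums residues-per-subunit counts times average residue masses and adds one water; the residue counts are hypothetical placeholders, since Table II itself is not reproduced in this text, so only the method, not the output, mirrors the paper.

# Subunit molecular weight from an amino acid composition: sum of
# (count x average residue mass) plus one water. Counts are hypothetical.
RESIDUE_MASS = {  # average residue masses in Da (amino acid minus H2O)
    "Gly": 57.05, "Ala": 71.08, "Ser": 87.08, "Cys": 103.14,
    "Asp": 115.09, "Glu": 129.12, "Leu": 113.16, "Lys": 128.17,
    # ...remaining residues omitted for brevity
}

def subunit_mw(counts):
    """Molecular weight (Da) of one subunit from residues/subunit counts."""
    return sum(n * RESIDUE_MASS[aa] for aa, n in counts.items()) + 18.02

hypothetical_counts = {"Gly": 30, "Ala": 35, "Ser": 20, "Cys": 5,
                       "Asp": 40, "Glu": 45, "Leu": 38, "Lys": 25}
print(f"subunit MW = {subunit_mw(hypothetical_counts):,.0f} Da")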
Absorption and Fluorescence Spectra, and Prosthetic Group-The visible and ultraviolet absorption spectrum of the purified enzyme is shown in Fig. 5. The major absorption maxima were found at 275, 340, and 435 nm. The ratios of absorbance at 275, 340, and 435 nm were 10.3:1.3:1.0. The fluorescence emission spectrum of the purified enzyme excited at 450 nm showed a peak at 520 nm as in the case of authentic FAD; its intensity was 28% of that of an equivalent amount of authentic FAD, indicating quenching due to FAD-protein interaction. The excitation spectrum of the enzyme was monitored with emission at 530 nm. Substrate Specificity-The substrate specificity of the purified enzyme is shown in Table III. The enzyme exhibited its highest activity with either S-2-methylbutyryl-CoA or isobutyryl-CoA as a substrate. When R-2-methylbutyryl-CoA was used as a substrate, its activity was 22% of that observed with S-2-methylbutyryl-CoA. In contrast, the activity was extremely low or not detectable when the following compounds were used as a substrate: isovaleryl-CoA, propionyl-CoA, n-butyryl-CoA, n-valeryl-CoA, n-hexanoyl-CoA, n-octanoyl-CoA, palmitoyl-CoA, glutaryl-CoA, and sarcosine. The apparent Vmax and Km values for isobutyryl-CoA were 2.0 μmol min⁻¹ mg⁻¹ and 89 μM, respectively, and the apparent Vmax and Km values for S-2-methylbutyryl-CoA were 2.2 μmol min⁻¹ mg⁻¹ and 20 μM, respectively (Tables III and VI).
The reaction products were identified by gas chromatographic analysis as shown in Fig. 6. Under the conditions used, some of the acids were not separated. The reaction product of the purified enzyme with isobutyryl-CoA was identified as methacrylyl-CoA by detection of its hydrolysis product, methacrylic acid (Fig. 6a). The reaction product of the same enzyme with S-2-methylbutyryl-CoA was identified as tiglyl-CoA (Fig. 6b). When R-2-methylbutyryl-CoA was used as a substrate, a compound which had a considerably shorter retention time (7.6 min) than that of tiglic acid (9.8 min) was detected (Fig. 6c). After conversion to a methyl ester, this compound was identified as ethylacrylic acid using mass spectroscopy by the identity of its mass spectrum to that of the authentic standard (23).
The reaction rates, as assessed by measuring the amount of product formed in the first 5 min of reaction (per 0.5 ml of reaction mixture) using gas chromatography, agreed well with the rates obtained by the dye reduction assay using the same amount of the enzyme preparation, for both isobutyryl-CoA and S-2-methylbutyryl-CoA. These results verify the validity of the dye reduction assay for acyl-CoA dehydrogenase.
The time course of the reaction was studied by gas chromatographic analysis of the products to estimate an apparent Keq using isobutyryl-CoA and S-2-methylbutyryl-CoA. The reaction mixture was incubated at 37 °C for 2, 5, 10, and 20 min. The reaction products from isobutyryl-CoA and S-2-methylbutyryl-CoA were produced linearly with time for at least 5 min, but the reaction rate diminished after this point. The decrease of isobutyryl-CoA and the increase of methacrylyl-CoA both plateaued, revealing an equilibrium after 20 min. A similar time course was observed with S-2-methylbutyryl-CoA as a substrate. The apparent Keq was determined as the ratio of the product concentration to the substrate concentration at the equilibrium. The apparent Keq for these two substrates differed greatly: Keq for isobutyryl-CoA was 1.0 while that for S-2-methylbutyryl-CoA was 4.0.
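A minimal Python sketch of this Keq estimate, reading the plateau of a time course, is given below; the concentration values are illustrative, not data from the paper.

# Apparent Keq as described above: the ratio of product to substrate
# concentration once the time course has plateaued. Values are illustrative.
time_course = [  # (time_min, substrate_uM, product_uM) -- hypothetical
    (2, 80.0, 20.0),
    (5, 60.0, 40.0),
    (10, 52.0, 48.0),
    (20, 50.0, 50.0),   # plateau: no further change after 20 min
]

t, substrate, product = time_course[-1]   # equilibrium point
keq = product / substrate
print(f"apparent Keq at t = {t} min: {keq:.2f}")
# The paper reports Keq = 1.0 for isobutyryl-CoA and 4.0 for
# S-2-methylbutyryl-CoA by this procedure.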
The inhibitory effects of various acyl-CoAs on 2-methyl-branched chain acyl-CoA dehydrogenase activity were investigated by using isobutyryl-CoA or S-2-methylbutyryl-CoA as substrates. The results are summarized in Table IV. No difference between the two substrates was observed. The most notable finding was that tiglyl-CoA strongly inhibited both the isobutyryl-CoA- and S-2-methylbutyryl-CoA-dehydrogenating activities.
TABLE III
Substrate specificity of the purified 2-methyl-branched chain acyl-CoA dehydrogenase
The dehydrogenating activity was determined by the dye reduction assay in the presence of 100 μM FAD. The substrate concentration was 100 μM except for isobutyryl-CoA, which was used at 200 μM.
ETF as Electron Acceptor-With the purified ETF preparation as the electron acceptor, the enzyme exhibited essentially equal specific activities with S-2-methylbutyryl-CoA and isobutyryl-CoA. In these experiments, we could not determine an apparent Km for ETF, because saturation was not observed with the amounts of ETF employed.
Effects of Various Inhibitors-The effects of various inhibitors on both S-2-methylbutyryl-CoA- and isobutyryl-CoA-dehydrogenating activities of the purified enzyme are shown in Table V. Essentially identical inhibitory effects on these two activities were observed using various inhibitors. These dehydrogenating activities were severely inhibited by sulfhydryl reagents.
TABLE V
The effects of various compounds on 2-methyl-branched chain acyl-CoA dehydrogenase activity
The purified enzyme (10 μg of protein) was preincubated with each compound at the concentration indicated for 5 min at 32 °C. The dehydrogenating activity was determined by the dye reduction assay in the presence of 100 μM FAD. 2-meC4CoA, 2-methylbutyryl-CoA; iC4CoA, isobutyryl-CoA.
Iodoacetate (2 mM) inhibited the enzyme activity by 50%, but iodoacetamide (2 mM) did not significantly inhibit the activity. The enzyme activity was completely inhibited by heavy metals such as Hg²⁺, Cu²⁺, and Ag⁺ (0.1 mM each), which are known to affect thiol groups in proteins. Ca²⁺, Zn²⁺, Pb²⁺, and Fe³⁺ did not inhibit this enzyme activity at all.
Immunological Properties-In both immunoprecipitation and Ouchterlony double diffusion experiments, the purified 2-methyl-branched chain acyl-CoA dehydrogenase (10 μg of protein; specific activity = 2.2 μmol min⁻¹ mg⁻¹ for S-2-methylbutyryl-CoA) was reacted with individual antisera raised against isovaleryl-CoA-, short chain acyl-CoA-, medium chain acyl-CoA-, or long chain acyl-CoA dehydrogenase, respectively. The 2-methyl-branched chain acyl-CoA dehydrogenase did not exhibit any cross-reaction with these four antibodies in Ouchterlony double diffusion experiments; its enzyme activity was not precipitated by the four antibodies with either isobutyryl-CoA or S-2-methylbutyryl-CoA as a substrate (data not shown). These results indicate that 2-methyl-branched chain acyl-CoA dehydrogenase is immunologically distinct from the four other acyl-CoA dehydrogenases and that the final preparation was not contaminated by the other enzymes.
DISCUSSION
In the present study, we purified 2-methyl-branched chain acyl-CoA dehydrogenase from rat liver mitochondria to homogeneity in seven steps, including affinity chromatographies with Matrex Gel Blue A and agarose-hexane-CoA, which were used at the fifth and sixth steps, respectively. The activity to dehydrogenate S-2-methylbutyryl-CoA and that to dehydrogenate isobutyryl-CoA were co-purified throughout the entire seven steps of purification (Table I). The specific activity of the final preparation was enriched 90-fold over that of the preparation obtained after the DEAE-Sephadex step. The activities in the crude preparations such as the mitochondrial sonic supernatant and (NH₄)₂SO₄ precipitates could not be accurately measured due to interference by nonspecific reductants. The tritium release assay, which is free of such interference, was not available for these activities. In our previous study (9), the specific activity of the isovaleryl-CoA dehydrogenase preparation after the DEAE-Sephadex stage was enriched approximately 20 times over that of the mitochondrial sonic supernatants as measured by the tritium release assay, and a similar degree of purification can be expected for 2-methyl-branched chain acyl-CoA dehydrogenase at these steps. Thus, the final preparation of 2-methyl-branched chain acyl-CoA dehydrogenase is probably enriched nearly 1800-fold over that in the mitochondrial sonic supernatant.
The fact that isobutyryl-CoA- and S-2-methylbutyryl-CoA-dehydrogenating activities co-purified throughout all steps of purification suggests that a single enzyme catalyzes the dehydrogenation of both isobutyryl-CoA and S-2-methylbutyryl-CoA. We have also shown in this report that both isobutyryl-CoA- and S-2-methylbutyryl-CoA-dehydrogenating activities of this enzyme were competitively inhibited by tiglyl-CoA, the product from S-2-methylbutyryl-CoA (Tables IV and VI). The purified enzyme exhibited a high substrate specificity (Table III). It dehydrogenated isobutyryl-CoA and S-2-methylbutyryl-CoA with high specific activities. The rates for these two substrates were approximately equal. This enzyme also dehydrogenated the R-enantiomer of 2-methylbutyryl-CoA, but the rate of the reaction with this substrate was only 22% of that with the S-enantiomer. The reaction products from isobutyryl-CoA and S-2- and R-2-methylbutyryl-CoA by this enzyme were identified as methacrylyl-CoA, tiglyl-CoA, and ethylacrylyl-CoA, respectively, by the detection of their hydrolysis products (Fig. 6). In contrast, this enzyme did not dehydrogenate any other straight chain acyl-CoAs, or a branched one, isovaleryl-CoA, at any significant reaction rate. This substrate specificity is very narrowly limited to those substrates with a methyl substitution at the α-carbon. Among the substrates with an α-methyl substitution, S-2-methylbutyryl-CoA and isobutyryl-CoA were dehydrogenated with high efficiencies, while R-2-methylbutyryl-CoA was dehydrogenated at a considerably slower rate. These results on the substrate specificity and the identification of the products suggest that the reaction of this enzyme proceeds by elimination of one hydrogen each, respectively, from the α-methine group and the β-methylene (methyl) group taking the a position as illustrated in Fig. 7. Whether the substitution on the a position is a methyl or an ethyl does not significantly affect the rate of reaction. In contrast, when the substitution on the c position is an ethyl, the rate of reaction was significantly slower than that when it was a methyl. This suggests that the size of the substituent directed to the c position, although it does not participate in the dehydrogenase reaction, is important in defining the fitness of the substrate to the conformation of this enzyme at the active site.
The product inhibition by tiglyl-CoA was also specific. 2-Methyl-branched chain acyl-CoA dehydrogenase activity was inhibited by neither 3-methylcrotonyl-CoA nor isovaleryl-CoA. However, it was moderately inhibited by n-butyryl-CoA, n-valeryl-CoA, or crotonyl-CoA. This suggests that the enzyme can bind n-butyryl-CoA, n-valeryl-CoA, or crotonyl-CoA as substrate analogs, although the enzyme does not dehydrogenate them at a significant rate (Table III).
The substrate specificity of 2-methyl-branched chain acyl-CoA dehydrogenase and the inhibition of this enzyme by tiglyl-CoA are of particular interest in view of the regulation of branched chain amino acid metabolism. The three branched chain amino acids, leucine, isoleucine, and valine, are first transaminated to the corresponding 2-oxo acids. These three 2-oxo analogs are then oxidatively decarboxylated to isovaleryl-CoA, S-2-methylbutyryl-CoA, and isobutyryl-CoA, respectively, by a single common enzyme, branched chain 2-oxo acid dehydrogenase. This enzyme is subject to inhibition by any of the three branched chain acyl-CoAs (24). Thus, the branched chain 2-oxo acid dehydrogenase step has been considered to be the site for metabolic regulation which is common for the three branched chain amino acids. We have shown in the previous report that isovaleryl-CoA is specifically dehydrogenated by isovaleryl-CoA dehydrogenase and this reaction is specifically inhibited by 3-methylcrotonyl-CoA (9). Isobutyryl-CoA and S-2-methylbutyryl-CoA were dehydrogenated commonly by 2-methyl-branched chain acyl-CoA dehydrogenase, which is subject to inhibition by tiglyl-CoA as shown in the present paper. Thus, after the 2-oxo acid decarboxylation, the metabolism of isoleucine and valine may be commonly regulated while the leucine metabolism is independently controlled.
Table VI (residual rows and footnotes): antibody to IVD: no cross-reaction, positive reaction, no cross-reaction; antibody to SC-AD: no cross-reaction, no cross-reaction, positive reaction. (a) Described in detail elsewhere (9). These values were determined spectrophotometrically on the purified enzyme preparations; FAD might have been partially lost in the purification procedures. The FAD content per subunit of both enzymes in the native form is estimated to be 1 mol per subunit because the activities of the final preparations were enhanced 1.5-2.5 times by the addition of exogenous FAD. Emission spectra were monitored with excitation at 450 nm, and excitation spectra were taken with emission at 530 nm. (e) The activity of isovaleryl-CoA dehydrogenase was not inhibited by tiglyl-CoA. The activity of short chain acyl-CoA dehydrogenase was not inhibited by tiglyl-CoA.
The ability of this enzyme to dehydrogenate R-2-methylbutyryl-CoA may be of more than theoretical interest. It has previously been shown that when experimental animals were given RS-2-methylbutyric acid labeled with stable isotopes, they excreted labeled 2-ethyl-3-hydroxypropionic acid (2-ethylhydracrylic acid) into urine (25, 26). The results from detailed mass spectroscopic analyses of the urinary metabolites indicated that ethylhydracrylic acid was further oxidized by a pathway (R-pathway) which is analogous to the valine pathway (25). It was hypothesized that R-2-methylbutyryl-CoA was dehydrogenated on the shorter acyl chain, producing 2-ethylacrylyl-CoA, which was then hydrated to 2-ethylhydracrylyl-CoA, while S-2-methylbutyryl-CoA was dehydrogenated on the longer chain, producing tiglyl-CoA (Fig. 7). The data presented in this report represent the first scientific evidence that the two enantiomers of 2-methylbutyryl-CoA are, in fact, stereospecifically dehydrogenated.
The properties of 2-methyl-branched chain acyl-CoA dehydrogenase are summarized in Table VI, along with those of isovaleryl-CoA- and short chain acyl-CoA dehydrogenases. These three enzymes are similar in molecular size, prosthetic group, and basic mode of enzyme reaction, but they differ significantly from each other in catalytic and immunological properties. The native molecular weight of 2-methyl-branched chain acyl-CoA dehydrogenase is 170,000 as determined by gel filtration (Fig. 3). Its molecular weight is slightly larger than that of short chain acyl-CoA dehydrogenase. The subunit molecular weight of 2-methyl-branched chain acyl-CoA dehydrogenase was 41,500 on SDS-PAGE in the presence and absence of 2-mercaptoethanol (Fig. 4). These data indicate that the enzyme consists of four equal-size subunits, as in the case of isovaleryl-CoA- and short chain acyl-CoA dehydrogenases, and that the binding between subunits is not through a disulfide linkage. However, unlike these other two enzymes, which readily dissociate into four subunits in SDS-PAGE, 2-methyl-branched chain acyl-CoA dehydrogenase gave a single protein band with Mr = 85,000 when analyzed without boiling the enzyme preparation (Fig. 4). This finding may suggest that the binding forces for the four subunits are not equal and that the force between two subunits in a dimer is stronger than that which binds two dimers.
The absorption spectrum and fluorescence emission and excitation spectra of 2-methyl-branched chain acyl-CoA dehydrogenase are typical for FAD, indicating that this enzyme contains FAD as a prosthetic group. The FAD content was calculated to be 0.5 mol/subunit from the absorption spectrum. This FAD content is not a whole number, probably due to a partial loss of FAD in the purification process, judging from the observation that the activity of the purified enzyme is enhanced 2.3-fold by the addition of 100 μM FAD. These results suggest that 2-methyl-branched chain acyl-CoA dehydrogenase originally contained 1 mol of FAD/mol of subunit in the native form, as in the case of isovaleryl-CoA dehydrogenase (Table VI). In contrast, the activity of the purified short chain acyl-CoA dehydrogenase was not enhanced at all by the addition of FAD: its A275/A450 ratio in the absorption spectrum was 6.3, a typical value for an acyl-CoA dehydrogenase fully saturated with FAD (Table VI), indicating that the final short chain acyl-CoA dehydrogenase preparation contains 1 mol of FAD/mol of subunit. In catalytic properties, these three enzymes distinctly differ from each other. 2-Methyl-branched chain acyl-CoA dehydrogenase is specific for isobutyryl-CoA and S-2-methylbutyryl-CoA, isovaleryl-CoA dehydrogenase is for isovaleryl-CoA (9), and short chain acyl-CoA dehydrogenase is for n-butyryl-CoA and n-valeryl-CoA (8). There is essentially no cross-reactivity in these enzyme-substrate combinations except for n-valeryl-CoA, which is dehydrogenated by both short chain acyl-CoA and isovaleryl-CoA dehydrogenases. The high degree of substrate specificity of these three acyl-CoA dehydrogenases for short chain acyl coenzyme A esters is indicative of a finely defined conformation surrounding the active and substrate-binding sites of these enzymes. These three enzymes are also immunologically distinct from each other (Table VI) (9).
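The FAD-per-subunit figure follows from Beer-Lambert arithmetic, as the Python sketch below shows. The extinction coefficient is a literature value for free FAD, and the absorbance and protein readings are assumed for illustration, so the output only mirrors the kind of calculation described, not the paper's actual spectrum.

# FAD per subunit = (FAD molarity from flavin absorbance) / (subunit molarity).
# EPS_FAD and the input readings are assumptions for illustration.
EPS_FAD = 11.3e3        # M^-1 cm^-1, literature value for free FAD near 450 nm
SUBUNIT_MW = 41_500     # g/mol, from SDS-PAGE (this paper)

def fad_per_subunit(a_flavin, protein_mg_per_ml, path_cm=1.0):
    fad_molar = a_flavin / (EPS_FAD * path_cm)        # mol/L of FAD
    subunit_molar = protein_mg_per_ml / SUBUNIT_MW    # (mg/mL)/(g/mol) = mol/L
    return fad_molar / subunit_molar

print(f"{fad_per_subunit(a_flavin=0.068, protein_mg_per_ml=0.5):.2f} mol FAD/subunit")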
The activity of 2-methyl-branched chain acyl-CoA dehydrogenase for either substrate is inhibited by low concentrations of organic sulfhydryl reagents such as N-ethylmaleimide, p-hydroxymercuribenzoate, and methyl mercury iodide (Table V). The degrees of inhibition were essentially equal for the two substrates. The enzyme activity was severely inhibited by heavy metal ions such as Hg²⁺, Cu²⁺, and Ag⁺, which are known to interact with sulfhydryl groups in proteins. These results suggest the existence of an essential cysteine residue at the active site. Similar inhibitory effects by organic sulfhydryl reagents have been observed on isovaleryl-CoA dehydrogenase (9) and apo-medium chain acyl-CoA dehydrogenase (22). | 2018-04-03T03:52:57.054Z | 1983-08-10T00:00:00.000 | {
"year": 1983,
"sha1": "1f871a4b7df1da709536b30e2ddeda2f7cc58e44",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(17)44692-8",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e44f15473981f5727b000d88bf1404fde15537d7",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
199527960 | pes2o/s2orc | v3-fos-license | Unraveling structural and compositional information in 3D FinFET electronic devices
Non-planar Fin Field Effect Transistors (FinFET) are already present in modern devices. The evolution from the well-established 2D planar technology to the design of 3D nanostructures gave rise to new fabrication processes, but a technique capable of full characterization, particularly of their dopant distribution, in a representative (high-statistics) way is still lacking. Here we propose a methodology based on Medium Energy Ion Scattering (MEIS) to address this query, allowing structural and compositional quantification of advanced 3D FinFET devices with nanometer spatial resolution. When ions are backscattered, their energy losses unfold the chemistry of the different 3D compounds present in the structure. The FinFET periodicity generates oscillatory features as a function of backscattered ion energy and, in fact, these features allow a complete description of the device dimensions. Additionally, each measurement is performed over more than a thousand structures, being highly representative in a statistical sense. Finally, independent measurements using electron microscopy corroborate the proposed methodology.
Supplementary Information
Sample preparation
Structures were fabricated in a state-of-the-art 300 mm semiconductor facility. A Silicon-On-Insulator substrate with initial thicknesses of 14 nm and 25 nm for the top Si and the buried oxide, respectively, was followed by an epitaxy of 46 nm of Si to obtain a total Si layer of 60 nm. The top Si layer was patterned using lithography to obtain fin-like structures of 60 nm height running 1.2 mm long. The array of fins extends along 10 mm with a targeted fin pitch of 160 nm. The sample layout is shown in Fig. S1.
Analysis of the distance traveled by the ion in the fin array
In order to analyze the major differences in the distances traveled by the ion in the material for the two geometries, φ = 0° and φ = 90°, we constructed a new structure using only Si atoms based on previous MEIS results (Fig. 3(a)). This structure is repeated ten times. The total distance traveled by the ion within the sample was obtained by the PowerMEIS code. In these simulations, we consider an incident beam of 200 keV H⁺ that impinges normal to the surface and a backscattering angle centered at 120° with an angular aperture of 4°. Fig. S2 shows the distance traveled in the material relative to depth and horizontal position for the two experimental geometries. Let us consider the depth dependence of the distance traveled by the ion in the fin. For the φ = 0° case, the ions travel the same distance and have the same displacement, regardless of their horizontal position. However, this does not happen for the φ = 90° case, as shown in Fig. S2(b). In this situation, the ion will travel different distances in the fin for the same depth, depending on its horizontal position. This is because the probability of the ion crossing one or more fin structures during its outgoing path depends on its horizontal position. The same reasoning is valid for the bottom part between the fins. It is this pattern that produces oscillatory features in the MEIS spectra [1, 2].
Figure S1. Layout of the sample used.
Figure S3 shows the distribution of the number of voxels as a function of the distance traveled by the ion in the material for the φ = 90° geometry (histogram with red area). Using the appropriate stopping power for the ion-sample configuration, this distance is converted to energy. As can be observed in this figure, a good agreement with the results of direct simulations (gray line) is obtained concerning the period and shape of the oscillatory features of this histogram. As expected, for φ = 0° (blue line) the MEIS spectrum does not present any oscillatory features, varying only with the cross-section dependence.
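The lateral-position dependence described above can be illustrated with a toy geometry calculation. The Python sketch below samples a straight exit ray through an idealized periodic fin array and counts how much of it lies inside material; the dimensions and sampling are illustrative, and the fin width is an assumed value, not a PowerMEIS result.

import numpy as np

H_FIN, W_FIN, PITCH = 60.0, 40.0, 160.0   # nm; W_FIN assumed for illustration

def in_fin(x, z):
    """True if point (x, z) lies inside a fin; z is measured down from the fin top."""
    return 0.0 <= z <= H_FIN and (x % PITCH) < W_FIN

def outgoing_path(x0, z0, theta_deg, step=0.1):
    """Material path length (nm) along a straight exit ray tilted theta from normal."""
    t = np.arange(0.0, 4.0 * PITCH, step)
    xs = x0 + t * np.sin(np.radians(theta_deg))
    zs = z0 - t * np.cos(np.radians(theta_deg))   # the ray climbs toward the surface
    return step * sum(in_fin(x, z) for x, z in zip(xs, zs) if z >= 0.0)

# Same depth, different lateral positions -> different outgoing path lengths,
# which is the origin of the oscillatory features in the phi = 90 deg geometry.
for x0 in (5.0, 20.0, 35.0):
    print(f"x0 = {x0:5.1f} nm -> path = {outgoing_path(x0, 30.0, 60.0):6.1f} nm")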
Density and dose calculations
The density of each layer containing a compound with a combination of As and SiO₂ was calculated considering the volume and stoichiometry of each element in the compound. The volume was calculated according to Equation S1:

V_i = m_i / (N_a ρ_i),   (S1)

where V_i is the volume of each element present in the compound, N_a is Avogadro's number, and m_i and ρ_i are the atomic mass and the density of this element, respectively. According to Equation S1, the volumes for As and Si are V_As = 2.16×10⁻²³ cm³ and V_Si = 1.99×10⁻²³ cm³. For oxygen, the volume was calculated as (V_SiO₂ − V_Si)/2 and corresponds to V_O = 1.26×10⁻²³ cm³. Once the volume occupied by each element has been calculated, we can determine the density according to Equation S2:

ρ_region = Σ_i x_i m_i / (N_a Σ_i x_i V_i),   (S2)

where ρ_region is the density of each region containing the As-doped SiO₂ compound and x_i is the stoichiometry of the i-th element. Equation S2 is used to calculate the density of the top, wall and bottom (first and second layers) compounds. For example, the density at the top is given by:

ρ_top = (x_As m_As + x_Si m_Si + x_O m_O) / [N_a (x_As V_As + x_Si V_Si + x_O V_O)].   (S3)

All results are presented in Table 1. In order to determine the implanted dose, it is necessary to first calculate the number of As atoms incorporated in each region. For this calculation, we used Equation S4:

N_As = x_As V_region / Σ_i x_i V_i,   (S4)

where N_As is the number of As atoms in the region, x_As is the As stoichiometry and V_region is the volume of the region. In this way, it is possible to obtain the dose according to Equation S5:

dose = N_As / A,   (S5)

where A is the area of the region. In this work, we calculate the dose over each region (top, wall and bottom (first and second layers)) as shown in Table 1. We also calculate an average dose, considering the total number of As atoms implanted over the surface (top, wall, and bottom) of the fin array.
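A numerical sketch of Equations S1-S5 in Python follows; it reproduces the per-atom volumes quoted above, while the example stoichiometry, region volume and area are hypothetical (the paper's actual values come from the MEIS fits in Table 1). The SiO₂ bulk density used to derive V_O is an assumed literature value.

N_A = 6.022e23                                  # Avogadro's number (1/mol)
MASS = {"As": 74.92, "Si": 28.09, "O": 16.00}   # atomic masses, g/mol
RHO = {"As": 5.73, "Si": 2.33}                  # bulk densities, g/cm^3

# Eq. S1: volume per atom, V_i = m_i / (N_A * rho_i)
V = {el: MASS[el] / (N_A * RHO[el]) for el in RHO}
V_SIO2 = 60.08 / (N_A * 2.20)                   # SiO2 unit (rho ~ 2.20 assumed)
V["O"] = (V_SIO2 - V["Si"]) / 2.0               # gives ~1.27e-23 cm^3

def region_density(x):
    """Eq. S2: rho = sum(x_i * m_i) / (N_A * sum(x_i * V_i))."""
    return sum(x[e] * MASS[e] for e in x) / (N_A * sum(x[e] * V[e] for e in x))

def arsenic_dose(x, volume_cm3, area_cm2):
    """Eqs. S4-S5: number of As atoms in the region divided by its area."""
    n_as = x["As"] * volume_cm3 / sum(x[e] * V[e] for e in x)   # Eq. S4
    return n_as / area_cm2                                       # Eq. S5

x_top = {"As": 0.10, "Si": 0.30, "O": 0.60}     # hypothetical stoichiometry
print(f"V_As = {V['As']:.2e}, V_Si = {V['Si']:.2e}, V_O = {V['O']:.2e} cm^3")
print(f"rho(top) = {region_density(x_top):.2f} g/cm^3")
print(f"dose = {arsenic_dose(x_top, 1e-18, 1e-12):.2e} atoms/cm^2")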
PowerMEIS code
PowerMEIS is a Monte Carlo program that performs simulations of the interactions of ions [3] and electrons [4] with matter. The sample is described by voxels organized in a matricial format, which may represent any complex structure with an unlimited number of compounds [3]. Figure S4 shows a sketch of the matrix constructed for the fin structure.
The PowerMEIS code determines the incoming and outgoing ion paths by numerical integration of a three-dimensional space from the incident and scattering angles. The simulated spectrum is obtained by the integration of Equation S6 over the whole sample volume, adding the contributions of all elements.
where E₀, E₁ and E_out are the incident energy, the energy just before the backscattering, and the detected energy, respectively. K_i(Θ) is the kinematic factor, and ΔE_in, ΔE_out are the energy losses along the incoming and outgoing ion paths. We calculate the energy losses from the SRIM stopping power library [5]. Our simulations have considered the σ_i(E₁, θ) obtained by solving the orbit equation using the Ziegler-Biersack-Littmark interatomic potential [6]. The neutralization F⁺(E) is extracted from the Marion and Young data [7]. The energy loss distribution occurs due to the fluctuations in the interaction with target atoms and the detection resolution. For Rutherford backscattering, F(E − E_out) can be written as a Gaussian function because the detection system cannot resolve the large number of inelastic interactions. On the other hand, these interactions need to be taken into account in the MEIS technique. In this way, F(E − E_out) was written as an Exponentially Modified Gaussian (EMG) distribution. The EMG is an analytic formula obtained by the convolution of a Gaussian distribution with an exponential distribution that represents the inelastic energy loss due to ionization and excitation of the backscattered ion [8].
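As a small illustration, an EMG line shape can be built directly from a standard exponentially modified normal distribution; the Python sketch below uses scipy.stats.exponnorm with illustrative width parameters (the actual detector resolution and loss scale are fit quantities, and the tail orientation here is schematic).

import numpy as np
from scipy.stats import exponnorm

sigma = 0.9     # keV, Gaussian detector resolution (assumed)
tau = 1.5       # keV, exponential inelastic-loss scale (assumed)

e = np.linspace(-10.0, 10.0, 401)              # energy offset E - E_out (keV)
emg = exponnorm.pdf(e, K=tau / sigma, loc=0.0, scale=sigma)

area = (emg * (e[1] - e[0])).sum()             # should integrate to ~1
print(f"EMG peak at {e[np.argmax(emg)]:.2f} keV, area = {area:.3f}")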
The chi-square is used as a figure of merit for the evaluation of the goodness of fit of MEIS spectra. In addition, it provides the spatial resolution for each FinFET dimension (chi-square variations larger than 10%). In this work, we used the reduced chi-square given by Equation S7 [9]:

χ²_red = (1/N) Σ_{i=1}^{N} (I_exp,i − I_sim,i)² / max(I_exp,i, 1),   (S7)

where N is the total number of data points, and I_exp and I_sim represent the proton yield in the experimental and simulated spectra, respectively. The max(I_exp, 1) factor is used to take into account noise in the experimental data. Figure S5 shows the chi-square results. For each simulation, all dimensions (H_fin, fin-pitch or W_fin) are fixed except one. In each plot, the chi-square is displayed as a function of this varying dimension for the two geometries φ = 0° and φ = 90° and three scattering angles centered at 110°, 120° and 130°. The φ = 0° geometry does not give much information, whereas a clear and narrow minimum is found for φ = 90°. H_fin is determined at small scattering angles (110°) for both geometries. The shape of the spectra for the energies between 155 and 170 keV is crucial for the chi-square value. Our results indicate an uncertainty of 3 nm for H_fin. For the determination of fin-pitch and W_fin, the variations in chi-square for φ = 90° are used. In this geometry, any variation in the fin-pitch and the width of the fin drastically changes the measured backscattering energy. These results show a sensitivity of about 3 nm for the fin-pitch and W_fin dimensions.
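A minimal Python sketch of this figure of merit (assuming the max(I_exp, 1) guard as reconstructed above; the spectra are placeholders, not PowerMEIS output):

def reduced_chi_square(i_exp, i_sim):
    """Eq. S7: mean squared residual weighted by max(I_exp, 1)."""
    assert len(i_exp) == len(i_sim)
    return sum((e - s) ** 2 / max(e, 1.0) for e, s in zip(i_exp, i_sim)) / len(i_exp)

i_exp = [0, 3, 12, 48, 95, 60, 22, 5]    # hypothetical proton yields per channel
i_sim = [0, 4, 10, 50, 90, 63, 20, 6]
print(f"chi2_red = {reduced_chi_square(i_exp, i_sim):.3f}")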
The χ² analysis shows that small variations in H_fin, W_fin and fin-pitch increase the discrepancies between the experimental and simulated data. These variations depend strongly on the scattering angle and the irradiation geometry. Figure S6 shows the comparison between the experimental data for a backscattering angle of 120° and φ = 90° with the simulation for the best model, including a variation of ±3 nm in H_fin, W_fin and fin-pitch. As presented, the MEIS technique is very sensitive to FinFET size variations.
Figure S4. Sketch describing the corresponding matrix constructed for the FinFET structure.
Experimental details
The analysis was performed in approximately 4 hours. The typical beam current was smaller than 15 nA (the accumulated charge did not exceed 0.02 C/cm²). During the measurement, we changed the position of the beam (0.5 × 2 mm²) every hour (≤5 mC/cm² per spot). We have used H⁺ as the incident beam, which causes much less damage compared to He⁺ beams [10]. A comparison of two different scan measurements (2 hours each) at the same spot was done, and no substantial difference was observed between them, as shown in Figure S8.
STEM-EDX analysis
The As dopant profiles obtained by MEIS and STEM-EDX are in good agreement, as shown in Figs. S9 and 3. The As dopant profile obtained by STEM-EDX is shown for three different regions: (a) a horizontal cut over the fin structure, (b) a vertical cut over the fin structure and (c) a vertical cut between fins. The first region presents a uniform dopant distribution with low concentration. In this case, the symmetry in the As signals at both sides of the walls depends on the region selected. The second region exhibits a uniform As distribution with a high concentration when compared with the As distributions at the wall and the bottom. As the MEIS results showed, the As distribution is not uniform in the region between fins. The dopant profile of the third region confirms the necessity of a diffusion layer of As. In this case, the implanted As is more concentrated in the first layer than in the second one. Another important agreement between the MEIS and STEM-EDX results concerns the thickness of these layers. The As at the fin wall, top and bottom is distributed within layers of 5, 9 and 10 nm, respectively. In the bottom case, there are two layers, the first of 3 nm and the second of 7 nm.
The fin array dimensions obtained by MEIS, SEM and STEM are summarized in Table S1. | 2019-08-12T14:01:40.116Z | 2019-08-12T00:00:00.000 | {
The fin array dimensions obtained by MEIS, SEM and STEM are summarized in Table S1. | 2019-08-12T14:01:40.116Z | 2019-08-12T00:00:00.000 | {
"year": 2019,
"sha1": "dc24bea78236f6a637fb41b88cf282ef2a16e53b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-48117-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc24bea78236f6a637fb41b88cf282ef2a16e53b",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216719271 | pes2o/s2orc | v3-fos-license | Stabilization of gelatin and carboxymethylcellulose water-in-water emulsion by addition of whey protein
Abstract Due to their aqueous nature and biocompatibility, water/water emulsions are particularly advantageous in the production of low-calorie functional food and bioactive-carrier microparticles. The aim of this study was to investigate the stability of water/water emulsions formed by gelatin and carboxymethylcellulose through the Pickering effect, by addition of whey protein particles. The effect of phase composition and pH on emulsion stability over 3 days of storage was studied and the emulsion properties were characterized. Finally, the effect of the addition of different concentrations of whey protein particles on the emulsion stability was investigated. The added protein particles contributed to reduce the rate of phase separation, and higher protein concentrations showed this effect more clearly. The time of complete phase separation increased by 12 h after the addition of 15% (w/w) protein. Emulsions at pH 5.5 with protein particles, however, showed lower stability than those at pH 7.5 without protein particles.
In this context, the present study aims to produce and characterize W/W emulsions from an aqueous two-phase system composed of gelatin and carboxymethylcellulose and to evaluate the effect of whey protein on the kinetic stability.
Molecular weight determination
The average viscosity molecular weight of gelatin and NaCMC was determined by viscometric measurements [15]. Samples of gelatin (1-10 g·dm⁻³) and NaCMC (0.4-1.0 g·dm⁻³) were prepared using an aqueous 0.1 mol·L⁻¹ NaCl solution as solvent. The relative viscosity was measured with a capillary viscometer (Schott, Cannon Fenske, Germany) in a thermostatic water bath at 25.0 ± 0.1 °C (Schott, CT52, Germany). The intrinsic viscosity [η] is defined as (Equation 1):

[η] = lim_{C→0} η_RED = lim_{C→0} (η_rel − 1)/C.   (1)

The intrinsic viscosity was obtained by extrapolating the reduced viscosity, η_RED, vs. concentration (C) data to zero concentration. The intercept on the ordinate is the intrinsic viscosity. The average viscosity molecular weight was calculated based on the Mark-Houwink-Sakurada equation (Equation 2):

[η] = k M_v^a,   (2)

where the constants k = 2.69×10⁻³ and a = 0.88 are defined for gelatin [16], and k = 1.23×10⁻² and a = 0.91 for NaCMC [17].
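A short Python sketch of this two-step procedure follows: a linear extrapolation of the reduced viscosity to C = 0 gives [η] (Equation 1), and inverting Equation 2 gives M_v. The dilution series and viscosity readings are hypothetical; only the gelatin constants k and a are taken from the text.

import numpy as np

k, a = 2.69e-3, 0.88                                  # gelatin constants [16]

conc = np.array([1.0, 2.5, 5.0, 7.5, 10.0]) * 1e-3    # g/cm^3 (hypothetical series)
eta_rel = np.array([1.05, 1.13, 1.27, 1.42, 1.58])    # hypothetical readings

eta_red = (eta_rel - 1.0) / conc                      # reduced viscosity, cm^3/g
slope, intercept = np.polyfit(conc, eta_red, 1)       # linear extrapolation
intrinsic = intercept                                 # [eta] = eta_red at C -> 0
mv = (intrinsic / k) ** (1.0 / a)                     # invert [eta] = k * Mv**a

print(f"[eta] = {intrinsic:.1f} cm^3/g, Mv = {mv:,.0f} g/mol")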
Phase diagram construction
A phase diagram was constructed by visually observing the formation of two distinct layers and was used to choose the composition of the W/W emulsions [18]. A stock solution of gelatin (16% w/w) was prepared by diluting the protein in ultrapure water at 60.0 ± 1.0 °C under magnetic stirring for 1 h. A stock solution of NaCMC (2% w/w) was prepared by dilution in ultrapure water at room temperature under magnetic stirring for 12 h. Sodium azide (0.05% w/w) was added to prevent microbial growth. The pH was adjusted to 6.0 by the addition of NaOH or HCl. The stock solutions were diluted to the appropriate ratio in a transparent glass tube and left in a thermostatic water bath (Huber, Germany) at 45.0 ± 0.1 °C, avoiding any gelation. The mixtures were stirred in a vortex mixer (Phoenix, AP56, Brazil) for 1 min and stored in a water bath for 48 h with the temperature controlled at 45.0 ± 0.1 °C.
Emulsion preparation
To induce the segregative phase separation (repulsive interactions between the polymers), which is necessary to form W/W emulsions [19] , the pH of the stock solutions was adjusted to 5.5, 6.5 or 7.5. The solutions were placed in a glass tube (11 mm diameter and 150 mm height) and vortexed (Phoenix, AP56, Brazil) for 1 min. Thereafter, the emulsions were stored in a water bath (Huber, Germany) for 48 h with temperature controlled at 45.0 ± 0.1 °C.
Kinetic stability
The kinetic stability of the emulsions was evaluated by visually observing the formation of two distinct layers over 3 days of storage [20]. The phase separation is expressed by the separation index (SI), which is calculated as the ratio between the upper phase volume (V) and the original volume (V₀), as described by Equation 3 [21, 22]:

SI = (V / V₀) × 100.   (3)
Emulsion characterization
The emulsion properties were characterized only for those that remained stable for a period longer than 8 h. All measurements were performed immediately after emulsion preparation.
Dynamic superficial tension
The superficial tension of the gelatin and NaCMC aqueous solutions was measured with the pendant drop method using a tensiometer (Teclis Scientific, Easytrack, France) connected to a thermostatic water bath (Julabo, Corio-CD BC4, Germany) at 45.0 ± 0.1 °C. To create the air-water interface, a bubble was formed at the end of a needle connected to a syringe and immersed in a glass cuvette filled with a gelatin solution (6%, 8%, 10% or 12% w/w) or a NaCMC solution (0.10%, 0.25%, 0.50% or 1.00% w/w). The superficial tension was determined by bubble shape analysis and measured during 1300 s from bubble formation.
Particle size and ζ potential analysis
The size, size distribution and ζ potential were determined by laser diffraction using a Zetasizer (Malvern Instruments, Nano ZS90, England). Before the measurement, the emulsions were diluted in ultrapure water at a ratio 1:10 (v/v). The optical properties adopted were refraction index (1.335) and absorption (0.01). The results obtained correspond to the mean and corresponding standard deviation of three replicates.
Viscosity measurements
The viscosity of the emulsions was measured by a rotational viscometer (Thermo Fisher Scientific, Haake Viscotester D, Germany) using the LCP spindle at different speeds. The viscosity and percentage of torque were manually recorded when the viscosity reading reached apparent equilibrium. The measurement temperature was controlled at 45.0 ± 0.5 °C with a circulating water bath (Quimis, 0214M2, Brazil). The measurements reported correspond to an average of three replicates.
Effect of the addition of whey protein isolate on emulsion stability
The effect of the addition of WPI on emulsion stability was tested on samples that showed poor stability in previously performed tests. The preparation of WPI microgel particles was based on the method of Murray and Phisarnchananan [10]. A WPI solution (10% w/w) was prepared by dispersing the protein in ultrapure water under magnetic stirring for 12 h. The solution was transferred to a glass bottle, heated in a thermostatic water bath (Huber, Germany) at 90.0 ± 1.0 °C for 30 min and suddenly cooled under running water for 15 min. The gel formed was roughly broken with a spatula to obtain fine gel fragments, which were diluted in water and homogenized by an Ultraturrax system (IKA, T25D, Germany) for 5 min at 10,000 rpm, and again by ultrasound (Hielscher, UP100H, Germany) for 2 min with an amplitude of 100%. The suspension obtained was centrifuged (Digicen 21R, Spain) with the RT504 rotor at 9,000 rpm until the microgel sedimented to leave a clear upper aqueous phase, which was carefully removed via a pipette. To prepare the emulsions, different concentrations of the WPI microgel particles (5, 10 and 15% w/w) were added to the aqueous gelatin-NaCMC system before homogenization in the vortex mixer. The influence of the WPI on emulsion stability was statistically evaluated by analysis of variance (ANOVA) using the Statistical Analysis System software version 9.2 (SAS Institute Inc., Cary, NC).
Molecular weight determination
The intrinsic viscosity [η] and the average viscosity molecular weight (M_v) of gelatin and NaCMC are presented in Table 1.
In the literature, it is possible to find a wide variety of molecular weight values for gelatin, as it is a polydisperse protein with a broad molecular weight distribution in solution [23]. According to Ledward [24], the molecular weight of gelatin type B can vary between 100 and 500 kDa. Riihimaki [16] determined the molecular weight of gelatin type B from different origins using the viscometric method and found values between 45 and 170 kDa, and Masuelli [25] found [η] = 48.65 cm³·g⁻¹ in a 0.01 mol·L⁻¹ NaCl solution and M_v = 67.44 kDa, using the same method.
NaCMC, as well as many other derivatized polysaccharides, has a heterogeneous molecular weight distribution and chemical composition, which explains the diversity of molecular weight values found in the literature. Vázquez et al. [26] characterized the average molecular weight of NaCMC of medium viscosity using a capillary viscometer and found [η] = 535 mL·g⁻¹ and M_v = 124.94 kDa. Sharma et al. [27] determined [η] = 198 cm³·g⁻¹ and M_v = 90 kDa, and Gomez-Diáz and Navaza [28] found [η] = 643.9 cm³·g⁻¹ and M_v = 386 kDa. Rinaudo et al. [29] characterized CMC samples by size exclusion chromatography and found molecular weights between 55.83 and 578.58 kDa. CMC is a highly heterogeneous polymer whose molecular weight depends on the internal structure, mainly the degree of polymerization and the degree of substitution [30]. Figure 1 shows the visual phase diagram constructed for gelatin and NaCMC solutions in water at pH 6.0 and 45.0 ± 0.1 °C. According to the phase diagram, two distinct regions could be visualized: a one-phase region (homogeneous system), corresponding to the area below the binodal line, and a two-phase region (non-homogeneous system), corresponding to the area above the binodal line. At relatively low gelatin and NaCMC concentrations the systems formed a single phase. The minimum concentrations for phase separation are approximately 3.0% (w/w) gelatin and 0.1% (w/w) NaCMC. Furthermore, higher gelatin concentrations increased the minimum concentration of NaCMC necessary for macroscopic phase separation to occur. Soon after preparation, the solutions in the one-phase region appeared clear, and those in the two-phase region initially appeared turbid, followed by macroscopic phase separation after a few hours. This behavior suggests a segregative phase separation, with the formation of two immiscible aqueous phases caused by repulsive interactions between the polymers [19].
Kinetic stability
In the test of kinetic stability, it was observed that both the phase composition and the pH of the solution influence the biopolymer interactions and, hence, the kinetics of phase separation. Figure 2 shows the SI of the emulsions prepared at pH 5.5, 6.5 and 7.5 over 3 days of storage at 45.0 ± 0.1 °C. Because null values of SI over a long period of time are indicative of good emulsion stability, it can be observed that, for all compositions tested, pH 5.5 is the condition of lowest stability while pH 7.5 is the condition of highest stability. At pH 7.5, macroscopic phase separation was not observed for 3 different compositions: 8% gelatin and 0.10% NaCMC, 10% gelatin and 0.50% NaCMC, and 12% gelatin and 0.50% NaCMC. This result suggests that emulsions are more stable as the pH moves away from the isoelectric point of the protein, due to an increase in repulsive interactions between the polymers. According to Dickinson [14], the pH of the solutions controls the molecular charge distribution, and the higher the polymer charge is, the lower the tendency for phase separation. A similar result was verified by Perrechil and Cunha [18], who observed phase separation only at low pH values.
In addition to pH, phase composition also influences the kinetics of phase separation. At pH 5.5, the emulsion with 6% gelatin and 0.25% NaCMC presented the fastest phase separation, which was completed approximately 1 h after preparation, followed by the emulsions with 8% gelatin and 0.1% NaCMC, 8% gelatin and 0.25% NaCMC, 10% gelatin and 0.50% NaCMC and, lastly, 12% gelatin and 0.50% NaCMC, which started phase separation approximately 8 h after preparation. The same sequence of phase separation was observed in the emulsions prepared at pH 6.5 and 7.5. The increase in stability as a function of gelatin concentration can be explained by an increase in the viscosity of the continuous phase, which limits the movement of the droplets and, therefore, their approach and aggregation. Similar results were observed by Singh [19] and by Perrechil and Cunha [18], where emulsions with higher polysaccharide concentrations were more viscous and stable. Furthermore, the small difference in density between the two phases contributes to a slow phase separation. According to Dickinson [14], a small difference in density between two aqueous phases implies a creaming rate up to 100 times lower than that of O/W droplets of the same size. Another important observation was that the final SI of the emulsions is connected with the concentration of the disperse phase, with lower concentrations inducing higher SI values. Emulsions with 0.50%, 0.25% and 0.10% NaCMC presented separation indexes of approximately 41, 52 and 62%, respectively. Pictures of the emulsions at different moments during the analysis are presented in Figure 3. After phase separation, a translucent upper layer and a turbid bottom layer were observed, indicating that the NaCMC droplets sedimented because they were denser and more opaque than the gelatin solution.
Dynamic superficial tension
As shown in Figure 4, the gelatin-air and NaCMC-air systems presented a reduction in superficial tension with time until they reached equilibrium, indicating the migration of one or more components in solution to the interface [31]. This behavior can be explained by the partially hydrophobic nature of proteins and polysaccharides or by the presence of surface-active impurities in solution. The NaCMC solutions presented an initial superficial tension of approximately 65 mN·m⁻¹, followed by a fast reduction with a decreasing rate until a steady state of approximately 50 mN·m⁻¹ was reached. The gelatin samples presented a lower initial superficial tension, approximately 47 mN·m⁻¹, because of the larger amount of solutes in solution. Compared to that of the NaCMC solutions, a low rate of reduction was observed, which was related to the poor interfacial adsorption. In addition, a low tension variation and a short time to reach the steady value could be observed.
The ANOVA test showed that for the NaCMC solutions, the concentration does not have a significant influence (p > 0.05) on the equilibrium superficial tension, which may be related to the small difference in density between the solutions. However, it was verified that the gelatin concentration significantly influences the equilibrium superficial tension (p < 0.05).
Droplet size and ζ potential
The mean diameter (d_m) of the droplets of the emulsions prepared at pH 7.5, the polydispersity index (PDI) and the ζ potential, with their respective standard deviations, are presented in Table 2.
These data show the formation of nanoemulsions, enabled by the low interfacial tension, which requires little energy to promote droplet breakup. The highly varied droplet sizes and the high PDI values show the formation of emulsions with a broad size distribution. This result can be related to the polydisperse characteristics of the polymers, in addition to the occurrence of Ostwald ripening, a phenomenon common in nanoemulsions in which the smaller droplets are consumed by the larger ones, which grow over time [32].
The magnitude of the ζ potential is indicative of the stability of the colloidal system. According to Freitas [33], a minimum ζ potential magnitude higher than 60 mV is needed for excellent stability, and one higher than 30 mV is needed for good physical stability. All the emulsions presented |ζ| < 20 mV, indicating weak electrostatic repulsion between droplets. Thus, it may be assumed that any change in the physicochemical properties of the medium can cause instability in the system, or that these emulsions would show phase separation if evaluated for longer periods. In addition, it can be considered that repulsive forces exceed attractive forces (van der Waals interactions), inhibiting the approach of the droplets. The ζ potential value, however, is only one of many indicators of emulsion stability, and in some cases it is not a relevant direct parameter to assess stability [34].
Viscosity
The measurement of emulsion viscosity showed that this property is highly dependent on phase composition. The mean apparent viscosities of the emulsions with 6% gelatin and 0.25% NaCMC, 8% gelatin and 0.25% NaCMC, and 8% gelatin and 0.10% NaCMC, with their respective standard deviations, are presented in Table 3.
Emulsions with 10% gelatin and 0.50% NaCMC and with 12% gelatin and 0.50% NaCMC presented different behaviors. In addition to the high viscosity, more than 10 times that of the other compositions, it was observed that while shear is applied, the viscosity tends to increase, and when shear is stopped, the emulsion reverts back to the original structure, a typical behavior of rheopectic fluids. By definition, rheopectic fluids show an increase in structural strength during the application of stress and a consequent recovery of the structure and viscosity at the end of the stress period [35]. One of the main reasons for this behavior is that the shear increases both the frequency and the efficiency of collisions between the droplets, which induces aggregation and, thus, increases the apparent viscosity [19]. Rheopexy in highly concentrated emulsions was discussed by Masalova et al. [36]; according to them, the restoration of the initial viscosity can be explained by elastic deformations of the droplets in the disperse phase.
Effect of adding whey protein isolate on emulsion stability
As shown in Figure 5, the WPI particles influenced the SI value and the rate of phase separation of the emulsions prepared at pH 5.5. The extent of the effect was dependent on the phase composition and the amount of protein added. The formation of a white and thick product, in addition to high foaming, was observed. A few minutes after emulsion formation, it was possible to identify a thin clear upper layer, indicative of the beginning of the phase separation process, while emulsions without WPI remained homogeneous for approximately 1 h. However, whereas emulsions without WPI showed complete phase separation in no more than 6 h after formation, those with WPI particles showed complete phase separation only 24 h after formation, demonstrating that the protein particles were able to slow the rate of phase separation. Another relevant effect was the reduction of the SI compared to the formulations without WPI. The emulsion with 8% gelatin and 0.1% NaCMC presented this effect clearly: the SI, which was 62% before the addition of WPI, decreased to 36% after the addition of 15% WPI. It was also observed that the addition of WPI influenced the kinetics of phase separation of the emulsions at almost all compositions, the only exception being those with 12% gelatin and 0.50% NaCMC. The addition of 15% WPI caused the lowest rate of phase separation and the highest SI reduction. The application of the ANOVA test ensured that the addition of WPI had a significant influence (p < 0.05) on the SI value at any phase composition, regardless of the amount of protein added.
Figure 5. SI of emulsions with a) 12% gelatin and 0.50% NaCMC; b) 10% gelatin and 0.50% NaCMC; c) 8% gelatin and 0.25% NaCMC; d) 8% gelatin and 0.10% NaCMC; e) 6% gelatin and 0.25% NaCMC over time with different concentrations of WPI (• 0%; ○ 5%; ■ 10%; □ 15%) at pH 5.5.
The use of WPI microgel particles as stabilizing agents in O/W emulsions is well known [37-39]; however, the use of these particles for stabilizing W/W emulsions is very recent, and questions still remain about the best conditions for using them. The heat treatment induces the aggregation of protein molecules in solution, and stable suspensions of protein-based soft hydrogels are obtained. These hydrogel particles can adsorb at the interface much more strongly than native untreated protein. As a result, remarkably stable water-in-water emulsions can be obtained because of the Pickering mechanism [6,12]. Recent discoveries have revealed that the Pickering effect is efficient only when the particles undergo an aggregation process at the interface [10,12] and when the particle is preferably solvated by the continuous phase [14]. According to Dickinson [14], one significant disadvantage of WPI microgels as W/W emulsion stabilizers is the tendency of the particles to flocculate in the vicinity of the isoelectric point of the protein (pI ~ 5); thus, it is expected that the particles might be more efficient at stabilizing emulsions prepared at pH > 5.5. Although the addition of WPI particles retarded the phase separation, the stability of the emulsions prepared at pH 7.5 without added particles could not be exceeded.
Conclusions
Under specific conditions of pH and phase composition, it is possible to produce W/W emulsions that are stable for at least 3 days of storage without the addition of stabilizing agents. Emulsions prepared at the pH furthest from the isoelectric point of gelatin and with a high protein concentration presented the best stability. WPI particles added to emulsions at pH 5.5 showed the ability to reduce the rate of phase separation, and 15% WPI showed this effect most clearly. Emulsions at pH 5.5 with WPI remained less stable than those prepared at pH 7.5 with 12% gelatin and without WPI. Reducing the rate of phase separation opens new possibilities for research using particles to stabilize emulsions, with practical applications in the formulation of functional food and in the encapsulation of bioactive components. | 2020-04-09T09:24:57.006Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "11c6748ba0bdc352a3d44c8dc9c51876ed36b486",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/po/v29n4/0104-1428-po-29-4-e2019051.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5a88111e56322640f5ae2f2ab57ee941f3a9391e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
13259202 | pes2o/s2orc | v3-fos-license | Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction
Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. The methods introduced in the field vary from weak to robust, according to how tolerant the method is to keeping the watermark intact in the presence of attacks. Rotation of the watermarked media is one of the serious attacks that many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. The main job of this method is to detect the geometrical distortion that happens to the watermarked image or image sequence, recover the distorted scene to its original state in a blind and automatic way, and then send it to be used by the extraction procedure. The work is currently limited to recovering zero-padded rotations; images cropped after rotation are left as future work. The proposed algorithm is tested on top of an extraction component. Both the recovery accuracy and the extracted watermark accuracy showed a high performance level. Keywords—Rotation recovery; image watermarking; video watermarking; watermark extraction; robustness
Although there are different algorithms in digital watermarking for images and videos, and many of them claim high performance in terms of robustness, it is still difficult for developers to come up with a perfect algorithm that survives all attacks at once with high extraction accuracy. This is due to attempts to achieve a performance tradeoff between various metrics such as imperceptibility and robustness [3]. One of the problems while designing a digital watermarking algorithm is losing robustness against some attacks, such as noising and compression, once the design concentrates on increasing robustness against geometrical attacks like rotation, scaling, and translation.
The focus here will be on the rotation attack, by proposing a different solution that adds a new facility to watermarking systems. By plugging the proposed solution into a watermarking system, there is no need to decide at the design phase whether the developed watermarking algorithm has to be invariant to the rotation attack or not. The proposed algorithm is implemented to be used on top of the extraction function. Hence, the algorithm will recover the attacked rotated image, or a sequence of images from a video, and then send the restored images to be used in the extraction phase.
The scope of the proposed algorithm can be further extended so that it is integrated into various practical applications other than digital watermarking, such as 3D modeling, image visual enhancement, scene recovery in cameras, and more.
The rest of the paper is organized as follows: the second section reviews recent and related works. The third section presents the proposed algorithm. The fourth section illustrates the experiments. The fifth section presents the results of evaluating the proposed rotation recovery algorithm. Finally, the sixth section concludes the paper.
II. RELATED WORKS
In digital watermarking systems, algorithms to watermark images and videos need to address various performance metrics such as imperceptibility, robustness and capacity.Most researchers try in their algorithms to achieve some tradeoff between imperceptibility and robustness.This tradeoff makes developers to sacrifice some robustness values.For example, increasing the tolerance to some noising attacks with decreasing the visual effects in the watermarked image might lead to losing the resistance to geometrical attacks such as scaling, rotation and translation.Focusing on resisting geometrical attacks might force the developer to sacrifice the visual image quality.For these reasons, there were algorithms that focused to achieve specific tolerance to determined attacks; some examples of these algorithms that intended to be invariant to geometrical attacks are in [4,5,6,7].www.ijacsa.thesai.org In addition, current researches in image watermarking as well as in video watermarking have shown low robustness in the case of rotation attacks.Some examples of low resistance to rotation attacks are obviously reported in [8,9,10,11,12,13,14].In these works the watermark after rotation attack was difficult to be accurately extracted and the data was mostly lost.The reported normalized correlation (NC) values were very low.Table 1 summarizes some of the reported result in the case of rotation attacks.[8] 0.73 -L.Agilandeeswari et al [9] 0.80 9.33 Ta Minh Thanh et al [10] -Nasrin M. Makbol [11] -0.50 Zhao et al [12] 0.86 -Jiansheng et al [13] 0.50 -Lusson et al [14] 0.65 -Some approaches were proposed to recover images into their original states after rotation attacks occur.In [15] they proposed a rotation estimation and recovery algorithm using image alignment, Radial Tchebichef moments or Fourier descriptors.In their algorithm, for all used methods, they need the original non-rotated image to be used as a reference image for the recovery and estimation purposes.Although the algorithm works with presence of the reference non-rotated image, it was still estimating the rotation angle with degree error reached to 4 degrees.
Another algorithm was previously proposed by Morgan McGuire [16]. In this algorithm, image registration using the Fourier-Mellin transform was used to estimate geometrical attack parameters such as scaling, rotation and translation. The algorithm showed good performance in rotation recovery; however, it reported an error of up to 1 degree. Moreover, experiments were reported for rotated scenes and not for zero-padded images. It also needs a reference image to estimate the parameters.
Previously, a symmetric reversible method was proposed by Laurent Condat and Dimitri Van De Ville [17]. In this method, a 1-D filter is designed to convolve the rotated image with appropriate fractional delay filters, and pixel interpolation is used to recover the rotated scene. However, their results showed blurred images after recovery, which indicates the algorithm's weakness in perfectly recovering rotations. Such an algorithm cannot work accurately when used with digital watermarking systems.
In the field of 3D imaging, some algorithms have been released to deal with rotated scene estimation. In [18], the authors developed a method to automatically recover image rotations from a 3D urban scene. This was achieved by estimating various parameters from the images taken by multiple cameras. Parameters such as the intrinsic camera parameters and the extrinsic pose are used, together with an edge detection algorithm and vanishing points, to estimate the rotation of the scene. This algorithm seems impractical for single 2D image rotation attacks, for example, in digital watermarking applications.
Another work in [19] introduced a recovery algorithm for rotations on 3D cameras. The algorithm was developed as part of creating panoramic view mosaics. It works by registering a sequence of images after recovering the rotation and using the registered images to estimate the focal length. However, this algorithm cannot serve applications such as digital watermarking, since it needs multiple images to recover the rotation parameters.
As noticed from Table 1, numerous algorithms have been published claiming high robustness. That is true when considering the common attacks while ignoring geometrical attacks. However, investigating these algorithms proved that rotation attacks remain uncovered even when the algorithms can tolerate other attacks such as noising, filtering, cropping and compression. The low NC and high BER values under rotation attacks for the investigated algorithms open the way for researchers to find an alternative solution that achieves high performance without affecting the results for the other attacks.
For the purpose of solving the low performance related to rotation attacks, an alternative solution that adapts a rotation recovery scenario to watermarking systems is proposed, instead of designing the watermarking algorithms to be invariant to rotation attacks at the expense of other performance metrics.
III. PROPOSED ALGORITHM
As discussed in the previous sections, the main problem lies in the weakness of most available algorithms for image and video watermarking in resisting rotation attacks. In consequence, the detection of the embedded watermarks becomes inaccurate, if not impossible.
To solve this problem, a new rotation recovery algorithm is proposed here to prepare the attacked image before performing the extraction process. The algorithm is implemented to be pluggable into the extraction component of any image or video watermarking system. It is developed to automatically detect, estimate and recover rotations for scenes rotated by acute angles, without the need for a reference image. This focus is due to the difficulty of accurately estimating acute rotations in images, in addition to the large distortion that happens to the watermark data once the scene is rotated by acute angles.
To fully recover the rotated image to its original state, the algorithm is implemented to take the attacked image as input, detect the edges in the image, estimate and compute the rotation angle, and estimate the original image size; then, according to the estimated angle and size, the rotation recovery is performed. Fig. 1 describes the proposed algorithm process.
A. Recovery algorithm
The proposed rotation recovery algorithm is implemented in the following steps. Figure 2 shows the states of the image during the execution of the recovery algorithm.
Step 1: Input the rotated image I.
Step 2: If I is in RGB, convert I to a gray scale image.
Step 3: Apply Canny Edge Detector to detect edges in image I.
Step 4: Apply image dilation on the output from Step 3 using a disk structuring element of radius = 3. Save the result as EdgeImage.
Step 5: Measure the rotation angle as follows. Set one flag carrying the first pixel value from the image corner. Measure the opposite side of the angle: loop over the image rows and count the matching pixel values, storing the count in Opposite (set to "0" at the beginning). Measure the adjacent side of the angle: loop over the image columns and count the matching pixel values, storing the count in Adjacent (set to "0" at the beginning). The rotation angle is then computed from the two measured sides, i.e., Angle = arctan(Opposite/Adjacent) (equations 1 to 4).
Step 6: Rotate the image I by Angle and save the rotated image as RImage (equation 5).
Step 7: Estimate the original image size using RImage as follows. Set two flags from RImage, where L is the length of RImage and H is the height of RImage. Loop to calculate the distances BL (Black Length) and BH (Black Height) between the edge of the image and the original scene, where BH and BL are set to "0" at the beginning (equations 6 to 9). Find the original image size, OriginalL and OriginalH, from L, H, BL and BH (equations 10 and 11).
Step 8: Crop RImage from the point (BL, BH) with size OriginalL x OriginalH.
Step 9: Return the Recovered Image.
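The following Python sketch is our illustrative implementation of the nine steps above using OpenCV and SciPy; the Canny thresholds, the structuring-element size, the rotation-direction sign, and the file name in the usage line are all assumptions rather than values from the paper.

```python
import cv2
import numpy as np
from scipy import ndimage

def recover_rotation(rotated_bgr):
    """Blindly recover a zero-padded image rotated by an acute angle."""
    gray = cv2.cvtColor(rotated_bgr, cv2.COLOR_BGR2GRAY)            # Step 2
    edges = cv2.Canny(gray, 50, 150)                                # Step 3
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))     # radius ~3
    edge_img = cv2.dilate(edges, disk)                              # Step 4

    # Step 5: the black triangle in the top-left corner, cut off by the
    # tilted scene edge, has legs whose ratio equals tan(rotation angle).
    opposite = int(np.argmax(edge_img[0, :] > 0))   # zero-run along the top row
    adjacent = int(np.argmax(edge_img[:, 0] > 0))   # zero-run down the left column
    angle = np.degrees(np.arctan2(opposite, adjacent))

    # Step 6: re-rotate by the estimated angle; the sign depends on the
    # attack's direction, so a robust version would test both candidates.
    r_image = ndimage.rotate(rotated_bgr, -angle, reshape=True, order=1)

    # Steps 7-8: measure the black margins (BL, BH) around the now
    # axis-aligned scene, then crop the estimated original region out.
    mask = cv2.cvtColor(r_image, cv2.COLOR_BGR2GRAY) > 0
    ys, xs = np.nonzero(mask)
    return r_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]    # Step 9

# Usage: simulate a 20-degree rotation attack, then undo it.
attacked = ndimage.rotate(cv2.imread("lena_watermarked.png"), 20, reshape=True)
restored = recover_rotation(attacked)
```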
B. Edge detection and angle estimation
After converting the colored attacked image to gray scale, as explained in section 3.1, and to simplify the estimation of the angle with a more accurate value, the edges of the rotated scene are detected using the Canny operator. The output of the Canny edge detector is a binary (black and white) image. Although the Canny operator works well for edges, the resulting image can still sometimes cause inaccurate angle estimation. That is due to black holes spotted in some places along the edge. These black holes can lead to wrong measurements of the angle sides. To solve this matter, an image dilation using a disk structuring element of radius = 3 is applied. This fills the holes in the edge, so that the measurements of the angle sides are accurate.
In the dilated image, to estimate the rotation angle, a flag that carries the value of the pixel chosen from the opposite side or the adjacent side can be used to count the similar pixels that have the same value. In the current case, the pixel value is "0". The counting continues until a different pixel value is found; the stop pixel value is "1". The count of pixels, for either the adjacent or the opposite side, is registered as the corresponding length. The lengths of the opposite and adjacent sides are used to estimate the rotation angle according to equations 1 to 5. Fig. 3 describes the required measurements for the proposed recovery algorithm.
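As a hypothetical numerical illustration (the pixel counts below are invented for exposition): if the run of background pixels along one side of the dilated edge image gives Opposite = 100 while the run along the other side gives Adjacent = 173, the estimated rotation angle is arctan(100/173) ≈ 30 degrees, and the scene is then re-rotated by 30 degrees in the opposite direction.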
C. Image size estimation
Once the rotation angle is estimated and the scene is re-rotated, a black (unwanted) area results, with a larger image size than the original one. Two steps must then be performed: estimating the original image size from the rotated scene and eliminating the unwanted area. Otherwise, the detection of the watermark data will not be possible.
In the proposed algorithm, the estimation of the size is implemented in a blind manner, assuming that the original image size is unknown. To perform this, the distances between the scene and the image edge (BH and BL), the original image length (width) L, and the original image height H must be measured according to equations 6 to 11. These measurements are then used to crop the recovered scene and eliminate the unwanted area, as illustrated in Fig. 2 and Fig. 3. At this point, the image is ready to be used by the extraction function of the watermarking algorithm, which allows the extraction function to retrieve the embedded data accurately.
IV. EXPERIMENTS
For the purpose of evaluating the proposed rotation recovery algorithm in a digital watermarking environment, the algorithm must be implemented on top of the extraction function of an available image watermarking algorithm. This is to ensure that the algorithm attains its main objective, which is to enable digital watermarking systems to resist rotation attacks and to increase the accuracy of the extracted watermarks.
To validate the performance of the proposed algorithm, the implementation of an image digital watermarking algorithm according to [14] is considered. This algorithm was chosen due to its weakness in withstanding rotation attacks. This made it possible to implement the proposed rotation recovery algorithm on top of the implemented image watermarking algorithm, to verify the performance and see how rotation attacks can be survived after using the proposed recovery algorithm. The utilized image watermarking algorithm is implemented based on the Discrete Wavelet Transform (DWT) and was developed according to the framework shown in Fig. 4.
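For context, a toy non-blind DWT embedding in the spirit of Fig. 4 can be sketched as follows; this additive LL-band scheme, the strength ALPHA, and the Haar wavelet are our own illustrative choices and not the exact algorithm of [14].

```python
import numpy as np
import pywt

ALPHA = 8.0  # embedding strength, illustrative

def embed(cover, wm_bits):
    """Add +/-ALPHA to the first LL coefficients according to the bits."""
    ll, details = pywt.dwt2(cover.astype(float), "haar")
    flat = ll.ravel()
    flat[: wm_bits.size] += ALPHA * (2 * wm_bits - 1)
    return pywt.idwt2((flat.reshape(ll.shape), details), "haar")

def extract(watermarked, cover, n_bits):
    """Non-blind extraction: compare LL bands of stego and cover images."""
    ll_w, _ = pywt.dwt2(watermarked.astype(float), "haar")
    ll_c, _ = pywt.dwt2(cover.astype(float), "haar")
    return (np.ravel(ll_w - ll_c)[:n_bits] > 0).astype(np.uint8)

cover = np.random.randint(0, 256, (512, 512))
wm = np.random.randint(0, 2, 64 * 64)
stego = embed(cover, wm)
print("bit accuracy:", (extract(stego, cover, wm.size) == wm).mean())  # ~1.0
```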
After implementing the image watermarking algorithm based on Fig. 4 and adapting the recovery algorithm to the extraction component, the testing is performed by comparing the watermarks extracted by the image watermarking algorithm before and after using the recovery algorithm. Fig. 5 explains the testing scenario used to evaluate the proposed recovery algorithm.
To emphasize the results, three different standard images, Baboon, Lena and Peppers, of size 512x512 are used. Each image was watermarked using the implemented image watermarking algorithm, with a watermark of size 64x64 pixels. The watermarked images are then attacked by rotating them by various acute angles. The watermarks are extracted using the implemented watermarking algorithm both with and without recovery.

V. RESULTS

The testing of the recovery algorithm was conducted using the experiments illustrated in the previous section. Two measures were used to evaluate the accuracy of the extracted watermarks before and after using the proposed rotation recovery algorithm, Normalized Correlation (NC) and Bit Error Rate (BER), according to the following formulas:

NC = (Σ_i Σ_j O(i,j) E(i,j)) / sqrt((Σ_i Σ_j O(i,j)^2) (Σ_i Σ_j E(i,j)^2)),

where O is the original watermark and E is the extracted watermark.
BER = (Σ_i Σ_j O(i,j) ⊕ E(i,j)) / (m×n),

where O is the original watermark, E is the extracted watermark, and m×n is the total number of watermark bits.
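A direct translation of the two metrics into code could look as follows; this is a minimal sketch for binary watermarks, and the 5%-flip usage example is invented for illustration.

```python
import numpy as np

def nc(o, e):
    """Normalized correlation between original (O) and extracted (E) watermarks."""
    o, e = o.astype(float).ravel(), e.astype(float).ravel()
    return float(o @ e / np.sqrt((o @ o) * (e @ e)))

def ber(o, e):
    """Fraction of the m x n watermark bits that were decoded incorrectly."""
    return float(np.mean(o.ravel() != e.ravel()))

wm = np.random.randint(0, 2, (64, 64))
noisy = wm ^ (np.random.rand(64, 64) < 0.05)      # flip ~5% of the bits
print("NC:", nc(wm, noisy), "BER:", ber(wm, noisy))
```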
After implementing the image watermarking algorithm and testing the detection of the watermarks in the normal case, where no rotation attack was applied to the watermarked images, the watermarks were extracted accurately, as shown in Fig. 6. In this case, the watermarks extracted without any rotation are considered as the original watermarks for calculating the NC and BER values of the watermarks extracted after recovery. The comparison is then made with the watermarks extracted after a rotation attack.
To evaluate the accuracy of the rotation recovery algorithm, the normalized correlation was measured between the original watermarked image and the image recovered from rotation attacks, in order to indicate how accurate the recovery is. Results for the test images over various angles have shown an NC of "1" for Lena and Peppers and 0.98 for the Baboon image. The Baboon image did not reach an NC of "1" because of the expected error in size estimation for some images, which is around 1 pixel in height or width. However, for recovered images with an NC value of 0.98, this was still enough to retrieve the watermark accurately. Fig. 7 shows the NC values of the rotation recovery at the various angles for the three images.
After ensuring the high performance of the rotation recovery algorithm, the experiments with the implemented watermarking algorithm were conducted. The three aforementioned test images, Baboon, Lena and Peppers, were used. The watermarks extracted from attacked images, at different rotation angles, and from recovered images were used to measure both NC and BER. As seen in the results above, the NC value of the extracted watermarks improved dramatically, from around 0.80 for the attacked images to 1 for the recovered images in most cases. At the same time, the BER value improved from around 0.30 to almost 0.000 in most cases. These results show that the proposed rotation recovery algorithm suits image watermarking systems well and increases their resistance against rotation attacks.
As shown in Fig. 14, a sample of the watermarks extracted from attacked and recovered images is presented. The extracted watermarks are almost lost when attacks are applied to the watermarked images. In contrast, the watermarks were accurately extracted after performing the recovery using the proposed algorithm.
VI. CONCLUSION
In this paper, current digital image and video watermarking algorithms were investigated in terms of robustness. Consequently, the weakness of most algorithms was found to reside in surviving rotation attacks. This article has proposed a new automatic and blind algorithm to recover acute-angle rotations in images. The proposed algorithm estimates the angle of rotation mathematically and then estimates the original watermarked image size in a blind way.
Based on the estimated angle and size, the original image is recovered. The proposed algorithm has been made pluggable into the extraction function of any watermarking algorithm, so that accurate watermarks can be extracted. Using such an algorithm saves developers from considering rotation attacks during the design of watermarking algorithms. This work has been designed for zero-padded rotated images, while investigating images under a cropping attack applied after the rotation attack is left as a future improvement of the current work. Testing the proposed algorithm showed an NC of "1" for the recovery process, which indicates accurate angle estimation. Evaluating the adaptation under digital watermarking algorithms showed very high accuracy for the extracted watermarks as well. Beyond its specific use for digital watermarking, the proposed rotation recovery algorithm can readily be integrated into other image processing applications. | 2017-05-04T10:08:40.125Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "9eb416780538b6699579c24b635d92c12a70c868",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume7No11/Paper_10-Automatic_Rotation_Recovery_Algorithm_for_Accurate_Digital_Image.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9eb416780538b6699579c24b635d92c12a70c868",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14731056 | pes2o/s2orc | v3-fos-license | Effects of Porphyromonas gingivalis lipopolysaccharide on osteoblast-osteoclast bidirectional EphB4-EphrinB2 signaling
In bone remodeling, the Eph family is involved in regulating the process of osteoclast and osteoblast coordination in order to maintain bone homeostasis. In this study, the effects of Porphyromonas gingivalis lipopolysaccharide (Pg-LPS) on the osteoblast-osteoclast bidirectional EphB4-EphrinB2 signaling were investigated. An osteoblast-osteoclast co-culture system was achieved successfully. Hence, direct contact and communication between osteoblasts and osteoclasts was permitted. Regarding the protein expression and gene expression of EphB4 and EphrinB2, it was shown that Pg-LPS increased the expression of EphB4 while inhibiting the expression of EphrinB2. Therefore, the results indicate that, when treated with Pg-LPS, the EphB4 receptor on osteoblasts and the EphrinB2 ligand on osteoclasts may generate bidirectional anti-osteoclastogenic and pro-osteoblastogenic signaling into respective cells and potentially facilitate the transition from bone resorption to bone formation. This study may contribute to the control of osteoblast differentiation and bone formation at remodeling, and possibly also modeling, sites.
Introduction
Bone remodeling is a coupling process of bone resorption and bone formation (1). Resorption by osteoclasts and formation by osteoblasts, which leads to the occurrence of a coupling mechanism, is a complex and life-long process (2). This remodeling process has been described as a 'bone remodeling cycle' consisting of activation, resorption, reversal and formation phases (3). It is crucial for the normal function of bone, including bone growth, bone repair and the replacement of obsolete bone. Therefore, the molecular mechanism of coupling has long been a focus of research in this area.
However, prior to the discovery of the effects of bidirectional Eph-ephrin signaling in bone homeostasis, no proper coupling mechanism was reported that was able to explain this process. Since its discovery 25 years ago, the Eph family of receptor tyrosine kinases, comprised of A-and B-subfamilies, has been found to be involved in a growing number of physiological and pathological processes in various cell types and organs (4,5). Notably, it has been confirmed that bidirectional Eph-ephrin signaling participates in many biological processes, including angiogenesis, bone and organizational development and axon guidance (6)(7)(8)(9)(10).
In bone remodeling, osteoclast and osteoblast coordination is the key to maintaining bone homeostasis. Ephrin is involved in regulating this process (11). It has been demonstrated that reverse signaling through EphrinB2 into osteoclast precursors suppresses osteoclast differentiation, while forward signaling through EphB4 into osteoblasts enhances osteogenic differentiation and the overexpression of EphB4 in osteoblasts increases bone mass in transgenic mice (12). This finding revealed the potential role of the Eph/ephrin receptor family of ligands in the bone. It has been suggested that EphrinB2 may act in a paracrine or autocrine manner on the osteoblast to stimulate osteoblast maturation and/or bone formation (13).
Chronic periodontitis, a major cause of anodontia in adults, is one of the most common oral diseases (14). Porphyromonas gingivalis (Pg) is recognized as the main pathogen in chronic periodontitis (15). Lipopolysaccharide (LPS) from Pg is a component of Gram-negative bacterial cell walls. Porphyromonas gingivalis lipopolysaccharide (Pg-LPS), with high toxicity and antigenicity to periodontal tissue, may lead to the loss of periodontal attachment and alveolar bone absorption (16,17). LPS has also been shown to be able to induce the formation of osteoclasts with bone resorbing activity in RAW 264.7 cells (18).
In the present study, the effects of Pg-LPS on osteoblast-osteoclast bidirectional EphB4-EphrinB2 signaling were studied. Osteoblasts and osteoclasts are derived from precursors originating in the bone marrow (19). Interaction among cells mediated by the EphB4 receptor on osteoblasts and the EphrinB2 ligand on osteoclasts generates bidirectional anti-osteoclastogenic and pro-osteoblastogenic signaling into respective cells, potentially facilitating the transition from bone resorption to bone formation (20). This local regulation may contribute to the control of osteoblast differentiation and bone formation at remodeling, and possibly also modeling, sites. In the present study, in order to mimic the in vivo environment and the process of bone remodeling, osteoblasts from the jawbones of newborn mice and osteoclasts induced from RAW 264.7 macrophage cells were successfully co-cultured. The effects of Pg-LPS on these cells, and the potential use of Pg-LPS, were then studied.
Materials and methods
Animals and chemicals. Female and male newborn Kunming mice (<48 h old) were obtained from the Jilin University Animal Center (Changchun, China). No metabolic or systemic diseases were observed in the mice. Pg-LPS was purified in our laboratory from Escherichia coli O55:B5 (Sigma, St. Louis, MO, USA). This study was approved by the ethics committee of Jilin University (Changchun, China).
Isolation and culture of osteoblasts. Osteoblasts were isolated sterilely from small specimens of mouse jawbone. Bone fragments (~1 mm 3 ) were washed three times with phosphate-buffered saline (PBS) and digested in 0.25% trypsin-EDTA for 10 min. The enzymatic reaction was stopped by adding an equal volume of Dulbecco's modified Eagle's medium (DMEM; Gibco, Carlsbad, CA, USA) with 10% fetal bovine serum (FBS; Gibco). Washing of fragments was repeated three more times. The fragments were then placed in the cell culture dish and cultured in DMEM supplemented with 10% FBS and 1% penicillin/streptomycin in a humidified atmosphere containing 5% CO 2 at 37˚C. When cells covered ~80% of the cell culture dish, conventional digestion and passage were conducted. The medium was changed every two days after passaging, and the cells were ready for use once they had been passaged to the third generation. The morphology of the osteoblasts was observed under an inverted phase contrast microscope (Axiovert 200; Zeiss, Göttingen, Germany).
Osteoblast identification. The isolated osteoblasts were identified through alkaline phosphatase (ALP) staining and the observation of calcium nodes. Elevated ALP expression is one of the most widely used markers for mature osteoblasts. ALP staining was performed using the Burstone method. Prior to observation, the original culture medium was removed and the attached cells were fixed with 10% (v/v) formalin/PBS for 10 min at 4˚C and stained using the substrate naphthol AS-BI phosphate coupled with Fast Blue RR diazonium salt at 37˚C. To perform the observation of calcium nodes, the third generation of osteoblasts, which was cultured for three weeks, was also examined under an inverted phase contrast microscope.
Induction and culture of osteoclasts. Osteoclasts were induced from RAW 264.7 cells, which were purchased from the China Center for Type Culture Collection (CCTCC, Wuhan, China). During the induction period, RAW 264.7 cells were seeded in a 6-well culture plate at a density of 1x10 4 cells/well and left overnight. The cells were subsequently treated with 50 ng/ml RANKL to induce osteoclasts, and the culture medium of DMEM supplemented with 10% FBS and 1% penicillin/streptomycin was replaced every two days. The osteoclasts were induced successfully after being cultured for six days.
Osteoblast-osteoclast co-culture system. The isolated third generation osteoblasts were seeded in the previously mentioned well of induced osteoclasts at a density of 2x10 5 cells/well. The co-cultured osteoblasts-osteoclasts were treated with 75 ng/ml Pg-LPS for 24 h. Cells cultured without the addition of Pg-LPS were used as the control. The morphology of the co-cultured cells was observed under an inverted phase contrast microscope.
Protein expression of EphB4 and EphrinB2. EphB4 and EphrinB2 protein expression in the induced osteoclasts and in the Pg-LPS-treated and untreated co-cultured osteoblasts-osteoclasts was determined by western blot analysis and immunofluorescence staining using antibodies directed at the respective proteins. For western blot analysis, cells were harvested and lysed and the total protein content was determined using a BCA protein assay kit (Beyotime, Beijing, China). The lysate with 30 mg protein was loaded onto SDS-polyacrylamide gel for electrophoresis and transferred to a nitrocellulose membrane. The membranes were blocked in 5% nonfat dried milk for 45 min at 37˚C and then incubated overnight with 1:1000 mouse anti-EphB4 monoclonal antibody (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) and 1:1000 mouse anti-EphrinB2 monoclonal antibody (Santa Cruz Biotechnology, Inc.) at 4˚C. The membranes were washed three times in TBST and incubated with the corresponding secondary anti-mouse antibody (Santa Cruz Biotechnology, Inc.) conjugated with horseradish peroxidase (HRP) at room temperature for 45 min. The detected protein signals were measured using an enhanced chemiluminescence (ECL) kit (Beyotime).
Gene expression of EphB4 and EphrinB2. To further evaluate the expression of EphB4 and EphrinB2, changes in the gene expression of EphB4 and EphrinB2 were examined by quantitative reverse transcription-polymerase chain reaction (qPCR). Sequences of the primers for the target genes are shown in Table I. According to the manufacturer's instructions, total RNA was extracted.

Statistical analysis. Data are expressed as the mean ± standard deviation (SD). An unpaired Student's t-test was used to test the significance of the observed differences between the study groups. A value of P<0.05 was considered to indicate a statistically significant difference.

Identification of osteoblasts. The morphology of the osteoblasts is shown in Fig. 1a. After being cultured for five days, cells around the mouse jawbone fragments increased significantly. They became concentrated and certain tissue fragments began to fuse. After seven days, the morphology was varied and the majority of cells were triangular or polygon-like. With increasing time, the number of osteoblasts increased and the cells were purified through repeated washing and digestion (Fig. 1a).
The ALP staining showed a clear positive effect (Fig. 1b). Many reddish-brown particles were visible in the cells. A large number of high-density black nodular aggregates of varying size were seen during the observation of calcium nodes (Fig. 1c).
EphrinB2 expression of the induced osteoclasts. The immunofluorescence staining (Fig. 2a) and western blot analysis (Fig. 2b) clearly show that the expression of EphrinB2 was higher in the induced osteoclasts than in the control cells.
Morphological observation of the co-cultured osteoblast-osteoclast system. Direct contact between osteoblasts and osteoclasts was used in the present study. The results indicate that the isolated osteoblasts and induced osteoclasts grew well when co-cultured (Fig. 3).
Protein expression of EphB4 and EphrinB2. As shown in Fig. 4, immunofluorescence staining and western blot analysis were conducted to study the changes in the expression levels of EphB4 and EphrinB2 proteins in the osteoblast-osteoblast co-culture. After being treated with Pg-LPS at a concentration of 75 ng/ml for 24 h, the expression of EphB4 increased, while that of EphrinB2 decreased.
Gene expression of EphB4 and EphrinB2. The gene expression of EphB4 and EphrinB2 was detected. The results show that the relative EphB4 mRNA expression level was significantly increased in the Pg-LPS-treated osteoclast-osteoblast co-culture compared with that in the control (Fig. 5a; P<0.05). However, EphrinB2 mRNA expression was significantly decreased in the Pg-LPS-treated co-culture compared with that in the control (Fig. 5b; P<0.05). Therefore, the gene studies are in line with those on protein expression.
Discussion
In the present study, the effects of Pg-LPS on osteoblast-osteoclast bidirectional EphB4-EphrinB2 signaling were investigated. The results show that Pg-LPS increased the expression of EphB4 while inhibiting the expression of EphrinB2.
Our results show that many reddish-brown particles following ALP staining were visible in the cells. A large number of high-density black nodular aggregates of varying size were seen in the observation of calcium nodes (Fig. 1c). Triangular or polygon-like cells centered on the scattered aggregates, thus leading to the formation of calcium nodes. ALP staining and the observation of calcium nodes confirmed the successful isolation of osteoblasts.
RAW 264.7 cells, from Abelson murine leukemia virus-induced tumors, are osteoclast precursor cells derived from mice and are considered to represent the early differentiation stages of the osteoclast precursor (21). Expression of EphrinB2 is one of the indicators of induced mature osteoclasts. Hence, to verify the successful induction of osteoclasts, two complementary assays (immunofluorescence staining and western blot analysis) were employed to monitor the changes in EphrinB2 (Fig. 2). The results showed that the expression of EphrinB2 was significantly increased compared with that in the control group. Thus, the osteoclasts were successfully induced.
Direct contact between osteoblasts and osteoclasts was employed in the present study, so that receptors that exert their effects through direct cell membrane contact were able to function. The co-cultured osteoblast-osteoclast system made it possible to mimic the real environment in vivo. After being treated with Pg-LPS at a concentration of 75 ng/ml for 24 h, the expression level of EphB4 increased, while that of EphrinB2 decreased. This result showed clear effects of Pg-LPS on osteoblast-osteoclast bidirectional EphB4-EphrinB2 signaling. Osteoblasts and osteoclasts are derived from precursors originating in the bone marrow (19). The interaction among cells mediated by the EphB4 receptor on osteoblasts and the EphrinB2 ligand on osteoclasts generates bidirectional anti-osteoclastogenic and pro-osteoblastogenic signaling in the respective cells, potentially facilitating the transition from bone resorption to bone formation. The present study is consistent with a report by Kubo et al (20).
When treated with Pg-LPS, the gene expression of EphB4 was significantly promoted while that of EphrinB2 was inhibited. EphrinB2, involved in reverse signaling into osteoclast precursors, is associated with the differentiation of osteoclasts. Forward signaling through EphB4 into osteoblasts promotes osteogenic differentiation. Contact between EphrinB2 and EphB4 inhibited the formation of osteoclasts, thus promoting the formation of osteoblasts. The results of the present study indicate that Pg-LPS regulates bidirectional EphB4-EphrinB2 signaling. Therefore, the differentiation of osteoblasts was promoted, while the differentiation of osteoclasts was inhibited. This regulation is considered to be an effective therapeutic approach for the treatment of bone-related diseases. Hence, this study may contribute to the control of osteoblast differentiation and bone formation at remodeling, and possibly also modeling, sites.
In conclusion, when treated with Pg-LPS, the EphB4 receptor on osteoblasts and the EphrinB2 ligand on osteoclasts may generate bidirectional anti-osteoclastogenic and pro-osteoblastogenic signaling into respective cells and potentially facilitate the transition from bone resorption to bone formation. | 2018-04-03T02:00:37.678Z | 2013-10-23T00:00:00.000 | {
"year": 2013,
"sha1": "32e6e0530d0ab917cdaaf972a20a9f4beb7943a6",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/etm/7/1/80/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "32e6e0530d0ab917cdaaf972a20a9f4beb7943a6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59538102 | pes2o/s2orc | v3-fos-license | A Comprehensive Survey on Cooperative Relaying and Jamming Strategies for Physical Layer Security
Physical layer security (PLS) has been extensively explored as an alternative to conventional cryptographic schemes for securing wireless links. Many studies have shown that the cooperation between the legitimate nodes of a network can significantly enhance their secret communications performance, relative to the noncooperative case. Motivated by the importance of this class of PLS systems, this paper provides a comprehensive survey of the recent works on cooperative relaying and jamming techniques for securing wireless transmissions against eavesdropping nodes, which attempt to intercept the transmissions. First, it provides an in-depth overview of various secure relaying strategies and schemes. Next, a review of recently proposed solutions for cooperative jamming techniques is provided with an emphasis on power allocation and beamforming techniques. Then, the latest developments in hybrid techniques, which use both cooperative relaying and jamming, are elaborated. Finally, several key challenges in the domain of cooperative security are presented along with an extensive discussion on the applications of cooperative security in key enablers for 5G communications, such as nonorthogonal multiple access, device-to-device communications, and massive multiple-input multiple-output systems.
I. INTRODUCTION
The broadcast nature of wireless transmissions allows any receiver, within its coverage region, to capture the transmitted signal. This makes information security a major concern in the design of wireless networks. Recent advances in wireless technologies, such as the long-term evolution for cellular networks and Wi-Fi systems, have caused an exponential growth in the number of connected devices [1], which in turn entails the risk of increasing security threats, like data hacking and eavesdropping. Through cryptographic approaches, data security has been traditionally addressed at the higher layers of the open systems interconnection model, whereby the plain text message is encrypted using powerful algorithms that assume a limited computational capacity of potential eavesdroppers [2]. However, due to recent enhancements in the computational power of devices and optimization strategies for breaking encryption codes, there is a need for better security strategies to protect information from unauthorized devices. Another drawback of the conventional cryptographic schemes is the requirement for key management to exchange the secret key between legitimate entities. Key sharing requires a trusted entity, which cannot always be ensured in distributed wireless networks, like wireless sensor networks and wireless ad-hoc networks. On the other hand, the lower layers (physical and data link layers) are oblivious to any security considerations. Considering the recent challenges, security must be considered at the physical layer to increase the robustness of existing schemes.
Physical layer security (PLS) was pioneered by Shannon and further discussed by Wyner [3], [4], [5], and has thereafter been identified as an appealing strategy to cope with the ever-increasing secrecy demands from the information theoretic perspective. In recent years, PLS has been investigated both as an alternative and as a complementary approach to conventional cryptographic methods [6]. PLS schemes exploit the random fading in wireless propagation channels to secure the communication link, while assuming no restrictions on the eavesdropper's computational power [4]. The general wiretap model of PLS consists of a pair of legitimate transmitter and receiver (also known as Alice and Bob) that try to communicate in the presence of an eavesdropper (also called Eve), as shown in Figure 1. Alice encodes a message w_k into a codeword X^n = (X(1), X(2), ..., X(n)) and transmits it to Bob, where k and n denote the number of message bits and codeword symbols, respectively. The signal received at Bob can be written as Y^n = (Y(1), Y(2), ..., Y(n)), whereas the signal received at the eavesdropper is given as Z^n = (Z(1), Z(2), ..., Z(n)). It is simply assumed that both Bob and Eve experience quasi-static fading. The signal Y(i) received by Bob is written as

Y(i) = G_m X(i) + ℵ_m(i). (1)

Similarly, the signal received at Eve can be determined as

Z(i) = G_e X(i) + ℵ_e(i), (2)

where i = 1, 2, ..., n indexes the symbols over the length of the signal, ℵ_m and ℵ_e represent Gaussian noise with zero mean and variances N_m and N_e for the main and wiretap links, respectively, and G_m and G_e denote the channel amplitude gains of the main and wiretap channels, respectively. Wyner's contribution was mainly the introduction of the concept of a wiretap channel for the discrete memoryless channel. Subsequently, research efforts were directed towards exploring PLS in Gaussian channels [7], [8] and then fading channels [9], [10]. Although many studies focused on ensuring perfect secrecy, further efforts were directed towards weak secrecy, by investigating the impact of fading on secrecy performance [11], [12]. To incorporate different kinds of eavesdroppers, extensive studies of passive and active eavesdropping scenarios were provided [13]. A radio eavesdropper, also called a passive eavesdropper, is capable of detecting and intercepting the main transmission without bringing any changes to the network. It also cannot make any modifications to the message obtained at the intended receiver. As a result, this type of attack is difficult to detect. On the contrary, an active eavesdropper can intercept and monitor a transmission and has the capability to modify the main channel [14]. The major aim of this type of attack is to degrade the received signal at the intended receiver, causing more decoding errors. In the case of multiple adversaries, eavesdroppers can work independently (non-collusion) or cooperatively (collusion). Non-colluding eavesdroppers are mutually independent and do not share received information to cooperatively decode the confidential message [15], [16], [17], whereas colluding eavesdroppers try to intercept the communication and mutually share information, such as the received signal-to-noise ratio (SNR), to decode the message [18], [19]. A wireless link from a source to colluding eavesdroppers can then be considered as a single-input multiple-output (SIMO) link [20]. Despite the continued research interest in PLS schemes, as shown in Table I, many open problems remain.
For instance, practical coding techniques for PLS and their performance metrics are mostly unknown [21].
A. Related Surveys
The cooperative relaying and jamming strategies in PLS have been discussed extensively in the literature. However, very few comprehensive surveys discuss all aspects, requirements and challenges of cooperative security. For instance, in [30], the attacks in cognitive radio networks are categorized into learning attack, primary user emulation, data falsification, jamming attack, objective function attack and eavesdropping. The authors also characterize the secrecy capacity for cognitive radio networks in the presence of multiple eavesdroppers. In [31], Yang et al. provided a detailed survey on PLS and the state-of-the-art in 5G networks. The authors analyzed the three most dominant 5G technologies: heterogeneous networks (multi-tier systems having multiple devices with different characteristics), massive multiple-input multiple-output (MIMO) and millimeter-wave (mmWave) technologies. The authors also highlight various opportunities and challenges for each of these technologies. In [32], the authors investigated code design for security and reviewed the state-of-the-art of polar codes, low-density parity-check (LDPC) codes, and lattice codes. In addition, they also surveyed the recent advances in PLS techniques for massive MIMO, mmWave, heterogeneous networks and full-duplex technology. In [33], a literature review of PLS techniques was presented from the perspective of imperfect channel estimation. More specifically, the authors presented a high-level overview of the advancements in PLS for various wireless networks and discussed approaches for the design of secret key exchange and signal processing techniques for secrecy enhancement under imperfect channel estimation. In [34], Trappe et al. focused on different challenges of PLS; they highlighted various practical aspects of PLS that require research attention and discussed the benefits attached to these solutions. Mukherjee et al. in [35] studied PLS in detail for different multi-user conditions and presented various protocols for secret key exchange as well as approaches for code design for information theoretic secrecy. The authors also provided future research directions for the practical realization of PLS. An overview of physical layer based secure communication strategies was provided in [36]. After reviewing several information-theoretic studies, the authors asserted that, despite numerous idealized assumptions in the PLS literature, game-theoretic strategies and multi-antenna transmission techniques can potentially help realize the vision of unbreakable and keyless security in wireless links. A brief survey of the recent literature on jamming techniques was presented in [37]. The authors provided a description of various jamming techniques and highlighted their associated advantages and disadvantages.
B. Motivation and Contribution
While these surveys are the closest to the work presented in this paper, it is noteworthy that the material presented here is a continuation, as well as an update, of the recent achievements in the field, with emphasis on PLS implementation through the cooperation of helping nodes. Specifically, we discuss recent developments in secure communications through cooperative relaying and cooperative jamming strategies. Thus, our work is not limited to different cooperative PLS schemes and jamming techniques; rather, we aim to provide a taxonomy of the different proposed approaches in this area. The main contributions of this work can be summarized as follows:
1) Providing a brief overview of cooperative relaying and jamming techniques.
2) Developing a literature taxonomy of cooperative relaying, jamming, and hybrid techniques.
3) Discussing several open problems in secure cooperative relaying and jamming.
4) Presenting applications of cooperative security for 5G technologies including energy harvesting networks, relay-aided device-to-device communications and massive MIMO systems.
Finally, for clarity, a taxonomy of the cooperative relaying and jamming schemes surveyed in this work is provided in Figure 2.
C. Paper Organization
The remainder of the paper is organized as follows. Section II discusses some fundamentals of cooperative relaying for PLS. Section III provides an exhaustive discussion on secure relaying techniques. In Section IV, various cooperative jamming schemes are reviewed. Section V summarizes various hybrid cooperative strategies, while Section VI discusses the application of cooperative PLS in future 5G technologies. Finally, Section VII provides some concluding remarks. A list of acronyms used in this work is provided in Table II.

TABLE I. Summary of PLS solutions by security issue and network type:
Security Issue | Reference | Network Type | Solution
Authentication | [22], [23] | Wireless network | Fingerprinting
Authentication | [24] | Wireless body area networks | Wireless channel exploitation
Authentication | [25] | Mobile network | Time-varying carrier frequency offset
Authentication | [26] | Cognitive radio networks | Authentic tag generation by one-way hash chain
Key agreement | [27] | Mobile networks | Deep fade detection for randomness extraction; light-weight information reconciliation
Secrecy capacity enhancement | [28], [29] | Cooperative wireless network | Optimization
Secrecy capacity enhancement | [30] | Cognitive radio networks | Cooperative jamming
Secrecy capacity enhancement | [29] | Cellular networks | Stochastic geometry and random matrix theory
II. FUNDAMENTALS OF PHYSICAL LAYER SECURITY
This section reviews some key concepts, necessary for understanding information theoretic security in cooperative networks.
A. Performance Metrics
For the readers' comprehension, some of the secrecy performance metrics have been highlighted.
1) Secrecy Rate: It is the information transmission rate of the secret message, represented as

R_s = (1/n) H(W), (3)

where H(.) is the entropy of the confidential message W.
2) Equivocation Rate: It is a measure of the eavesdropper's uncertainty about the confidential message W, given that Z^n has been received at the eavesdropper. It can be expressed as

R_eq = (1/n) H(W | Z^n). (4)

3) Perfect Secrecy: In the case of physical layer security, perfect secrecy is assumed to be achieved if, for a sequence of (M, n) codes, the information leaked to the eavesdropper vanishes as the codeword length grows, i.e.,

(1/n) I(W; Z^n) → 0 as n → ∞. (5)

The amount of information leakage can be represented as

R_s − R_eq = (1/n) I(W; Z^n), (6)

where I(.) represents the mutual information function. It can be noted that, as n → ∞, R_s − R_eq → 0, and hence no information is leaked to the eavesdropper.
4) Secrecy Capacity:
The secrecy capacity C_sec for a wireless channel can be defined as the maximum achievable secrecy rate R_s [10]. Mathematically, it can be written as

C_sec = max{R_s : P_e ≤ ε}, (7)

where P_e = Pr(Ŵ ≠ W) is the probability of error, which is a measure of the reliability of the information at Bob, Ŵ is the decoded message at Bob, and ε > 0. The secrecy capacity can alternatively be written as [36]

C_sec = max_{p(u,x)} [I(U; Y) − I(U; Z)], (8)

where U is an auxiliary random variable, which creates two virtual channels, U → Y and U → Z, according to the concept of channel prefixing [38]. Determining the secrecy capacity is then virtually the same as finding the joint probability distribution p(u, x) of X and U that maximizes the difference between the mutual information of the main and the wiretap links. Then, by invoking the Shannon capacity theorem, (8) can be rewritten as

C_sec = [C_s − C_e]^+, (9)

where the notation [x]^+ represents max{0, x}, and C_s and C_e are the channel capacities of the main and the wiretap link, respectively. The main condition here is that C_s > C_e, which emphasizes the fact that the main channel must be better than the wiretap channel, irrespective of the eavesdropper's computational power. This is another motivation to exploit cooperative communications to provide this much-desired advantage to the main channel.

5) Secrecy Outage Probability: The outage probability of the secrecy capacity, also called secrecy outage probability (SOP), is the likelihood that a target secrecy rate cannot be achieved. In the presence of an eavesdropper in the fading channel, SOP is one of the most commonly used secrecy performance metrics. It can be formulated as [39]

P_out = Pr(C_sec < R_s). (10)
7) Probability of Strictly Positive Secrecy Capacity: The probability of strictly positive secrecy capacity (SPSC) is the probability that the secrecy capacity C_sec remains higher than 0 [43], [44], which is given as

P_SPSC = Pr(C_sec > 0).

B. General System Model

Figure 3 presents a generalized cooperative PLS system model. In this model, source S_1 transmits a signal to destination S_2 in the presence of multiple eavesdroppers E and intermediate helper nodes H, where E = {E_m | m = 1, 2, ..., M} and H = {H_k | k = 1, 2, ..., K}. Note that a helper node can act as a relay, or a jammer, or both.¹ The helper nodes can be adaptively selected to play different roles based on their location. For instance, the nodes closer to the source can relay messages to the destination, while the other nodes can act as jammers. Both cases are now discussed individually.

¹ Since a helper node requires at least two antennas to act as a relay and a jammer simultaneously, for the sake of brevity we only perform derivations for the case where the helper node performs either relaying or jamming.
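To make the metrics above concrete, the following minimal Monte Carlo sketch (the SNR values, target rate, and unit-variance Rayleigh-fading assumption are illustrative choices of ours, not taken from a specific reference) estimates the secrecy capacity of (9), the SOP of (10), and the SPSC probability for a single-eavesdropper instance of this system model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, SNR_M, SNR_E, R_S = 100_000, 10.0, 3.0, 1.0   # illustrative parameters

# Quasi-static Rayleigh fading: exponentially distributed power gains.
g_m = rng.exponential(1.0, N)          # main (S1 -> S2) channel
g_e = rng.exponential(1.0, N)          # wiretap (S1 -> E) channel

c_s = np.log2(1 + SNR_M * g_m)         # main channel capacity
c_e = np.log2(1 + SNR_E * g_e)         # eavesdropper channel capacity
c_sec = np.maximum(c_s - c_e, 0.0)     # secrecy capacity, as in (9)

print("SOP, eq. (10):", np.mean(c_sec < R_S))
print("Pr(SPSC):     ", np.mean(c_sec > 0))
```

Extending the sketch to M colluding eavesdroppers would amount to summing their received SNRs before computing c_e, while the non-colluding case would take the maximum instead.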
1) Cooperative Relaying:
There are numerous advantages of introducing relays in the network. Relays can be deployed in areas where the usual backhaul solutions are either unavailable or too expensive. Relaying is also a feasible solution when site acquisition for base station (BS) deployment is a problem. Moreover, relay networks can be deployed and removed easily, as compared to conventional cellular infrastructure [45]. By deploying relays near the cell edge, the throughput of users can be significantly improved. The deployment of relay networks in areas with low signal levels can result in higher SNRs for the surrounding users, which increases their achievable data rates. In scenarios where several user equipments move in a group (e.g., in a train), a co-located relay can provide improved mobility performance for that group of users. Typically, relaying involves two phases for transmitting a message from the source to the destination: in the first phase, the message is broadcast from the source to the relay and the destination [46]. During the second phase, the relay node transmits its received message to the destination using a specific protocol, e.g., amplify-and-forward (AF) [47], compress-and-forward (CF) [48], compute-and-forward (CTF) [49], or decode-and-forward (DF) [50]. According to the AF protocol, the relay transmits a scaled version of its received signal. For the CF protocol, the relay compresses the received message before retransmitting it to the destination. In a multiuser scenario, the CTF protocol allows the relay to decode a linear combination of the transmitted messages, received from a noisy observation of the channel, which is then passed on to the destination. The destination solves for its desired messages after it has received a sufficient number of linear combinations. In the DF protocol, the relay first decodes the received message and then re-encodes the signal for transmission to the destination [46]. It is pertinent to note that while the AF protocol is simpler to implement, as compared to DF, CTF and CF, its main disadvantage is the amplification of noise in addition to the received signal. The DF protocol provides its best performance when the relay is positioned near the source, or in the case of good channel conditions. Based on the transmission and reception capability, there are two types of relays: half-duplex (HD) relays [51], [52], [53] and full-duplex (FD) relays [54], [55], [56]. A HD relay needs two orthogonal channel uses to transmit and receive information, whereas a FD relay can simultaneously transmit and receive information, allowing the spectrum to be more efficiently utilized. The FD relaying mode also requires effective mitigation of self-interference at the relay caused by the significant power difference between the received and transmitted signals, assuming identical antenna gains [54]. Despite their lower spectral efficiency, HD relays are preferred in practical systems, due to their low complexity and ease of implementation [57].
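As a toy illustration of the AF/DF distinction described above, the short Python sketch below passes BPSK symbols through a two-hop link (the gains, powers, and the hard-decision DF decoder are illustrative assumptions, not a model from a specific reference); it shows how AF amplifies the relay's own noise, whereas DF regenerates a clean signal at the cost of possible decoding errors at the relay.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N0 = 1.0, 0.1                      # transmit power and noise variance
h_sr, h_rd = 0.9, 0.8                 # source-relay / relay-destination gains
x = np.sqrt(P) * rng.choice([-1, 1], 1000)          # BPSK source symbols

# Phase 1: the relay observes a noisy copy of the source signal.
y_r = h_sr * x + rng.normal(0, np.sqrt(N0), x.size)

# AF: rescale y_r to the power budget -- the relay noise is amplified too.
beta = np.sqrt(P / (P * h_sr**2 + N0))
y_d_af = h_rd * (beta * y_r) + rng.normal(0, np.sqrt(N0), x.size)

# DF: the relay decodes (a hard BPSK decision here) and re-encodes cleanly.
x_hat = np.sqrt(P) * np.sign(y_r)
y_d_df = h_rd * x_hat + rng.normal(0, np.sqrt(N0), x.size)

print("AF bit errors:", np.mean(np.sign(y_d_af) != np.sign(x)))
print("DF bit errors:", np.mean(np.sign(y_d_df) != np.sign(x)))
```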
As discussed earlier, the transmission takes place in two phases by dividing a single block of time into two timeslots. However, this can vary for different relaying techniques and protocols. The secrecy capacity under different relaying protocols is listed in Table III. To provide further insights on the impact of different relaying protocols, Figure 4 plots the achievable secrecy rate for different HD relaying protocols, as a function of the main link's average transmit power. The relaying performance is benchmarked against the direct communication scheme. From Figure 4, it can be observed that the achievable secrecy rate generally increases with an increase in the transmit power. This increase in the secrecy rate is the lowest for direct transmission and the highest for DF relaying. More specifically, for the direct transmission case, the secrecy rate increases as the number of eavesdroppers decreases from 3 to 1. For the multiple eavesdroppers case, the secrecy rate for non-colluding eavesdroppers is higher than for the colluding eavesdroppers case. Similar trends can be observed for the AF and DF relaying schemes. Among the DF and AF protocols, the largest secrecy rate is achieved for DF under non-colluding eavesdropping conditions. When H=3, we consider the optimal relay selection scheme [41], in which a helper node is selected based on the CSI of both the source-relay and relay-destination links. In general, the figure shows that the secrecy rate is higher when H>E and vice versa, for both AF and DF protocols. However, at lower values of the transmit power, AF outperforms DF in terms of secrecy rate when H<E. Similar trends were also reported in [58].
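The following minimal sketch is our own illustrative formulation, loosely following generic HD-DF secrecy rate expressions of the kind summarized in Table III rather than any single entry; it computes the achievable secrecy rate of one DF relay against an eavesdropper that overhears both time slots, and applies optimal relay selection over a few candidate helpers.

```python
import numpy as np

def df_secrecy_rate(g_sr, g_rd, g_se, g_re, snr=10.0, combine_slots=True):
    """HD-DF secrecy rate; the 1/2 factor reflects the two HD time slots."""
    c_main = 0.5 * np.log2(1 + snr * min(g_sr, g_rd))   # weakest hop dominates
    # The eavesdropper overhears S in slot 1 and R in slot 2; it can
    # MRC-combine the two copies or simply keep the stronger one.
    g_eve = (g_se + g_re) if combine_slots else max(g_se, g_re)
    c_eve = 0.5 * np.log2(1 + snr * g_eve)
    return max(c_main - c_eve, 0.0)

# Optimal relay selection: pick the helper maximizing the secrecy rate.
rng = np.random.default_rng(0)
candidates = [df_secrecy_rate(*rng.exponential(1.0, 4)) for _ in range(3)]
print("best-relay secrecy rate:", max(candidates))
```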
2) Cooperative Jamming: Although interference is traditionally considered to be undesirable for network operations, it can be leveraged for securing wireless communication links. The most prominent application is cooperative jamming [66], [67], wherein a helper node may sacrifice its entire rate in order to create interference at the eavesdropper and degrade its performance. Depending on the design considerations, the jamming signals can be of different types. For instance, Gaussian noise, which is similar to additive noise, degrades the signal of both the legitimate and the eavesdropper nodes. In contrast, it is possible to generate a jamming signal, known at the legitimate node, that only adversely affects the eavesdropper's signal reception [68]. However, this type of jamming requires complex interference cancellation at the legitimate receiver to decode the codeword. Additionally, signals from other legitimate transmitters can also be used to degrade the eavesdropper's signal-to-interference ratio. However, such jamming scenarios are difficult to implement due to the time synchronization requirements between the multiple transmitting pairs. In this case, the helper nodes do not relay information but transmit jamming signals to confuse the eavesdropper. The noise is added by the helper nodes in a controlled manner, causing the noise to be nullified at the destination. This results in an increased secrecy capacity due to the degradation of the received signal at the eavesdroppers. A list of commonly used cooperative jamming techniques along with their secrecy capacity expressions is given in Table IV. Figure 5 compares the achievable secrecy rate under different jamming conditions for increasing values of the main link's average transmit power. One can observe from the figure that the lowest secrecy rate is achieved for the direct transmission case, which has no jamming by either the helper or the destination node. Moreover, in the direct transmission case, the largest secrecy rate is achieved when there is only a single eavesdropper. We consider that a total of 3 helper nodes exist in the network and the best jammer is selected based on the CSI of the jammer-eavesdropper link. It can be seen that the jamming can help in improving the secrecy performance of the system and proves to be more effective against the non-colluding eavesdroppers. Furthermore, destination-assisted jamming, along with the jamming from helper nodes, can also provide significant performance improvements against colluding and non-colluding eavesdroppers. Interestingly, it can be seen from the figure that more performance gains are achieved for destination & helper-assisted jamming when E=1. However, as E increases from 1 to 3, the secrecy rates for helper jamming and destination & helper-assisted jamming are similar to each other, which suggests that the impact of jamming reduces with an increase in the number of eavesdroppers.
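A minimal numerical sketch of this effect is given below (a hypothetical single-jammer formulation with illustrative powers and gains, not an expression taken from Table IV): when the artificial noise can be nulled or cancelled at the destination, only the eavesdropper's SINR degrades, which widens the secrecy rate gap.

```python
import numpy as np

def jamming_secrecy_rate(g_m, g_e, g_jd, g_je, p_s=10.0, p_j=5.0,
                         nulled_at_destination=True):
    """Secrecy rate with one friendly jammer emitting Gaussian noise."""
    # If the jamming signal is designed (or known) so that the destination
    # can remove it, the main link keeps its interference-free SNR.
    sinr_d = p_s * g_m if nulled_at_destination else p_s * g_m / (1 + p_j * g_jd)
    sinr_e = p_s * g_e / (1 + p_j * g_je)       # eavesdropper is always jammed
    return max(np.log2(1 + sinr_d) - np.log2(1 + sinr_e), 0.0)

g = dict(g_m=1.0, g_e=0.8, g_jd=0.5, g_je=1.2)   # illustrative channel gains
print("no jamming:  ", jamming_secrecy_rate(**g, p_j=0.0))
print("with jamming:", jamming_secrecy_rate(**g))
```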
TABLE III: Secrecy capacity expressions under different relaying protocols (HD and FD; one-way AF and DF) for colluding and non-colluding eavesdroppers [60], [61], [63], [64], [65].

III. SECURE RELAYING TECHNIQUES

Lately, secure communication via relays has drawn much attention. Since these relays are distributed, the geographical communication distance between the source and destination can be decreased, which improves the secrecy performance. The achievable secrecy rate and secrecy capacity have been evaluated under different source-relay-eavesdropper scenarios. In fact, based on their role in the network, these relays can be trusted entities or complete strangers (untrusted) to the communicating parties. Therefore, various modalities have been proposed to provide security using cooperative strategies for single- and multiple-antenna devices, as shown in Table V.
A. Untrusted Relays
In many practical cases, even when no external eavesdropper is present in the network, secure communication between two nodes using an intermediate relay can be a concern. The source and the destination may want to keep their communication secret from the relay, despite its willingness to cooperate [69]. This model has critical importance in government and defense intelligence networks, where all users do not have the same access rights [70]. Also, if the relay belongs to a different network, it will not be granted access to the information of the nodes of the other network. Several studies indicate that it is possible to securely transfer messages from a source to a destination using intermediate untrusted relays [71], [72], [73], [74], [75], [76]. In [77], the diversity order and capacity scaling for securely forwarding information in an untrusted relaying environment were investigated. A more realistic scenario was considered in [78] by introducing trust-degree-based cooperation. A typical scenario, where a node can act as both a relay and an eavesdropper, is given in Figure 6.
Here, the communication takes place in two time slots. During the first time slot, S_1 broadcasts its message to the untrusted relay R/E and the eavesdropper E, while S_2 may not receive the broadcast signal due to deep fading. Generally, the untrusted relay can use either the DF or the AF protocol to forward its message to S_2 in the second time slot. However, the AF protocol is generally preferred in this case, as S_1 may not want an untrusted relay to decode the message meant for S_2. The same signal is also overheard by E in the second time slot, and E may decide to combine the signals, or to use any one of them, to decode the message of S_1. In order to prevent E and R/E from decoding the secret message, several PLS approaches have been provided in the literature. Let us now briefly discuss some of these strategies for providing link security in untrusted relaying scenarios.
1) Coding Approaches: In the paradigm of untrusted relaying, secure transmission was studied in a multi-hop scenario in [79]. In this context, it was assumed that the direct link between the source and the destination does not exist and that communication between them is only possible through an intermediate untrusted relay. It was also assumed that each node can only communicate with its immediate neighbor to relay the information to the destination. This study focused on nested lattice codes and used the CTF protocol to relay information.
2) Beamforming Approach: Secure beamforming techniques can be used for providing link security, as shown in Figure 7. Particularly, beamforming is used to transmit signals toward a specific user, which degrades the SNR of the same signal at any other user. A MIMO relaying system was considered in [80] for transmission using the AF protocol. A two-hop scenario was considered in which the intermediate relay was not trusted. Specifically, the following two schemes were considered.
• Non-collaborative scheme: The intermediate node is assumed to be an external node.
• Collaborative scheme: The relay re-transmits using beamforming at the relay.
The results presented by the authors show that the collaborative scheme outperforms the non-collaborative scheme in terms of achievable secrecy when the SNRs of the source-relay and relay-destination links are low. In addition, the proposed schemes ensure higher secrecy compared to conventional beamforming.
3) Cross Layer Design: When communication is not possible between the source and destination without intermediate untrusted relays, an appropriate solution is the division of the untrusted relays into collaborative and non-collaborative relays [81]. In particular, if the source has to transfer information, the access of the intermediate nodes to that information must be minimized. When any untrusted relay receives the information, all other relays are assumed to overhear it. The entire information is divided into m data streams, with associated per-stream data rates R_1, R_2, R_3, ..., R_m. The desired transmission rate from the source to the destination is then given as $R = \sum_{i=1}^{m} R_i$. In another study [82], the authors proposed to use upper layer security, along with PLS, to improve the secrecy of the data being transferred through untrusted relays. The study showed that for AF relays, perfect secrecy of information is attainable, whereas for the DF protocol, a significant amount of information leakage occurs.
B. Trusted Relays
In the case of trusted relays, the eavesdroppers and relays are considered to be separate network entities. Some of the common relay-eavesdropper scenarios for HD, FD and successive relaying are provided in Figure 8. It can be seen from the figure that for HD-relaying techniques, the information transmission takes place in two time slots. The direction of communication can be either one-way (i.e., S 1 → R → S 2 ) or two-way, i.e., S 1 ←→ R ←→ S 2 . For both one-way and two-way cooperative relaying, there are two transmission modes, namely the relay reception mode for the first time slot and the relay transmission mode for the second time slot. Intuitively, for one-way relaying the eavesdropper can receive the same signal during the first and second time slots, which can be exploited to decode the secret message. In the case of two-way relaying, the eavesdropper receives the messages of S 1 and S 2 in the first time slot, and it receives the superimposed signals of S 1 and S 2 from R during the second time slot. For successive relaying, two relays are used to improve the throughput of the system. During the first time slot, S 1 transmits its message to R 1 while R 2 transmits its message to S 2 , assuming that R 2 received a message in the previous transmission. During the second time slot, S 1 again broadcasts its message, which is received by R 2 to forward to S 2 in subsequent time slots. If an eavesdropper lies in the communication range of S 1 , R 1 , and R 2 , then successive relaying may prove to be more susceptible to information leakage, since the eavesdropper has a better chance of decoding the messages received during the two time slots. Despite its hardware complexity, FD communication has several advantages over HD communication, such as increased ergodic capacity [118], [119], reduced end-to-end delays [120], reduced feedback delays [121], and improved network secrecy [122], [123]. Based on the usage of the frequency band, FD relaying can be divided into two types, i.e., FD-outband relaying and FD-inband relaying, as illustrated in Figure 8 (g) & (h), respectively. Self-interference cancellation techniques play a more vital role in FD-inband relaying, especially in the presence of eavesdroppers. However, the generated interference can act as artificial interference to minimize the leakage of information to the eavesdropper, while simultaneously improving the power efficiency of the system [123], [124]. Some other cooperative relaying strategies are reviewed in the following sub-sections.
TABLE V: Cooperative relaying strategies for PLS.

Relay Type | Ref. | Protocol | Antennas | Direction | Key Contribution
Untrusted | [83], [70] | AF & CF | Single | One-way | Proved that for a larger channel gain of the main link, the source does not need to transmit the message at higher power, as it would improve the relay's ability to decode
Untrusted | [84] | CTF | Single | One-way | Decoding by linear combination of the incoming signals instead of decoding them individually
Untrusted | [85] | CTF | Single | One-way | Showed that CTF works best with lattice codes to improve the secrecy capacity
Untrusted | [86] | CTF | Multiple | Two-way | Showed that the introduction of multiple antennas at the source nodes and the optimization of the transmit power improve the information security
Untrusted | [79] | CTF | Single | One-way | End-to-end secure communication via the joint use of wiretap codes, lattice codes, and a network coding scheme
Untrusted | [73] | AF | Single | One-way | Proposed a modulo-and-forward (MF) operation at the relay with nested lattice encoding at the source
Untrusted | [87] | AF | Single | One-way | Proved that a relay, whether or not it is chosen as a helper, acts as an eavesdropper, and that the performance of secure communication systems worsens as the number of relays increases
Untrusted | [88] | AF | Single | Two-way | Found that if one node's transmit power is much lower than the other's, then two-way relaying with AF strategies is the best choice
Untrusted | [89] | AF | Single | One-way | Derivation of the closed-form ergodic secrecy rate as well as the asymptotic expressions
Untrusted | [80] | AF | Multiple | One-way | Optimization of the transmit covariance matrices for secrecy enhancement
Untrusted | [90] | AF | Multiple | Two-way | Optimization of the covariance matrices for both relay-aided and direct communications
Untrusted | [91] | AF | Single | One-way | Derivation of a unified tight approximation and asymptotic expressions for the system secrecy outage probability with outdated CSI
Untrusted | [87] | AF | Single | One-way | Derivation of a lower bound of the ergodic secrecy capacity
Untrusted | [83] | AF | Single | One-way | Derivation of an upper bound of the ergodic secrecy capacity in the presence of a jamming node
Untrusted | [92] | CTF | Single | One-way | Derivation of genie-aided outer bounds on the secrecy rate regions
Trusted | [93] | NF, AF, DF | Single | One-way | Investigation of the optimal relay location between the source and destination in a relaying network
Trusted | [58] | CF, AF, DF | Single | One-way | Derivation of the optimal power allocation in closed form
Trusted | [94] | DF | Single | One-way | Proposed achievable secrecy regions using Cover and El Gamal's CF scheme
Trusted | [95] | AF | Multiple | One-way | Compared the benchmark nulling solution with local nulling
Trusted | [96] | DF, AF | Multiple | One-way | Comparison of the secrecy outage capacity of the AF and DF protocols
Trusted | [97] | DF | Multiple | One-way | Design of an optimal relay beamformer to maximize the secrecy rate and minimize the transmit power, respectively
Trusted | [98] | DF | Multiple | One-way | Proposed a joint generalized singular value decomposition (GSVD) precoding at the source and ZF-SVD precoding at the relay, and a power allocation scheme
Trusted | [61] | DF | Single | One-way | Revised Bellman-Ford algorithm for providing a secure route in a multihop scenario
Trusted | [99] | DF | Single | One-way | Maximization of the secrecy rate under a strict delay constraint
Trusted | [100] | DF | Multiple | One-way | Analysis of the impact of jamming on the bit error rate (BER) and throughput under imperfect CSI
Trusted | [101] | AF | Multiple | One-way | Secrecy rate maximization beamforming and null space beamforming for cooperative relays
Trusted | [102] | AF | Multiple | One-way | Design of a robust relay beamformer, including the optimal rank-one, MF, and ZF beamformers
Trusted | [103] | AF | Multiple | One-way | Joint transmit/receive beamforming at the relay
Trusted | [104] | AF | Multiple | One-way | Proposed a joint GSVD precoding at the source and ZF-SVD precoding at the relay, and a power allocation scheme
Trusted | [74] | AF | Multiple | One-way | Designed destination-aided precoding and optimized the performance using an iterative algorithm
Trusted | [105] | AF | Multiple | One-way | Optimal power allocation to maximize the secrecy rate in FD relays
Trusted | [106] | DF | Single | Two-way | Derivation of lower and upper bounds on the perfect secrecy rate
Trusted | [107], [108] | DF | Single | One-way | Analysis of the reliability and security tradeoff
Trusted | [109] | DF | Multiple | One-way | Secrecy analysis for multiple antennas and multiple relays
Trusted | [110] | DF | Single | Two-way | Provided a closed-form SOP expression under κ-µ shadowed fading
Trusted | [111] | DF | Single | One-way | Analysis of the reliability and security tradeoff
Trusted | [112] | DF | Multiple | Two-way | Establishment of secrecy capacity regions for discrete memoryless and MIMO Gaussian channels
Trusted | [113] | DF | Multiple | One-way | Proposed an SDP relaxation method and a suboptimal criterion for the precoding scheme
Trusted | [114] | DF | Multiple | One-way | Proposed space division multiplexing for the allocation of the maximal allowable power
Trusted | [115] | DF | Multiple | One-way | Proposed artificial noise (AN) precoding to minimize the power allocated to information transmissions
Trusted | [116] | DF | Multiple | One-way | Provided three approaches: 1) secrecy sum rate maximization (SSRM), 2) total transmit power minimization (TTPM), 3) minimum per-user secrecy rate maximization (MPSRM)
Trusted | [117] | DF | Multiple | One-way | Proposed a smart jamming algorithm in which a relay that is not assigned to any pair acts as a friendly cooperative jammer
Trusted | [41] | DF | Multiple | One-way | Proposed a joint relay and jammer selection scheme to select two or three intermediate nodes to enhance security against Eve

1) Relay Power Allocation: The required transmit power is one of the major concerns when a signal is transmitted across the network. A low power signal can increase decoding errors at the destination, due to the signal attenuation from path loss and fading. In contrast, a signal with high power improves the received signal strength at the intended receiver, at the cost of introducing significant interference for other receiving nodes. Therefore, optimum allocation of power is important not only from a communication point of view, but also from the secrecy perspective. The problem of optimum power allocation is analyzed in [125], where the authors proposed a convex optimization and a one-dimensional search method. Generally, the literature concerning power allocation considers the total power as a constraint on the objective function. A better approach is to focus on the power constraint of each individual relay. Thus, the authors in [125] maximize the secrecy rate while reducing the individual power consumption at the relays. A beamforming vector is formed, which increases the secrecy rate subject to individual power constraints.
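As a toy illustration of beamforming under individual relay power constraints, the snippet below nulls the relays' combined signal at the eavesdropper and then applies a common scaling so that every relay respects its own power cap. This is an illustrative baseline, not the optimization procedure of [125]; the channels are randomly drawn.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4                                    # number of cooperating relays
P_max = np.full(K, 1.0)                  # individual relay power limits
h_d = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)  # relay->D
h_e = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)  # relay->E

# Project the destination channel onto the null space of the relay->E channel
# so the relays' combined signal cancels at the eavesdropper.
proj = h_d - (h_e.conj() @ h_d) / np.linalg.norm(h_e) ** 2 * h_e
w = proj.conj()

# One common scaling so that |w_i|^2 <= P_max_i holds for every relay.
w *= np.sqrt(np.min(P_max / np.abs(w) ** 2))

print("per-relay powers :", np.round(np.abs(w) ** 2, 3))
print("gain toward D    :", round(float(np.abs(w @ h_d) ** 2), 3))
print("leakage toward E :", round(float(np.abs(w @ h_e) ** 2), 12))  # ~0
```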
In [126], the authors proposed an orthogonal frequency division multiplexing (OFDM) coordination strategy based on the Nash bargaining game (NBG). The problem of sub-carrier allotment to individual devices is formulated as a bargaining game between two players, in order to establish fairness in the process. The system model in [126] consists of an OFDM transmission system with two sources, their destinations, and an adversary. Every node acts as a source as well as a forwarding relay. The allotment of sub-carriers between the coordinating end-systems is then obtained through the NBG, together with an evolutionary game algorithm (EGA). Ultimately, the results are corroborated through simulations, which show improved secrecy rates compared to the direct transmission strategy.
In [127], the authors highlighted that link security methods are aimed at securing the signal, as opposed to securing the data. Since the channel is random in nature and cannot be controlled by the users, a design in which knowledge of the channel state is considered essential for the intended channel may not be suitable for many practical scenarios. Their system model was based on AF relaying, where the transmitter uses a single antenna while the receiver has multiple antennas. The receiver can operate in FD mode, so it can transmit and receive signals simultaneously. The authors then discussed the importance of the CSI for PLS procedures, along with the current challenges faced by security mechanisms based on CSI. For this purpose, they mentioned three important concepts: spatial domain utilization, transmission of intentional interference, and cyclic feature suppression.
Resource allocation under certain constraints can improve the secrecy performance under AF relaying [128]. The authors analyzed the PLS issue in an OFDMA-enabled two-hop model, based on multiple intermediate relays and a passive eavesdropper. The proposed system model in [128] considered a dual-hop transmission mode where the links from the BS to the users and from the BS to the adversary are unknown due to the large distance. Thus, all the legitimate users and the malicious node receive information messages merely through the relay nodes. Essentially, the sub-carrier allotment to individual users and the power thresholds over different sub-carriers at the transmitting nodes were optimized. Subsequently, a suboptimal solution was derived, supplying a significant gain over the conventional solution. The simulation results verified that the proposed scheme supersedes the conventional approach, and remarkable improvements were shown for different values of the network parameters.
2) Relay Selection: Another secrecy enhancement criterion is relay selection in the network. The optimal relay selection policy was adopted for the DF protocol in [129] and was shown to be far better than traditional max-min relay selection. In [108], an opportunistic relay selection policy was used. It has been proven that the secrecy outage probability reduces significantly when the number of DF relays in the network increases. Single and multiple relay selection schemes were considered for the AF and DF protocols in [41]. In addition, the diversity order for each scheme was presented in the paper. Buffer-aided relays, for the enhancement of PLS and transmission efficiency, were considered in [130] for two-hop relay networks.
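The gap between the conventional and secrecy-aware selection policies is easy to visualize numerically. The sketch below contrasts max-min selection, which ignores the eavesdropper, with a selection rule that maximizes the ratio of the main-link to wiretap-link rates; the DF bottleneck model, the SNR value and the fading statistics are assumptions, not parameters from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(3)
trials, K, snr = 50_000, 4, 10.0       # K relays, average SNR (assumed)

g_sr = rng.exponential(1.0, (trials, K))   # source -> relay k
g_rd = rng.exponential(1.0, (trials, K))   # relay k -> destination
g_re = rng.exponential(1.0, (trials, K))   # relay k -> eavesdropper

rows = np.arange(trials)

def avg_secrecy(k):
    gd = np.minimum(g_sr[rows, k], g_rd[rows, k])       # DF bottleneck link
    rs = np.log2(1 + snr * gd) - np.log2(1 + snr * g_re[rows, k])
    return np.maximum(rs, 0.0).mean()

# Conventional max-min selection ignores the eavesdropper entirely.
k_maxmin = np.argmax(np.minimum(g_sr, g_rd), axis=1)
# Secrecy-based selection maximizes the rate ratio using E's CSI.
k_secure = np.argmax((1 + snr * np.minimum(g_sr, g_rd)) / (1 + snr * g_re), axis=1)

print(f"max-min selection : {avg_secrecy(k_maxmin):.3f} bits/s/Hz")
print(f"secrecy selection : {avg_secrecy(k_secure):.3f} bits/s/Hz")
```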
A combined AF and CF scheme can be used for providing link security using cooperative relaying [131]. Here, the concept of broadcast is considered, so that instead of one transmitter and one legitimate receiver, many receivers are present in the network. The DF protocol can only be used if the relay nodes have good channel conditions. A node that serves as a relay may need a lower security clearance than the destination. The average probability of error is defined in the sense that the message is decoded in error. At the receiver side, the sliding window decoding algorithm is used. A DF relay selection methodology, where the main and wiretap links experience correlated fading, was proposed by the authors in [132]. In a companion work, the same authors proposed a relay selection scheme for correlated AF relaying. Although a few other works, including [133], [134], have considered secrecy under correlated fading, the correlated fading scenarios in Figure 9 can also be explored to provide further insights. Particularly, in Figure 9 (a), the direct link between the source S and D is assumed to be unavailable. The message is transferred with the support of the K intermediate relays, and correlation exists between the actual and estimated links. In Figure 9 (b), the source-relay and relay-destination links of the same relay are assumed to be correlated. Moreover, each relay is assumed to experience independent fading. Finally, in Figure 9 (c), correlation exists among the source-relay links and among the relay-destination links. Also, the links between source-relay and relay-destination are assumed to fade independently.
By considering high SNR, the authors in [135], [41] analyzed secure communication assuming perfect decoding on the source-relay link. However, this assumption completely ignores the reduction in data rates due to fading on the source-relay links. In contrast, the authors in [136], [137], [138] deviate from this assumption by considering imperfect decoding due to fading on the source-relay links. All of these papers derive secrecy outage probability expressions for DF relaying, whereas only [136] and [137] consider no direct link, while [138] considers a direct link between the source and destination and also focuses on relay selection schemes. The secrecy performance of dual-hop threshold relaying was evaluated in [139] for a single source, single destination, single relay and single eavesdropper scenario. Additionally, closed-form expressions for the secrecy outage probability and the ergodic secrecy capacity were derived.
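The secrecy outage probability under imperfect first-hop decoding can be estimated with a short Monte Carlo sketch along the lines of the threshold relaying model mentioned above. The parameters and the Rayleigh fading model are assumed; this is not a reproduction of the closed-form results of [139].

```python
import numpy as np

rng = np.random.default_rng(4)
trials, snr, Rs = 200_000, 10.0, 0.5     # target secrecy rate Rs (assumed)

g_sr = rng.exponential(1.0, trials)
g_rd = rng.exponential(1.0, trials)
g_re = rng.exponential(1.0, trials)

# Threshold relaying: the relay forwards only if the first hop supports
# the end-to-end target rate (imperfect first-hop decoding otherwise).
relay_decodes = 0.5 * np.log2(1 + snr * g_sr) >= Rs

# Half-duplex secrecy rate of the second hop, overheard by E.
Cs = 0.5 * (np.log2(1 + snr * g_rd) - np.log2(1 + snr * g_re))

sop = np.mean(~relay_decodes | (Cs < Rs))
print(f"secrecy outage probability: {sop:.4f}")
```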
Khandaker et al. in [140] propose a truth-telling based mechanism, where the relays are forced to tell the truth, otherwise they are penalized; this is also called an incentive control mechanism. The relay is selected from a group of relays interested in gaining the incentive. The incentive of energy harvesting from the signal allures the relays to transmit the message. The authors also provide a performance comparison of the incentive control mechanism with another power optimization algorithm.
The secrecy performance for an uplink scenario, where a relay is equipped with multiple antennas in a SIMO mixed RF/FSO system, was studied in [141]. More specifically, the impact of maximum ratio combining (MRC) and selection combining (SC) on the secrecy performance was evaluated when the relay combines the signals received at its different antennas. Of late, multiuser and multirelay selection strategies have been proposed by the authors in [142], [143]. The relays are considered to be able to perfectly decode information and transfer it to the BS, in the presence of an eavesdropper.
Shim et al. in [144] consider a generalized scenario in a multirelay network where a cluster of M sources transmits messages to a cluster of N relays, in the presence of a single eavesdropper and a single destination. The eavesdropper is assumed to utilize either MRC or SC to combine the signals during the source-to-relay and relay-to-destination transmission phases. It has been deduced that increasing the number of relays has more impact than increasing the number of sources.
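The difference between the two combiners at the eavesdropper can be quantified in a couple of lines: MRC adds the branch SNRs of the two overheard phases, whereas SC retains only the strongest one. Illustrative values and independent unit-mean Rayleigh branches are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
trials, snr = 100_000, 10.0              # per-branch average SNR (assumed)

g1 = rng.exponential(1.0, trials)        # copy overheard in the S->R phase
g2 = rng.exponential(1.0, trials)        # copy overheard in the R->D phase

snr_mrc = snr * (g1 + g2)                # MRC adds the branch SNRs
snr_sc = snr * np.maximum(g1, g2)        # SC keeps only the strongest branch

print(f"E[SNR] with MRC: {snr_mrc.mean():.2f}")
print(f"E[SNR] with SC : {snr_sc.mean():.2f}")
```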
Confidential transmission of messages for bidirectional communication was studied in [145]. A DF relay is used and strong secrecy capacity regions are established. It was demonstrated that the conventionally used weak secrecy capacity region coincides with the strong secrecy capacity region. The authors proposed an optimal relay selection scheme for AF and DF relaying in [41] for a given eavesdropper. Bao et al. in [146] extended the system model by introducing multiple eavesdroppers in the presence of multiple relays. The authors proposed three different relay selection protocols to exploit the diversity gains obtained using multiple relays. Afterwards, Yang et al. in [147] evaluated the secrecy performance of a downlink environment with a single BS, a single DF relay and multiple destinations. The authors considered a switch-and-stay combining scheme to improve the battery lifetime and scheduling complexity, while using an antenna selection scheme to reduce the leakage of information. MRC was used to combine the messages when the CSI of the eavesdropper is not available. The authors in [42] investigated the secrecy performance of large-scale MIMO relaying systems when the CSI of the wiretap channel is not available and the CSI of the main link is imperfect.
3) Relay Ordering: The authors in [148] proposed a strategy based on the work of [149], [150], where the relays are ordered according to their distance from the transmitter. To be more precise, the closest DF relay decodes its message first and then forwards it to the next relay. The same procedure occurs in a multi-hop fashion until the message reaches the destination. The authors first studied the secrecy performance in a single relay environment. They considered DF-based cooperation in a multi-relay network and proposed three different strategies based on ZF, as shown in Figure 10. In multi-relay single-hop schemes, the relays that receive the message from S directly forward it to D through cooperative beamforming. However, for K-hop and K/2-hop strategies, the transmission takes place in multiple hops. Particularly, in the K-hop strategy, the first relay R 1 forwards the received message to R 2 and D while R 2 forwards the same to R 3 and D, and so on. In contrast to single-hop communications, the K-hop scheme uses the partial ZF technique. In K/2-hop strategy, it was assumed that the total number of relays is even, and thus, these relays can be divided into K/2 clusters. In each cluster, there are two relays, wherein, the signal received by R 1 and R 2 in the first cluster is forwarded to R 3 and R 4 in the second cluster, and so on. The authors also proposed a suboptimal scheme for power control. Their results showed that it is disadvantageous to enable only partial ZF in every transmission block.
In a similar work [18], the authors analyzed two ordering policies with TAS. Closed-form expressions were derived for the outage probability of the secrecy capacity for each ordering scheme. The results reveal that TAS improves the secrecy rate compared to single-antenna systems, and that this gain grows with the path loss exponent. Moreover, the impact of the ordering policy reduces in higher path loss environments.
C. Unique Challenges of Secure Cooperative Relaying
Some challenges related to cooperative relaying are illustrated in Figure 11.
1) Determination of Trustworthiness: Trust in a communication network is generally defined by a particular metric. The degree of trust, in general, is the level of belief that one node has in another node for a specific action [151]. This degree of trust usually depends on the amount of available information (direct or indirect) from previous observations [152]. For instance, node j may observe that a node k usually forwards its messages. This type of information is direct; however, if the same information is received from any other node, then it becomes indirect. This methodology has critical inherent flaws from a secrecy point of view. First of all, a relay can perform bad-mouthing, i.e., broadcast false information [153]. Secondly, a relay can display conflicting behaviors, such as behaving differently toward a particular node or group of nodes. Doing so will result in gaining the trust of some nodes while becoming untrustworthy for others. Hence, a clear demarcation between relays, in terms of their trustworthiness, is imperative, because an untrustworthy relay introduces uncertainty as to whether it is an eavesdropper or a helper.
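One commonly used way to turn such observations into a numeric degree of trust is a Beta-reputation update; the sketch below is a generic formulation of this idea and is not prescribed by [151], [152].

```python
# s and f count observed cooperative and non-cooperative actions of a relay.
def trust(s, f):
    """Expected probability that the relay cooperates next time (Beta prior)."""
    return (s + 1.0) / (s + f + 2.0)

# Indirect (second-hand) reports are discounted before being merged, which
# partially mitigates bad-mouthing by malicious reporters.
def merge(s, f, s_ind, f_ind, discount=0.5):
    return s + discount * s_ind, f + discount * f_ind

s, f = merge(8, 1, 4, 6)       # 8/1 direct observations, 4/6 reported ones
print(f"degree of trust: {trust(s, f):.2f}")
```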
2) Position of Relay: Mobile networks face different challenges compared to static networks when it comes to the provisioning of link security, for the following two reasons:
1) The high speed of vehicles results in rapid changes in the channel coefficients [154], [155], [156], [157].
2) The positions of the source, relay and destination change quickly, resulting in the issue of node authentication.
The above-mentioned issues can be addressed by using robust CSI evaluation strategies, as CSI is the most important component of PLS. In addition, adaptive security protocols may be provided, along with the inclusion of the upper layers, to provide security in cooperative networks. Also, the position of the relay in a mobile network is a potential means of exploiting mobility. The position of a mobile relay, with respect to the positions of the source, destination and eavesdropper, is significantly important to ensure information-theoretic security. Traditional approaches assign the role of relays based on their position [158], [159], [143]. Particularly, the relays near the source are used to relay information to improve the SNR at the receiver. However, one major drawback of this approach is the assumption of fixed locations of the sources and eavesdroppers; the legitimate users and eavesdroppers need not be static at any particular position. This may also result in pilferage of information where relays are deployed as static entities. One possible solution is the deployment of mobile relays [160], [161], [162], [163]. In this context, the mobile relays can improve the secrecy by exploiting their flexibility to move in the network. Some of the major issues with this approach are that 1) the mobile relays should be aware of the number of eavesdroppers and their positions, and 2) the signaling overhead can significantly increase in order to ensure cooperation among relays.
3) Protection Against Multiple Attacks:
Many studies on PLS address either passive or active eavesdroppers. In the case of passive eavesdroppers, defense can be ensured using cooperative techniques. However, the case where both passive and active eavesdroppers are present in the network has not been investigated extensively. There is a need to ensure security using cooperative communication when a passive eavesdropper tries to listen to the transmission while an active eavesdropper tries to jam the transmission between legitimate users. In this context, the design of flexible cooperative protocols for providing link security is considered necessary.
4) To Relay or Not to Relay?: Although there are a number of advantages of using relays, there is an associated drawback of relaying techniques that cannot be neglected. The additional overhead needed for secure cooperative relaying has to be quantified. The user should be aware of the achievable rate of transmission and the associated delays and probing overheads prior to the initiation of communication. Intuitively speaking, the tradeoff between secure throughput, delay and signaling overhead should be well established to make the communication secure and worthwhile. A study that partly answers this question was performed by Gong et al. in [164]; however, a fixed number of relays in the network was considered. A secure relay selection and tradeoff evaluation, in the presence of different numbers of relays at different times, is yet to be explored.
5) Hardware Imperfections in Relays: Imperfect hardware responses, in the form of phase noise, imbalances in the in-phase and quadrature branches, and non-linear power amplification, can severely degrade the performance of relays. Only a handful of studies have considered such impairments in secure cooperative relaying [165], [166], [167], [168]. Although these studies have a remarkable impact on the PLS literature, they consider the said impairments separately. Moreover, these studies are limited to a single cell and to perfect channel estimation. It is therefore necessary to further investigate the joint impact of these hardware impairments on the secrecy performance of networks.
IV. COOPERATIVE JAMMING FOR SECURITY
The importance of AN in the area of PLS is enormous. In fact, if it is added in a controlled manner, it can make the whole difference between the way a signal is interpreted at the legitimate receiver and at the eavesdropper. The magnitude of the AN that is added to the signal is therefore an important concern. It may be noted that in an ideal case, the power to transmit signals should be minimized; however, in order to secure the message, additional power is spent in the form of AN. This calls for algorithms to be developed for optimum power allocation.
AN is an enabler of cooperative jamming (CJ) techniques. Jamming at the eavesdropper is generally performed using one or more of the techniques shown in Figure 12. In particular, Figure 12 (a) shows the case where a dedicated jammer J is employed to interfere with the eavesdropper's received signal. Since the dedicated jammer may not be a part of the legitimate transmission, the interference signal can also be received at D, thus degrading the secrecy performance of the system. However, some incentivized game-theoretic techniques with appropriate power allocation policies, also discussed later, can be used to improve the secrecy performance. In case a dedicated helper node is not available, it is then up to S or D to degrade the signal reception of the eavesdropper. Typically, this can be accomplished if either S or D is equipped with multiple antennas, as depicted in Figures 12 (b) & (c), respectively. Note that both S and D should secretly exchange jamming information in advance to avoid degradation of their secret communication. It is also worth mentioning that both S and D can be equipped with multiple antennas to simultaneously jam the reception of the eavesdropper, though for the sake of simplicity, we have only focused on minimal jamming requirements at S and D. Orthogonal jamming can be combined with AN to provide better secrecy performance [169]. This study proved that the secrecy rate can be increased and the SOP can be reduced by using orthogonal jamming, compared with the secrecy performance of AN alone. CJ is well suited when the eavesdropper has a single antenna. However, if the eavesdropper is equipped with multiple antennas, CJ may not work efficiently. This is one of the fundamental problems with jamming techniques: an eavesdropper with multiple antennas can use beamforming to cancel the interference and obtain a better signal-to-interference-and-noise ratio (SINR).
A brief summary of recently proposed CJ techniques, for the cases of both single and multiple eavesdroppers, is provided in Table VI.
A. Jammer Power Allocation
Power allocation between the main signal and the friendly jamming signal is one of the key criteria to increase the secrecy in CJ systems. In general, the optimal power allocation depends on the following two conditions:
1) Availability of the global CSI of the network entities at the source's side.
2) Availability of neither the statistical nor the instantaneous CSI of the eavesdropper.
If the available power is P_max and the transmit power is P_t, then a typical power optimization for the maximization of the achievable secrecy rate C_sec can be formulated as

$$\max_{P_t} \; C_{sec}(P_t) \quad \text{subject to} \quad 0 \leq P_t \leq P_{max}.$$

In regards to the above formulation, the following salient details can be provided:
• The optimal solution relies on the availability of global CSI, and the solution is typically tractable in quasi-static fading.
• The instantaneous solution is not tractable when only statistical CSI is available [170]. In that case, Jensen's inequality and specific bounds on the ergodic capacity can be exploited for optimal power allocation.
• A variety of factors also affect the optimal power allocation, including the spatial locations of the legitimate nodes and eavesdroppers, and the available maximum power.
Tang et al. in [171] focused on the secure downlink in multiuser scenarios and derived the closed-form expression for the optimum power when the transmitter has multiple antennas. The eavesdropper acts passively, and the users, as well as the eavesdroppers, have perfect knowledge of the CSI. Three precoding techniques were considered, i.e., channel inversion (CI), zero forcing (ZF) and regularized channel inversion (RCI). The authors noted that RCI performs better than the other two precoding techniques. It was found that the secrecy rate decreases with N and α.
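Under the global-CSI condition, the formulation above reduces to a one-dimensional search. The sketch below optimizes the fraction of the total budget assigned to the message versus the jamming signal for one fixed channel realization; all gains are assumed values, and the jammer is taken to be perfectly nulled at the destination.

```python
import numpy as np

# Fixed instantaneous channel gains (global-CSI case; values assumed).
g_sd, g_se, g_je, g_jd = 1.2, 0.8, 1.5, 0.0   # jammer nulled at D
P_max, N0 = 10.0, 1.0

def c_sec(rho):
    """Secrecy rate when a fraction rho of P_max carries the message."""
    Ps, Pj = rho * P_max, (1 - rho) * P_max
    snr_d = Ps * g_sd / (N0 + Pj * g_jd)
    snr_e = Ps * g_se / (N0 + Pj * g_je)
    return max(np.log2(1 + snr_d) - np.log2(1 + snr_e), 0.0)

grid = np.linspace(0.01, 1.0, 1000)
rho_opt = max(grid, key=c_sec)
print(f"optimal message fraction: {rho_opt:.2f}, C_sec = {c_sec(rho_opt):.3f}")
```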
The CSI, as discussed before, is very important for secure data communication. If the CSI of the eavesdropper is not known, then beamforming can be performed to retain security. In general, the CSI of a passive eavesdropper is not perfectly known. Li et al. verified that AN-aided beamforming, as shown in Figure 13, can considerably improve the secrecy capacity [172]. The same authors provided two solutions, namely the deterministic uncertainty model (DUM) and the stochastic uncertainty model (SUM). For the deterministic case, a semi-definite solution is proposed, and for the stochastic case, a suboptimal solution is provided. The authors solve the worst-case secrecy rate maximization (WC-SRM) problem for the DUM and the outage-probability-based secrecy rate maximization (OP-SRM) problem for the SUM. The DUM quantizes the CSI at the receiver and sends it back, whereas the SUM assumes the error to be Gaussian distributed. The DUM had been investigated in the literature before as well, but without AN. By combining AN and the DUM, the secrecy rate performance improved far above the case which only considered the DUM on the main channel. Similarly, the secrecy rate decreases for the SUM case as the variance increases.
Optimum power allocation for an AN-secured MIMO precoding system is considered in [173]. The authors derived a closed-form expression for the power allocation that maximizes the secrecy rate, and it was concluded that the tightness of the derived bounds depends on the number of transmit antennas. AN is used to degrade the eavesdropper's channel; in MIMO channels this scheme is also called "mask beamforming" or "mask precoding". AN precoding divides the total power P between the noise and the information signal. AN precoding ensures a positive secrecy rate even if the eavesdropper's noise variance approaches zero. The simulation results show that as the number of transmit antennas increases, the secrecy rate increases as well.
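A minimal sketch of this null-space AN ("mask beamforming") idea for a MISO link is given below: the information beam is matched to the main channel h, while the AN is confined to the null space of h, so only the eavesdropper is degraded. The antenna count, power split and channel draws are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, P, phi, N0 = 4, 10.0, 0.6, 1.0       # phi: fraction of P for the message

h = (rng.normal(size=Nt) + 1j * rng.normal(size=Nt)) / np.sqrt(2)  # to D
g = (rng.normal(size=Nt) + 1j * rng.normal(size=Nt)) / np.sqrt(2)  # to E

w = h.conj() / np.linalg.norm(h)         # information beam (matched to h)

# Orthonormal basis of the null space of h: the AN directions V satisfy
# h @ V = 0, so the legitimate receiver sees no artificial noise.
_, _, Vh = np.linalg.svd(h[None, :])
V = Vh[1:].conj().T                      # Nt x (Nt-1)

snr_d = phi * P * np.abs(h @ w) ** 2 / N0
an_at_e = (1 - phi) * P / (Nt - 1) * np.linalg.norm(g @ V) ** 2
sinr_e = phi * P * np.abs(g @ w) ** 2 / (N0 + an_at_e)

print(f"SNR at D : {snr_d:.2f}   (AN-free by construction)")
print(f"SINR at E: {sinr_e:.2f}")
```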
An AN-aided secure multi-antenna transmission scheme with limited feedback was provided in [174]. In particular, a multi-antenna scenario is considered with AN-aided beamforming plus feedback from a receive antenna. Again, the focus is on the connection outage constraint of the main link and the secrecy outage constraint of the eavesdropping link. An adaptive scheme for the coding parameters and the power allocation between the AN and the message data is considered. Recent work relaxes the requirement of CSI at the transmitter and allows a fraction of error to be present in the actual CSI. Such models may not be suitable for limited feedback channels, as they oversimplify the real-world difficulty of knowing the exact number of errors in the estimated CSI. Therefore, the same authors use limited feedback, accepting that AN may leak into the desired channel. A rate-adaptive transmission technique is given to cope with the leakage of AN. Specifically, if the feedback bits are significant in number, then more power is allocated to the data and less power to the AN.
Power-constrained optimal CJ for the multiuser broadcast channel was introduced to maximize the secrecy of the network in [175]. Optimal CJ is performed with friendly jammers to provide PLS. Here, the authors derive a lower bound on the eavesdropper's SNR and extend the asymptotic secrecy rate. In the considered CJ setup, the source transmits data to a legitimate receiver in the presence of eavesdroppers. In their work, the source, the jammer, and the legitimate receivers are assumed to have N, L, and K antennas, respectively, and the single eavesdropper has M antennas. The authors noted that for L − K < M, even with the inclusion of friendly jammers, secure communication is not possible. The simulation results show that by increasing the transmit power at the BS, the maximum SNR of the eavesdropper can be significantly reduced.
In [176], the authors used game theory to investigate the interaction between the source and the friendly jammers. Specifically, the source must pay the friendly jammers to interfere with the eavesdropper's reception. The authors investigated the price-performance trade-off and concluded that if the price set by the jammer is low, then the profit to the jammer will be low as well. However, if the price set by the jammer is too high, then the source may not buy at all. In addition, the authors also showed that centralized and distributed jamming schemes have a similar performance when the gain per unit capacity is significantly large.
In [177], the authors proposed a cooperative jamming approach by using a Stackelberg game in which the primary users act as leaders and the secondary users constitute the followers. Their proposed framework allows secondary users to transmit jamming signals with pre-specified probabilities and both the primary and secondary users are able to access the same channel in order to minimize the spectrum holes for secondary access. The evolutionary behavior of the system was modeled by a Markov chain and the Stackelberg equilibrium solutions were derived. However, the authors did not consider user fairness, which would require system modeling with a complex Markov chain.
B. Beamforming Approach
Among the techniques studied so far, cooperative beamforming (CB) is one of the most important. This technique is particularly relevant when there is no direct link from the transmitter. A study of CB for DF relays has already been conducted. For AF, the beamforming technique is difficult because of the noise amplification; however, techniques like CB and CJ come into play when there is a direct link from the source and the nodes only perform the jamming. The null space technique is used to nullify the AN at the receiver.
The AN can be combined with several other techniques to further enhance the secrecy of information. One technique is to combine AN with CB [178]. The goal is to optimize the AF matrix and the AN covariance for secrecy rate maximization. A polynomial-time optimization technique is proposed, based on two-level optimization and semi-definite relaxation (SDR).
The secrecy rate maximization problem is presented for AN-aided beamforming in multiple-input single-output (MISO) wiretap channels [179]. The authors assume that the CSI of the legitimate channel is perfectly known, while the eavesdropper's channel is a Gaussian random vector. The complete solution for secrecy rate maximization (SRM) with optimal power allocation is provided, while keeping in view the outage probability constraints. AN can effectively outperform an eavesdropper, since it encounters every interceptor (both passive and active). Moreover, AN can be generated knowing only the main channel. Previous solutions to the said problem are suboptimal. Sometimes AN may be injected into the main channel, if the channel is fast fading, to increase the ergodic secrecy. The AN-aided beamforming technique can be used to increase the secrecy performance in faded channels, significantly improving the secrecy rate.
If the channel between the transmitter and the receiver is weaker than that between the transmitter and the eavesdropper, then the secrecy rate is almost 0, and using a single antenna may not be a good approach. Therefore, the authors in [180] take advantage of weighted optimization, with the goal of assigning the optimal weights to the antennas and the optimal power to the wireless nodes. The authors consider a single transmitter, a trusted relay, an eavesdropper and a receiver. The source transmits the message signal and the relay transmits a weighted version of it.
In the presence of an eavesdropper, the secrecy rate is the figure of merit, and it depends upon the difference between the capacities of the main and wiretap channels. The secrecy rate increases with an increasing number of antennas, while the required transmit power is reduced. Intermediate nodes perform the function of adaptive beamforming as well as CJ.
The authors of [181] considered a network of multi-antenna legitimate and eavesdropper nodes and they proposed an optimal transmission strategy for this MIMO wiretap channel. The authors considered that the instantaneous CSI of the eavesdroppers was known at the transmitter, which could then perform power allocation between data transfer and broadcasting an interference signal. The authors modeled the interaction between the transmitter and jammer as a two-person zero-sum game and also considered the scenario where the players move sequentially under imperfect and perfect knowledge of their opponents' response. The authors demonstrated that changing a single parameter can significantly change the outcome of the Nash equilibrium.
In [182], Chu et al. formulated a secrecy rate optimization problem in the presence of cooperative jammers and a multi-antenna eavesdropper. The authors divided the convex optimization problem into two sub-problems: in the first problem the transmit covariance matrix was optimized, while in the second problem the covariance matrix of the cooperative jammers was optimized. Subsequently, it was proven that the revenue functions of the transmitter and the cooperative jammers are concave. The authors used a Stackelberg game to maximize the secrecy rate and provided the Stackelberg equilibria for the said game.
A virtual beamforming based jamming technique was proposed in [183]. The authors modeled the relationship between cooperative jammers and the source node by using a Stackelberg game in which the source paid cooperative jammers to transmit interference to the eavesdroppers. The jammers competed with each other to provide a reasonable price and the same was modeled as a non-cooperative game. By assuming a constant security rate between the source and the destination node, the equilibrium point for the pricing strategy was derived. Furthermore, a joint optimization strategy for power allocation and power pricing was derived. The authors showed that the power pricing and power allocation games converge to a single optimization point.
C. Jamming with Secure Key Exchange
In [184], the authors highlighted the limitations of traditional key exchange mechanisms in the application layer of the OSI stack. These mechanisms are processing-heavy and require a trusted mediator. Due to their eventually growing complexity, the prescribed techniques started showing poor performance. To minimize this effect, the authors proposed a novel key substitution technique at the physical layer, which is based on the concept of self-jamming and exploits the features of OFDM. Their system model consisted of a passive adversary between two legitimate users (transmitter and receiver) in FD mode. For the receiver, the FD mode served the purpose of acting as both a signal receptor and a jamming node. It compensated for the shortcomings of application layer secret key production/exchange techniques. Their simulations conclusively illustrated that a private key could be exchanged safely between the transceivers at a considerably low BER despite the existence of an adversary. The simulations showed that, with the increase in the BER at the eavesdropper, an adversary had to randomly guess the exact key. In other words, multiple trials had to be performed by an adversary in order to guess the exact key.
The authors in [185] examined the privacy of a PLS technique that uses induced artificial interference to protect the exchange of a secret key, allowing the receiver to sabotage random chunks of the propagated signal. The authors increased the eavesdropping capabilities of the adversary by fortifying the eavesdropper with several antennas. Moreover, the interference of the jamming signal with the useful signal depends on the positions of the receiving antennas, considering multipath propagation. In this context, they designed an algorithm to distinguish between the normal and jammed signal parts in order to unveil the transmitted signal. To validate their findings, they performed simulations and practical experiments in a software-defined radio environment, utilizing the wireless open-access research platform (WARP). They demonstrated that in an OFDM-based system, a multiple-antenna adversary/eavesdropper can easily degrade the privacy of the key exchange, and easily outperforms single-antenna ones.
D. Protected Zone Approach
We may sometimes be interested in providing security at a particular location; the intended receiver may even be known to be located in a particular region. Therefore, instead of providing security over the entire area, we may be interested in providing security only in a section of that area. AN can be used to ensure such protected zones [186]. The secrecy zone is defined based on the transmission power and a stochastic approach, to provide secrecy to the target zones. The authors deploy a protected zone around the transmitter, and for this protected zone, the radius is the parameter to be optimized. The assumption here is that the transmitter has multiple antennas while both the receiver and the eavesdropper have a single antenna.
The case where the AN is generated by the intended receiver was considered in [187]. This approach removes the need for CSI feedback, and there is no need for the number of eavesdropper antennas to be smaller than the number of legitimate receiver antennas. A geometric secrecy concept was introduced so that a given geographical region can be protected. If the eavesdropper is passive, then a probabilistic model is used for the CSI, and the secrecy outage region can be defined. The SOP changes with the movement of the eavesdropper: as the eavesdropper moves closer to the transmitter, the outage increases, and it decreases as the eavesdropper gets closer to the receiver.
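The dependence of the secrecy outage region on the eavesdropper's position can be sketched numerically. In the toy geometry below (assumed powers, path-loss exponent and Rayleigh fading), the transmitter sits at the origin, the receiver at unit distance, and the receiver-generated AN is cancelled at the receiver itself while degrading an eavesdropper moved along the axis; the outage is highest near the transmitter and falls near the receiver, in line with the behavior described above.

```python
import numpy as np

rng = np.random.default_rng(7)
trials, P, P_j, N0, alpha, Rs = 100_000, 10.0, 5.0, 1.0, 3.5, 0.1

def sop(x_e):
    d_te, d_de = abs(x_e), abs(x_e - 1.0)     # distances T->E and D->E
    g_d = rng.exponential(1.0, trials)        # T->D link at unit distance
    g_e = rng.exponential(1.0, trials) * d_te ** -alpha
    g_j = rng.exponential(1.0, trials) * d_de ** -alpha
    # D cancels its own AN; E suffers it as interference.
    Cs = np.log2(1 + P * g_d / N0) - np.log2(1 + P * g_e / (N0 + P_j * g_j))
    return np.mean(Cs < Rs)

for x in (0.2, 0.5, 0.8):
    print(f"E at x = {x:.1f}: SOP = {sop(x):.3f}")
```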
A protected zone is an area free from any eavesdroppers that contains only trusted relays. The authors define a protected zone close to the transmitter; if any eavesdropper comes close, a higher level of security is then needed. A weighted normalized cost function (WNCF) is considered for the optimal power allocation and the radius of the protected zone. As demonstrated, increasing the power of the information signal alone may not be very beneficial; an optimal method of power distribution between the information signal and the AN is needed instead. The size of the protected zone decreases with power, and for a high target secrecy and minimum power, the size of the protected zone reaches its maximum. Eventually the power reaches a state where no additional power is needed to increase the secrecy.
On the other hand, AN can be used to perform authentication, thus enhancing PLS [188]. The CSI of the legitimate user is employed for authentication purposes, as it obviously differs from the eavesdropper's CSI. AN is added to the received signal to enhance the security performance in time-variant channels. The probabilities of miss-detection and false alarm, as functions of the Doppler spread, were studied. As the Doppler spread increases, both probabilities increase, showing the negative effect of channel variability on the secrecy performance.
Nabil et al. in [189] investigated a novel transmission scheme by incorporating the known location of the eavesdropper. They also assumed that the transmitter has incomplete information about the channel state of the legitimate receiver. The authors also defined protected zones to provide spatial secrecy against eavesdropping attacks. The security is improved by allocating the optimal transmission power and by varying the size of the protected zone. The authors finally quantified the amount of power required to prevent an eavesdropping attack at close quarters.
The authors in [190], similarly to protected zones, introduced the concept of guard zones, and a comparative analysis of guard zones and AN was provided. In particular, the authors derived a closed-form expression for the threshold on the density of the eavesdroppers, for both the guard zone and AN techniques. This helped to establish that the guard zone technique performs better when the distance between the legitimate users is greater than the derived threshold.
E. Partial Jamming
Partial jamming is an emerging paradigm for the design of efficient jamming strategies. It works on the assumption that an eavesdropping node is not capable of deciphering the secret message by decoding only a part of the transmitted signal. More specifically, a friendly jammer transmits an interference signal in specific time slots to prevent the eavesdropper from receiving the complete signal [191]. Thus, the eavesdropper may not acquire the complete information, due to receiving jamming signals in certain time slots. Note that the partial jamming technique is different from the aforementioned jamming designs that perform jamming for the entire communication duration; it is also different from the partial jammer selection techniques [192], [193], [194], [195] that select jammers based on the availability of partial CSI for the main or the wiretap links. Figure 14 shows the partial jamming operation in a two-way relaying scheme. The figure shows that communication takes place in two time slots: during the first time slot, both legitimate nodes S and D transmit their messages to the relay R, while the jammer J broadcasts its signal to R and E. However, R can remove the jamming signal before message decoding, as it has a priori knowledge of the jamming signal. During the second time slot, as shown in Figure 14 (b), J refrains from jamming to conserve its power, while R broadcasts its received superimposed signals of S and D. Since S and D already know their own messages transmitted during the first time slot, these can be easily removed from the composite signal received at S and D. In contrast, E may find it difficult to decode the message of either S or D, due to its receiving only partial information during the first time slot.
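A simplified one-way variant of this idea can be simulated directly: the sketch below compares the eavesdropper's achievable rate when the jammer spends all of its energy in the first slot against spreading the same energy over both slots. The powers are assumed values, and MRC at the eavesdropper is taken as an upper bound on its combining.

```python
import numpy as np

rng = np.random.default_rng(8)
trials, P, P_j, N0 = 100_000, 10.0, 5.0, 1.0   # assumed powers/noise

g_se = rng.exponential(1.0, trials)     # S -> E, overheard in slot one
g_re = rng.exponential(1.0, trials)     # R -> E, overheard in slot two
g_j1 = rng.exponential(1.0, trials)     # J -> E during slot one
g_j2 = rng.exponential(1.0, trials)     # J -> E during slot two

# Partial jamming: all jamming energy spent in slot one only.
snr_p = (P * g_se / (N0 + P_j * g_j1), P * g_re / N0)
# Full jamming with the same total energy split across both slots.
snr_f = (P * g_se / (N0 + 0.5 * P_j * g_j1), P * g_re / (N0 + 0.5 * P_j * g_j2))

# Upper bound on E's rate: MRC across the two overheard slots.
rate = lambda s: (0.5 * np.log2(1 + s[0] + s[1])).mean()
print(f"E's rate, partial jamming: {rate(snr_p):.3f} bits/s/Hz")
print(f"E's rate, full jamming   : {rate(snr_f):.3f} bits/s/Hz")
```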
Since partial jamming is a relatively new concept, limited work has been done so far to investigate its secrecy performance. In [196], the authors proposed to combine watermarking techniques with the iJAM jamming mechanism [197]. According to the iJAM design, the legitimate transmitter broadcasts its message twice and the legitimate receiver randomly jams the broadcast message. In this way, only the legitimate receiver knows which of its symbols were jammed and can discard them, whereas the eavesdropper remains oblivious of this information. The authors noted that an eavesdropper requires phase correction information between the sender and the receiver to completely decode the symbols. Moreover, they showed that a larger secrecy capacity can be achieved with their proposed design when compared with another benchmark protocol, namely the watermark-based blind physical layer security (WBPLSec) protocol.
More recently, Chensi et al. analyzed partial jamming in the worst-case scenario where the eavesdroppers' CSI is not available at the legitimate nodes and the eavesdroppers' node density is larger than the density of the helper nodes [198]. The authors concluded that the jamming should be performed during the first time slot, as the information leakage is more dominant during this slot, while the signals are overlapped during the second time slot. The authors also showed that single-time-slot jamming is more power-efficient than dual-time-slot jamming. Their complexity analysis of partial jamming techniques showed that the number of floating point operations required for partial jamming is less than that required for full jamming.
It can be deduced from the aforementioned discussion that partial jamming is suitable for power-constrained systems. However, more research efforts are required to address design issues such as how to minimize the impact of the jamming signal on the legitimate receiver, when to send the jamming signal, and how to deal with the diversity/cooperation of eavesdroppers.
F. Exploitation of Cross-Layer Opportunities
One promising technique to improve security in cooperative networks is the use of cross-layer approaches. The deployment of authentication at different layers can potentially increase the security of cooperative networks. However, due to this cross-layered security, the feedback overhead and the complexity of the hardware can considerably increase. Consequently, it is necessary that the tradeoff between complexity and security be well defined. The authors in [199] highlighted all of the aspects of PLS schemes with respect to the space, time and frequency domains. Since wireless networks are not secure enough for a reliable transmission of data, the authors focused on highlighting threats and attacks including tampering, leakage of private information, interference from unintended users, network flooding, jamming and eavesdropping. The techniques suggested to solve these security issues are the Yarg code and the amplify-and-forward compressed sensing (AF-CS) method.
The alignment of sub-messages can be helpful in increasing the secrecy of information. In this secrecy enhancement technique, the transmitter divides the message into M sub-messages. Each helper also sends a jamming signal to confuse the eavesdropper. The M sub-messages can be separated at the legitimate receiver, due to their irregularities. Also, each CJ signal is aligned with a message signal. This alignment ensures that the information leakage to the eavesdropper is minimal. Hence, each message signal is protected at the eavesdropper by one of the jamming signals. However, this scheme requires the CSI of the eavesdropping and legitimate links to align the message and jamming signals [200], [201].
The problems of analyzing the characteristics of signals and random processes may be solved by probabilistic approaches. A stochastic approach may very well be applied to situations involving PLS [202]. The BS locations are Poisson distributed, whereas the legitimate and eavesdropping nodes are assumed to be randomly distributed. The authors assume a downlink scenario and an orthogonal multiple access technique. Many nodes are intended receivers while others are eavesdroppers. The secrecy rate here depends on the eavesdroppers' density, and as the eavesdroppers' density decreases, the secrecy capacity significantly increases.
If the legitimate receiver has more antennas, the results are even more valuable. In this case, the signal reception of the eavesdropper can be jammed using a special noise with a band-limited probability density function (PDF) [203]. The focus of earlier studies is mostly on an asymptotic approach, whereas in this work the eavesdropper's reception is jammed using this special noise. As long as the main channel is better than the wiretap channel, a positive secrecy rate can be maintained, but this does not guarantee perfect secrecy as per the Shannon criteria. The characteristics of the additive noise matter; therefore, band-limited additive noise was considered by the authors. This is different from the Gaussian channel and provides possibilities for designing encoders which can improve the secrecy of the transmitted information. The AN is sent intentionally by the legitimate party and adds to the AWGN of the channels, so that an overall noise is received by both the legitimate receiver and the eavesdropper. It was found that with the selection of a proper jamming distribution, the secrecy can be significantly enhanced.
A joint physical and application layer security scheme is considered for the provisioning of security in [204], where signal processing is employed at the physical layer, and authentication and watermarking at the application layer, as shown in Figure 15. This is mainly because PLS measures neglect application layer security measures, and vice versa. A cross-layer security measure would be the best solution to jointly cope with the issue of security. In PLS, the focus is mainly on the secrecy rate, and the CSI is generally needed to calculate the secrecy rate. Since the full CSI is generally not available, a quasi-static fading channel is assumed in most of the work. Another technique is the information processing approach (IPA), where different kinds of noises and signals are added to confuse the eavesdropper and enhance the secrecy rate. Two main tasks are performed in application layer authentication:
1) Who transmits the message? (Identification)
2) Has the transmitted message been altered or not? (Authentication)

Fig. 15: Joint physical-application layer security scheme [204].
In another work, joint channel characteristics were investigated for a PLS technique [205]. The problem of untrusted relays is considered, and the joint channel characteristic, i.e., source-to-relay and relay-to-destination, is exploited. AN is then added, and both internal and external eavesdroppers are dealt with. Optimal power transmission is also taken into consideration: the scheme combines both the source-to-relay and relay-to-destination channels, extracts the joint channel characteristics, adds AN, and calculates the secrecy capacity and the optimum signal power. Again, the transmitter is a multi-antenna device and there are multiple relays, each with a single antenna. The message is first encoded into a Gaussian random variable, and the symbols are further processed by a matrix. The channels are considered flat fading. The relays are assumed to operate with the AF protocol, and the CSI is assumed by the authors to be known globally. Simulation results show that the secrecy rate is improved when the AN lies in the null space of the legitimate receiver.
G. Unique Challenges of Secure Cooperative Jamming
We now highlight some of the particular challenges of cooperative jamming to enhance PLS, as given in Figure 16.
1) Incentive Based Jamming: Although several studies have investigated cooperative jamming [230], [210], [231], [194], [232] and destination-assisted jamming [233], these studies assume that the jammers (helper nodes) in the network are generous enough to provide their services without any incentive. In practice, a dedicated helper node is difficult to realize, as nodes tend to make independent and selfish decisions in large-scale networks. Game-theoretic approaches can be used to partly understand this interaction [176], [234], [235], [182], [236]. It is pertinent to mention that even these studies do not consider real-time fluctuations of node locations, and the results are obtained under suboptimal precoder assumptions. Thus, the study of the complex interaction between jammers and other network entities and parameters is still an open issue, and it should be the focus of future research work.
2) Cooperative Jamming under Correlated Channels: It has been commonly observed that fading conditions are quite similar when two nodes are closely separated in the space or time domain [237], [238]. Most studies on PLS assume the channel between the jammer and the destination and that between the jammer and the eavesdropper to be independently distributed. This is an oversimplification that may not hold for most cooperative scenarios, because fading correlation has a significant impact on the secrecy performance [239], [240]. In addition, the system performance under correlated fading for a multi-antenna jammer can differ notably from the case where independent fading is assumed. It is therefore essential to quantify the performance tradeoffs under correlated fading.
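A short Monte Carlo experiment makes the danger of the independence assumption visible (a hedged toy model; the correlation structure, powers, and the conditioning rule are our illustrative choices): when the jammer-to-eavesdropper and jammer-to-destination links are correlated, realizations in which the eavesdropper is strongly jammed increasingly coincide with strong residual jamming at the destination.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def destination_rate(rho, jam_power=5.0, sig_power=10.0):
    """Mean destination rate when the jammer's two links have correlation rho."""
    u = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h_je = u                                        # jammer -> eavesdropper
    h_jd = rho * u + np.sqrt(1 - rho**2) * w        # jammer -> destination (correlated)
    # Keep only realizations where the eavesdropper is strongly jammed ...
    mask = np.abs(h_je) ** 2 > 1.0
    # ... and measure the residual jamming the destination then suffers
    # (unit main-channel gain assumed for simplicity)
    sinr_d = sig_power / (1.0 + jam_power * np.abs(h_jd[mask]) ** 2)
    return np.log2(1 + sinr_d).mean()

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: mean destination rate {destination_rate(rho):.3f} bits/s/Hz")
```

As rho grows, the destination rate in exactly the realizations where jamming helps most shrinks, which is the penalty an independent-fading analysis silently ignores.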
3) Inaccurate Power Allocation under Imperfect/ Unavailable CSI: It has been stated previously that the availability of CSI for all nodes across the network, including the eavesdroppers, results in the maximum secrecy rate. In practice, however, the legitimate nodes may have only limited or no access to the CSI of the eavesdropper, especially if the latter operates in passive mode. This issue is more concerning for jamming nodes, because power allocation schemes for cooperative jamming usually depend on perfect channel estimation. For instance, the CSI of the legitimate user is usually obtained by feedback. A handful of studies on jamming have evaluated the secrecy performance under imperfect channel estimation for the legitimate channel [241] and for the main and jamming links [242], [182], [243]. In addition, these works derive lower or upper bounds, and closed-form expressions for the aforementioned scenarios are largely missing in the literature. Moreover, during feedback transmission from the legitimate user, the eavesdropper can also obtain the information and use it to adopt a more destructive interception strategy. It is therefore necessary to further investigate the impact of channel estimation errors, especially for the case of colluding eavesdroppers, and to design optimal and secure CSI feedback mechanisms.
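The sensitivity of CSI-dependent designs to estimation error can be sketched as follows (a generic Gauss-Markov error model with MRT beamforming, chosen for illustration and not taken from [241]-[243]): as the error variance grows, the gain that a power allocation or beamforming scheme believes it has diverges from the gain actually achieved.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 8, 50_000                                          # antennas, Monte Carlo samples

def mean_bf_gain(err_var):
    """Beamforming gain when the MRT weight is computed from a noisy channel estimate."""
    h = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
    e = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
    h_hat = np.sqrt(1 - err_var) * h + np.sqrt(err_var) * e   # Gauss-Markov error model
    w = h_hat / np.linalg.norm(h_hat, axis=1, keepdims=True)  # MRT on the estimate
    return np.mean(np.abs(np.sum(w.conj() * h, axis=1)) ** 2) # achieved |w^H h|^2

for ev in (0.0, 0.1, 0.3):
    print(f"estimation error variance {ev}: mean beamforming gain {mean_bf_gain(ev):.2f}")
```

With perfect estimation the mean gain equals N; under growing error variance it decays, which is exactly the mismatch that corrupts jamming power allocation decisions.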
4) Standardization of Cooperative Jamming: In order to minimize the gap between research efforts and the practical implementation of device cooperation, standardization is necessary. It is considerably difficult to standardize friendly jamming under different network topologies because of the decision-based nature of jammers, which may either cooperate or stay independent. For instance, a node a can cooperate with a source node s to jam the signal of a node x (a potential eavesdropper for s) for a particular time. Later, node a may want to send a message to node x (both being part of the same network). In this simple scenario, how should node x react to the request of a, given that a transmitted jamming signals only a few moments earlier? Situations like this demand a dynamic standard for cooperative jamming, which is still nonexistent, partly owing to the novelty of cooperative jamming strategies. It is therefore an important direction for future research work.
5) Cooperative Jamming under Multi-cell Environments:
There is no denying that notable strides have been made to improve link security using the above-mentioned cooperative jamming techniques, yet a large part of this work is limited to single-cell environments only. Extending these jamming schemes to a multicellular environment can reveal many of their deficiencies; e.g., it is more difficult to obtain the CSI and location of an eavesdropper if it lies in a nearby cell or if it is moving from one cell to another. To the best of our knowledge, very limited work has been done to investigate the secrecy performance under multi-cell environments [244]. Hence, in view of its practical importance, considerable attention needs to be paid to proposing dynamic jamming protocols.

Summary of cooperative jamming studies and their key contributions:
• Proposed a novel design for the optimal jamming covariance matrix to maximize the secrecy rate and mitigate the loop interference associated with FD operation.
• [206] Proposed an efficient suboptimal algorithm for the majorization of system parameters to avoid a global search, and treated the practical case without availability of the eavesdroppers' CSI.
• [207] Proposed an optimal power allocation algorithm for the jamming noise.
• [208] Proposed a transmission scheme that maintains a scaling law of the achievable secrecy rate to maximize the secrecy performance.
• [209] Proposed a power allocation scheme considering imperfect CSI of the nodes to maximize the secrecy rate.
• [210] Proposed an optimal jamming noise structure under global CSI whose secrecy rate performance is very close to the optimal one.
• [211] Derived the optimal source covariance matrix that maximizes the secrecy rate subject to outage probability and power constraints.
• Derived closed-form expressions for the optimal weights and power allocation to maximize the difference in SNR between the destination and the eavesdropper.
• [214] Proposed a distributed mechanism for a jamming participation algorithm that compensates non-cooperative nodes with an opportunity to use a fraction of the legitimate parties' spectrum for their own data traffic.
• [169] Proposed a novel CJ method to prevent eavesdroppers from using beamformers to suppress the jamming signals.
• [215] Proposed a destination-assisted jamming and beamforming (DAJB) scheme to improve PLS, together with an optimal power allocation algorithm that solves a second-order convex cone programming (SOCP) problem jointly with a linear programming (LP) problem.
• [216] Proposed optimal and suboptimal power allocation schemes for maximizing the achievable secrecy rate subject to a total power constraint.
• [217] Provided solutions for allocating optimal weights along with the optimal power distribution, solved via semidefinite and geometric programming.
• [218] Proposed a CJ strategy to deal with eavesdroppers located anywhere in the wireless network, and introduced jammer placement algorithms that optimize the total number of jammers.
• [219] Proposed a secrecy sum-rate maximization based matching algorithm between primary transmitters and secondary cooperative jammers; the conventional distributed algorithm (CDA) and the pragmatic distributed algorithm (PDA) are modified to maximize the secrecy sum rate of the primary user.
• [220] Proposed a social-aware cooperative jamming strategy along with an optimal power allocation scheme.
• [221] Proposed the single-channel multi-jammer (SCMJ) and multi-channel single-jammer (MCSJ) models, and derived a closed-form expression for the optimal price strategy at the Bertrand equilibrium.
Multiple eavesdroppers:
• [222] Proposed a two-hop transmission protocol to ensure secure and reliable big data transmission in wireless networks with multiple eavesdroppers.
• [223] Formulated a stochastic geometry based analytical model for the case where the locations of the eavesdroppers are unknown.
• [224] Proposed a friendly jammer-assisted user pair selection (FJaUPS) scheme to improve the security-reliability tradeoff.
• [225] Proposed a Gauss-Jacobi iterative algorithm to compute a Stackelberg equilibrium.
• [226] Derived a closed-form expression for the secrecy outage probability, established the condition under which a positive secrecy rate is achievable, and provided a secure transmit design that maximizes the secrecy rate under a secrecy outage probability constraint.
• [227] Derived the feasibility condition for achieving a positive secrecy rate at the destination to solve the secrecy rate maximization problem; an iterative algorithm is developed to obtain the optimal power allocation at the jammers.
• [193] Proposed a heuristic genetic algorithm based solution, followed by low-complexity optimization solutions considering the upper and lower bounds of power allocation.
• [228] Proposed a suboptimal power allocation solution for jammer nodes under various scenarios of eavesdropper and destination locations.
V. HYBRID COOPERATION SCHEMES

The above sections consider cooperative relaying and jamming separately; however, there exist studies in PLS that jointly exploit the advantages of relaying and jamming, as given in Table VII. To cover these studies, we provide an overview of hybrid techniques that jointly employ relaying and jamming strategies.
A. Joint Relay/ Jammer Selection
One important area of research in PLS is secure relay and jammer selection. The secrecy outage probability can indicate whether a relay can be trusted or not [135]. In most of the literature, relay and jammer selection is either not performed or, if performed, it is broadcast to the other relays, possibly reaching an eavesdropper. The destination, having information only about the main link and the statistics of the eavesdropper, selects the optimal relays and jammer. Each node computes a channel coefficient and compares it to a threshold: if it is above the threshold, the node acts as a relay; if below, it acts as a jammer. The optimal relay and jammer can be selected by an exhaustive method; specifically, the greedy method and the vector alignment technique are used for optimal relay and jammer selection. The SOP decreases as the authors compare the different cases, namely no jammer, random selection, the greedy method, the vector alignment method, and finally exhaustive search. The results show that even though exhaustive search gives the best SOP, it requires a very thorough search.
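A minimal sketch of the threshold-based role assignment described above is given below; the threshold value, the exponential gain model, and the simple per-role argmax selection are illustrative stand-ins for the greedy and vector alignment methods of [135], not a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(4)
K, thresh = 8, 1.0

# Per-node channel power gains (Rayleigh fading implies exponential gains); toy values
g_rd = rng.exponential(1.0, K)     # node -> destination
g_re = rng.exponential(1.0, K)     # node -> eavesdropper (only statistics known in practice)

# Threshold test decides each node's role
is_relay = g_rd > thresh

# Pick the relay with the strongest link to the destination, and the jammer
# with the strongest link toward the eavesdropper (greedy-style tie-breaking)
relay_idx = np.flatnonzero(is_relay)[np.argmax(g_rd[is_relay])] if is_relay.any() else None
jam_idx = np.flatnonzero(~is_relay)[np.argmax(g_re[~is_relay])] if (~is_relay).any() else None

print("roles:", ["relay" if r else "jammer" for r in is_relay])
print("selected relay:", relay_idx, " selected jammer:", jam_idx)
```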
Information-theoretic security performance was investigated in [258], [259], [87], [260], [261], [262] for AF relaying and destination-assisted jamming. He et al. in [83] deduced that a positive secrecy rate can be achieved in the presence of an untrusted relay; particularly, the authors considered destination-assisted jamming during the source-to-relay communication. Huang et al. considered friendly jamming approaches along with relaying to achieve secure communication [263]. Wang et al. in [258] considered the case of best relay selection to improve the secrecy performance in the presence of multiple eavesdroppers. The best relay selection strategy was compared with a suboptimal scheme to combat eavesdroppers, and the system model was then extended to incorporate a friendly jammer in the network. It was concluded, through simulation and analytical results, that the secrecy can be increased by increasing the SNR between the relay and the destination and between the jammer and the eavesdroppers.
The authors in [264] proposed a game-theoretic model by formulating two Stackelberg games to solve the problem of secrecy rate maximization. It was proved that a Stackelberg equilibrium exists, which was corroborated by simulation results, and the equilibrium was found to be an efficient solution for maximizing the secrecy rate. In addition, the authors proposed a multi-jammer assistance strategy to save energy while providing improved secrecy in wireless networks.
According to Ding et al., CJ can be used to enhance PLS by performing antenna selection along with AN [265]. A pair of source nodes, a relay, and an eavesdropper are considered, with all nodes having multiple antennas. AN is added, and the performance is evaluated as more AN and joint antenna selection are introduced to improve the secrecy rate. As the magnitude of the AN increases, the eavesdropper's channel is degraded and the secrecy rate increases. Simulations were performed for a number of scenarios, and it was found that the probability of a zero secrecy rate reduces as the number of antennas increases.
The authors of [266] proposed a novel transmission scheme for energy harvesting untrusted relays, as shown in Figure 17. Particularly, if the instantaneous secrecy rate of the main link lies above a targeted secrecy rate, the direct single-hop transmission (DSHT) mode is selected; otherwise, the cooperative relaying dual-hop transmission (RDHT) mode is used. In the DSHT mode, the jammer injects a jamming signal to degrade the reception of the relaying nodes while causing no interference on the main channel. If the RDHT mode is selected, then during the transmission of the message from the source to the relay, both the destination and the jammer transmit jamming signals. During the second slot, if only one of the relays is active, that relay is selected to transfer the message to the destination; however, only the jammer transmits a jamming signal during this time, while the destination node refrains from confusing the relays. The authors noted that there is a tradeoff between the amount of harvested energy and the secrecy performance. Specifically, they showed that the SOP increases for RDHT when the relays are closer to the source; however, the amount of harvested energy is then significantly higher. On the contrary, if the relays are farther from the source and closer to the destination, the SOP decreases, as the first hop becomes the bottleneck for RDHT and limited energy harvesting takes place at the relays. Ibrahim et al. in [267] proposed three categories of relay and jammer selection for a two-way cooperative communication scenario. The authors also introduced schemes to overcome the negative effects of interference. They concluded that the cooperation of eavesdroppers further degrades the secrecy outage performance, and it was also shown that two-way relaying outperforms one-way relaying schemes.

Table VII. Summary of hybrid relaying and jamming studies and their key contributions:
• Proposed transmit weight optimization of CR and CJ with and without the availability of the eavesdroppers' CSI.
• [248] Derived closed-form jamming beamformers and the corresponding optimal power allocation; also proposed GSVD-based secure relaying schemes for the transmission of multiple data streams.
• [249] Proposed a sequential parametric convex approximation (SPCA) algorithm to locate the Karush-Kuhn-Tucker (KKT) solution for maximization of the ergodic secrecy rate.
• [250] Proposed a power allocation technique for transmitting jamming signals, secondary messages, and relaying messages such that the secrecy capacity of the primary system is maximized subject to minimum secondary-user transmission rate requirements.
• [251] Proposed a heuristic algorithm for the joint problems of subcarrier assignment, subcarrier pairing, and power allocation under CJ to maximize the secrecy sum rate subject to a limited power budget at the relay.
• [252] Proposed a worst-case robust design considering imperfect CSI of the eavesdropper to obtain distributed jamming weights, solved through a semi-definite program (SDP).
• [253] Proposed an optimal relay selection scheme for (1) full CSI, (2) partial CSI, and (3) statistical CSI cases; exact and approximate secrecy outage probability expressions are derived in closed form.
• [254] Proposed a joint relay and jammer selection scheme and derived a closed-form suboptimal solution to maximize the secrecy rate.
• [255] Proposed an adaptive cooperative relaying and jamming secure transmission scheme to protect confidential messages, where the legitimate receiver adopts energy detection to detect the jamming-aided eavesdropper's action and a cooperative node aids the secure transmission through cooperative relaying under jamming and eavesdropping attacks.
• [256] Proposed a bi-level optimization algorithm for deriving the optimal jamming and beamforming vectors.
• [257] Proposed a scheme for jointly optimizing bandwidth and time in DF relaying while satisfying the secrecy requirements through jamming.
Multiple eavesdroppers:
• [135] Derived a closed-form expression for the secrecy outage probability and developed two relay and jammer selection methods for minimizing the secrecy outage probability.
• [16] Derived a closed-form solution for the optimal power allocation and proposed a simple relay selection criterion under the scenarios of non-colluding and colluding eavesdroppers.
PLS highly depends on the CSI; perfect knowledge of the CSI is required to ensure that the eavesdropper obtains as little of the legitimate information as possible. In most PLS work, the CSI is assumed to be known at the transmitter. In practice, the CSI can only be estimated at the receiver, while its knowledge is required at the transmitter so that the AN can be targeted at the eavesdropper and nullified at the legitimate receiver. Feeding back the CSI from the receiver to the transmitter is common practice, but this feedback takes time, and since the channel is time-varying it may affect the performance of secrecy algorithms [268]. The authors consider the DF protocol for the intermediate relays under Rayleigh fading. In the first step, the transmitter forwards the signal to the relays, while in the second step, the relays forward the signals. The SOP and secrecy rate depend on the delay with which the CSI is fed back, and it is demonstrated that the SOP increases as the feedback delay increases.
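The effect of feedback delay can be reproduced with a small simulation (an illustrative toy model rather than the exact DF system of [268]; rho is the correlation between the fed-back and the actual channel, which shrinks as the feedback delay grows): selecting the best relay from stale feedback leaves the actual secrecy rate increasingly exposed, so the SOP rises as rho falls.

```python
import numpy as np

rng = np.random.default_rng(5)

def secrecy_outage(rho, K=4, n=100_000, snr_m=15.0, snr_e=5.0, rs=1.0):
    """SOP when the best relay is chosen from outdated feedback (correlation rho)."""
    h_old = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
    eps = (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
    h_now = rho * h_old + np.sqrt(1 - rho**2) * eps       # channel when transmission occurs
    g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    sel = np.argmax(np.abs(h_old) ** 2, axis=1)           # selection uses stale feedback
    gain = np.abs(h_now[np.arange(n), sel]) ** 2
    c_s = np.log2(1 + snr_m * gain) - np.log2(1 + snr_e * np.abs(g) ** 2)
    return np.mean(c_s < rs)

for rho in (1.0, 0.9, 0.5):                               # rho = 1 means no feedback delay
    print(f"rho={rho}: secrecy outage probability {secrecy_outage(rho):.3f}")
```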
B. Joint Power Allocation
For wirelessly powered networks, the secrecy performance was evaluated by Xing et al. in [269]. Multi-antenna AF relaying was considered for a hybrid receiver architecture in which the received power is split into two streams for energy harvesting and information decoding. The communication is conducted in two phases: the first phase is used for energy harvesting, and the power reserved during this phase is used for jamming in the second phase. Note that the authors assume that the density of eavesdroppers is known at the relay while the CSI of the eavesdroppers is not available. The results reveal that the maximum ergodic secrecy rate is achieved for shorter relay-destination distances.
For the case of multiple helpers, the secrecy performance of the harvest-and-jam (HJ) protocol was evaluated in [270]. Similar to the above-mentioned approach, the radio signal is transferred to an AF relay during the first phase, which is used to harvest energy and decode information. A group of helpers, equipped with multiple antennas, use the harvested power to generate AN and degrade the reception of an eavesdropper. The secrecy rate is maximized by optimizing the covariance matrix, and its performance is compared with heuristic schemes. It has been shown that the performance of the proposed scheme improves significantly when the helpers are equipped with a larger number of antennas.
Similar to the work of [270], Xing et al. in [271] provided an optimal solution to reduce the complexity of the receivers. The authors also presented a semi closed-form solution to perform null-space jamming. For the case of perfect CSI availability, the authors show that the derived semi-definite relaxation (SDR) closely follows the simulation results; in contrast, for the case of imperfect CSI, a suboptimal rank algorithm was provided. Xiao et al. in [272] studied two eavesdropping conditions of an untrusted relay, i.e., active eavesdropping and non-active eavesdropping. Subsequently, the authors used Lagrange duality methods to decompose the optimization problem into two subproblems; in particular, they jointly optimized the power splitting ratio while minimizing the outage probability. It was shown by the authors that the system performance improves in the non-active mode as compared to the active mode.
C. Joint Relay/ Jammer Beamforming
A two-way relay network offers a practical case, and the study of PLS in such networks is quite interesting. Hybrid cooperative beamforming and jamming can be combined with two-way relays; the intermediate terminals act as relays and also perform the beamforming [273]. The secrecy rate is increased by optimizing the weights of the beamforming and jamming vectors. Here the authors assume a transmitter-receiver pair, N relay nodes, and J eavesdroppers. Each node has a single antenna and the relay operates in FD mode. The communication channel between all possible pairs is assumed to be flat fading. One communication round is split into a broadcasting phase and a beamforming phase. In the first stage, the sources broadcast the message signal (two-way), while the relay broadcasts a weighted jamming signal to confuse the interceptor. The receiver has full CSI, so it can separate the original message signal from the jamming signal.
CB and CJ can be combined to perform efficiently by selecting the optimal nodes to serve as relays and jammers. In this regard, an SDR solution to the secrecy rate maximization problem is considered feasible. The secrecy scaling laws for a large number of nodes were analyzed in [111]. The authors in [274] designed optimal precoding matrices for a MIMO relay channel. Similarly, source GSVD and relay SVD precoding was introduced in [98] to improve the secrecy performance of cooperative networks.
With two-way relay nodes, a rather practical scenario is considered, and with a hybrid technique, the multi-antenna requirement is reduced and the secrecy rate is improved. The secrecy rate can be improved when beamforming and jamming are used jointly [275]: CB improves the secrecy capacity of the transmitter-to-receiver channel, whereas CJ degrades the rate of the wiretap link. Multiple nodes perform CB to increase the secrecy rate. Because of the two-phase transmission, the eavesdropper gets two chances to intercept the message and degrade the quality of the main link; this is countered by some intermediate nodes performing beamforming while others perform jamming. Null-space beamforming is used to optimize under the power constraints of all terminals.
For energy harvesting networks, destination-assisted jamming for an untrusted intermediate relay was studied in [276]. Specifically, the destination node uses the energy harvested during the first transmission phase to jam the transmission of the untrusted relay. The results demonstrated in the paper unveil that the power splitting policy achieves better secrecy performance than the time switching policy.
D. Unique Challenges of Hybrid Cooperative Approaches
Let us now discuss some of the challenges pertaining to joint utilization of relays and jammers, as shown in Figure 18.
1) Handling Large Amounts of Data: The integration of hybrid cooperative strategies comes at an excessive cost of repeated data processing. With the demand for higher data rates, the cooperation of nodes must be flexible enough to handle large amounts of data. This amount of data processing would undoubtedly increase energy consumption, creating a bottleneck if devices start dying more frequently. Consequently, more effort should be put into designing protocols with enhanced capabilities and efficiency.
2) Integration with Upper Layers: One of the interesting directions for joint relaying and jamming is to exploit higher layers for efficient routing of messages. So far, research efforts in relaying and jamming have mostly been limited to physical layer techniques. However, there is a need to emphasize routing schemes that minimize intermittent connectivity issues during relaying and jamming, because the reachable neighbors of a node can vary rapidly in mobile networks. Moreover, intermittent connectivity can also result in the rerouting of messages through intermediate relays. The secrecy performance of these hybrid schemes under retransmissions and under the influence of upper-layer anomalies is still unexplored, and understanding it is essential from the design perspective of practical systems using these hybrid schemes.
3) Frivolous/ Unpractical Assumptions in Hybrid Techniques: The existing hybrid techniques in the PLS literature usually consider a very basic premise with a specific number of nodes. It is mostly assumed that the distance between the jammers is relatively smaller than the distances between the source, eavesdropper, and destination [254]. Some works consider perfect knowledge of the wireless channels [277], [135], [278]. These assumptions, though necessary for tractable analysis, oversimplify the system model to an extent that it no longer remains practical. Moreover, the optimization strategies proposed under these assumptions may not produce the same results or work effectively if implemented on hardware.
4) Synchronization of Jamming and Relaying:
One of the most neglected factors in hybrid techniques is the time synchronization of the jammer and the relays. It is worth mentioning that these hybrid techniques generally consider block fading models, in which the channel remains unchanged for a particular coherence time while it changes randomly from one block to another. This assumption is fairly genuine, yet devices far away from each other may not observe fading blocks of the same length; as a result, time synchronization based on the block fading model is not very practical. Moreover, the hardware requirements for precise synchronization and the impact of imperfect timing synchronization remain largely unknown.
VI. FUTURE APPLICATIONS OF COOPERATIVE PLS
One of our paper's goals is to understand how cooperative relaying and jamming can be applied to different forthcoming wireless technologies to improve PLS. More specifically, we discuss the recent studies and highlight some key challenges and issues in system design.
A. Wireless Information and Power Transfer Cooperative Networks
Recently, simultaneous wireless information and power transfer (SWIPT) has generated significant research interest from academia and industry [279], [280], [281], [282]. It can increase the lifetime of energy-limited devices in the network. In fact, SWIPT extends the functionality of conventional wireless networks by concurrently transmitting power and information to the receiver. The source transfers the power and information signal in a unicast (dedicated) or multicast scenario, as shown in Figure 19. In point-to-point communication, this approach is not very feasible if the same receiver is used for information decoding (ID) and energy harvesting (EH), because the power receiver needs to be placed near the source owing to its lower sensitivity. Therefore, instead of point-to-point communication, intermediate relays can be used to improve the performance of the network in terms of lifetime and reliability. The relay can be charged using one of the following methods: 1) a dedicated power transfer source, or 2) a power splitting (PS) [283], [284], time switching (TS) [285], or antenna selection (AS) [286] receiver architecture. If the relay uses a dedicated power source, the secrecy rate at the relay is similar to that of conventional relaying networks. In contrast, when the relay uses the received RF signal to simultaneously harvest energy and decode information, the secrecy rate depends on one or all of the PS/AS/TS factors.
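The core tradeoff of the PS receiver architecture can be captured in a few lines (all link-budget numbers below are assumed for illustration): routing a larger fraction of the received power to the harvester charges the node faster but lowers the SNR left for information decoding, which in turn bounds the achievable secrecy rate.

```python
import numpy as np

# Illustrative link budget (all values assumed): received RF power (W),
# RF-to-DC conversion efficiency, and noise power (W) at the information decoder
p_rx, eta, noise = 1e-3, 0.6, 1e-9

for rho in (0.2, 0.5, 0.8):                   # fraction of received power routed to EH
    harvested = eta * rho * p_rx              # power available for charging (W)
    snr_db = 10 * np.log10((1 - rho) * p_rx / noise)
    print(f"rho={rho}: harvested {harvested * 1e6:.0f} uW, decoding SNR {snr_db:.1f} dB")
```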
A fundamental three-node network model was considered in [287], in which the information eavesdropper was also a legitimate network node but with the limited role of only harvesting energy from the received RF signal; the eavesdropper, however, additionally engaged in the unauthorized activity of intercepting the secret communication between the legitimate nodes. An optimization strategy was proposed for transmit beamforming to improve the secrecy of the legitimate users. This model was extended to a four-node scenario in [288], [110], and the secrecy performance was analyzed. In particular, as shown in Figure 20, the authors of [110] noted that a large power splitting factor increases the intercept probability. Furthermore, the authors also concluded that for smaller values of the time-allocation factor, the system's secrecy performance can be ensured by allocating less power at the relaying node for energy harvesting and more power for decoding the received information. The authors in [270], [289], [271] improved the secrecy performance under the PS architecture for ID and EH using AF relaying and jamming techniques. In a similar way, a robust security scheme using AN was provided in [290]. The case of multiple power receivers, i.e., multiple energy harvesting eavesdroppers, was considered in [291], wherein two problems were addressed: 1) maximization of the secrecy rate for the information receiver, and 2) maximization of the transferred energy subject to a secrecy rate constraint. Some recent studies also show that security and energy efficiency in power transfer networks can be improved by deploying friendly jammers. The authors in [292] improved the PLS by introducing an extra jamming node in the network; more specifically, CJ optimization was performed for the worst-case secrecy rate. The authors in [266] provided a cooperative relaying and jamming scheme for untrusted dual-hop energy harvesting AF relays. Cooperative relaying to enhance the secrecy of SWIPT devices was studied in [293], where the authors consider multiple antennas to minimize information leakage and maximize the harvested energy. Massive MIMO systems with SWIPT were studied in [294], and it was illustrated that large antenna gains can be used to improve the efficiency of the transferred power; moreover, high-resolution beamforming was used to significantly decrease the pilferage of information. Similarly, an energy-efficient mechanism in SWIPT-enabled massive MIMO was studied in [295], and an efficient power allocation scheme was proposed to provide link security.
B. Massive MIMO
Massive MIMO is a type of multi-user MIMO scheme in which the BS is equipped with hundreds of antennas and can serve tens of users in the same time-frequency block. In [296], the authors proved that with a very large number of antennas at the BS, even simple linear processing performs near-optimally; e.g., by using MRC in the uplink or maximum-ratio transmission (MRT) in the downlink, the effects of fast fading, inter-cell interference, and uncorrelated noise almost vanish in the limit of a large number of BS antennas. The key question here is whether a significant multiplexing gain can be obtained with low-complexity and low-cost signal processing techniques.
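The "vanishing" effects claimed for simple linear processing can be checked numerically; the sketch below (for i.i.d. Rayleigh channels, a standard textbook setting rather than the exact model of [296]) shows channel hardening (the normalized channel energy concentrating around its mean) and favorable propagation (user channels becoming asymptotically orthogonal) as the antenna count grows.

```python
import numpy as np

rng = np.random.default_rng(6)
trials = 2000

for m in (8, 64, 512):                                    # number of BS antennas
    hard, cross = [], []
    for _ in range(trials):
        h1 = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
        h2 = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
        hard.append(np.linalg.norm(h1) ** 2 / m)          # -> 1 (channel hardening)
        cross.append(np.abs(np.vdot(h1, h2)) / m)         # -> 0 (favorable propagation)
    print(f"M={m}: std of ||h||^2/M = {np.std(hard):.3f}, "
          f"mean |h1^H h2|/M = {np.mean(cross):.3f}")
```

Both quantities shrink roughly as 1/sqrt(M), which is why MRC/MRT alone suppresses fading fluctuations and inter-user interference at a large number of antennas.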
In contrast to traditional MIMO, massive MIMO systems face more stringent security challenges. Firstly, the process of CSI estimation in massive MIMO is complex due to the large number of antennas. Secondly, the antennas may experience correlated fading due to potentially small inter-element spacing, so the conventional independent fading models cannot be used; this also complicates the derivation of analytical expressions. Lastly, pilot contamination can adversely affect channel estimation and thereby degrade the secrecy performance of the system. To partly address this issue, the authors in [297] derived an asymptotic expression of the secrecy capacity by jointly using AN precoding and ZF in massive MIMO systems. Zhu et al. in [298] analyzed the secrecy capacity for multi-cell massive MIMO systems. For cooperative massive MIMO systems, the analysis of the secrecy outage was performed in [96], [42]. Wang et al. in [299] provided the optimal power allocation factor to reduce the secrecy outage probability; the same work also proposed directional jamming towards the eavesdropper and concluded that it outperforms conventional omnidirectional jamming in massive MIMO systems. A jamming signal detection strategy for massive MIMO networks was provided in [300], where it was concluded that a receiver node can more efficiently reject the jamming signal if its desired signal has a significantly larger power level than that of the jamming signal.
In a similar study [301], the authors considered a single cell with a single-antenna jammer that could adjust its transmit power, and proposed a receiver filter to reduce the impact of jamming in the considered scenario. The generalized case of a multi-cell environment in the presence of an N-antenna jammer was considered in [302]. For such scenarios, the BS must estimate N channels and subsequently cancel the interference from these channels; in this case, the exploitation of frequency/time offsets or subspace methods may prove more suitable for jamming signal rejection.
C. Internet of Things (IoT)
It is estimated that there will be up to 20 billion IoT devices by 2020, and these devices are predicted to generate a revenue of US$ 9.8 trillion [303]. Despite its applications in health, autonomous drones, and logistics, there are still weaknesses in the core design and implementation of IoT. The first weakness arises from the configuration of IoT devices [304], [305], [306]: reconfiguration is a delicate process, as it needs to be performed over a secure channel, or an eavesdropper may acquire sensitive information such as keys and device associations. The second challenge comes from the lack of a well-defined topology in IoT networks and from the fact that an IoT network may consist of both static and mobile nodes. Consequently, it is not always possible to arrange devices in a specific order to fully exploit the cooperation of nodes at the physical layer.
The work on link security of IoT networks is still in its early stages, and only a limited number of studies exist in the literature [307]. Pecorella et al. in [308] proposed a physical layer based method to improve the link security and reliability of data without extra hardware or increased complexity; however, the proposed scheme was found to be more suitable for near-field transmission scenarios. Work on cooperative schemes to ensure the secrecy of messages in IoT is still essentially non-existent, and more effort needs to be directed towards the physical layer aspects of IoT.
D. Spectrum Extension (mm-Wave/ FSO)
In recent years, research efforts have been made to exploit spectrum bands not used in earlier generations of networks. A very promising solution for future 5G cellular networks is mmWave communication [309]. The mmWave band contains a wide range of carrier frequencies, operating over 3-300 GHz, and provides short-range, high-bandwidth (multi-gigabit-per-second) connectivity for cellular devices. The mmWave band has several desirable features, including large bandwidth, compatibility with directional transmissions, reasonable isolation, and dense deployability. However, mmWave channels suffer from significant attenuation due to the inability of short mmWave wavelengths to diffract around obstacles, and an interruption of line-of-sight (LoS) communication due to a moving obstacle can lead to link outage [258]. Further, the limited penetration capability can restrict mmWave connectivity to a confined space; for example, outdoor mmWave signals may be confined to outdoor structures, such as car parks or streets, with only limited signal penetration inside buildings [258], [310]. Recently, Gong et al. considered two-way relaying for the mmWave band in [311], [312]. The secrecy performance of ad-hoc networks using the mmWave band was evaluated by Zhu et al. in [313], [310], while [314] designed precoders for MISO OFDM systems in the mmWave band.
Similar to mmWave, Free-Space Optical (FSO) communication [315] is also expected to provide many enhancements in bandwidth utilization. However, like mmWave, FSO faces challenges due to heavy rain and fog, and its performance degrades completely in non-LoS conditions [316]. In the context of PLS, [141] discussed link security in a hybrid RF/FSO multiuser relay network. More specifically, the security-reliability tradeoff was discussed by the authors, followed by opportunistic scheduling schemes. It was found that information leakage takes place if the eavesdropper is located near the receiver or transmitter, without affecting the reliability of the information transfer. A link security analysis for an FSO system over Malaga turbulence channels was provided in [317]. Similarly, the secrecy performance of an RF/FSO system was evaluated in [12], where the authors consider Nakagami-m fading for the RF link and Gamma-Gamma fading for the FSO link.
E. Device-to-device (D2D) Communication
A rapid increase in the density of devices has pushed up the demand for data rates in wireless communication. In this regard, D2D communication, which allows direct communication between two user equipments (UEs), has emerged as a prominent technology for upcoming cellular networks [318], [319], [320]. Specifically, proximity gains can be used to improve energy and spectrum efficiency. This brings up the challenge of high interference to conventional cellular user equipment (CUE), causing performance degradation. However, this interference can also be used to significantly enhance the link security of cellular networks [321], [322], [323]; when generated by a friendly D2D jammer, it can confuse the eavesdropper, allowing the D2D user to transmit frequently. The authors in [324] jointly optimized access control and transmit power subject to secrecy constraints for CUEs, where the links were subjected to eavesdropping attacks. They then extended this work by applying the same optimization strategy to large-scale D2D networks in [325], introducing a scheduling strategy for D2D links to improve their secrecy performance. For the case of multiple eavesdroppers and multiple antennas, Chu et al. in [326] considered a downlink D2D communication scenario and proposed a robust MISO beamforming technique to maximize the secrecy rate and minimize the transmit power under transmission rate constraints. A generalized M x N relay-assisted D2D scenario was considered in [327], where N represents the number of clusters of devices and M denotes the number of devices in each cluster. It was unveiled that the SOP increases with an increase in the number of hops; in contrast, it decreases as the number of devices in each cluster increases.
F. Non-orthogonal multiple access (NOMA)
Non-orthogonal multiple access (NOMA) is deemed to be a revolutionary advancement for future 5G networks. NOMA allows users with better wireless channel conditions to apply successive interference cancellation (SIC) to remove the messages of other users and subsequently decode their own messages [328]. From the perspective of cooperative NOMA, the performance of near and far users has been extensively investigated in [329], [330], [331], [332], [333], [334], [335], [336]. From the perspective of PLS, however, limited work exists on cooperative NOMA. A cooperative NOMA scheme was considered in [337], where the authors showed that the diversity order of the system is determined by the secrecy performance of the user with the poorer channel; they also proposed enlarging protected zones to enhance secrecy. In [338], the authors proposed protected zones around the BS by using channel ordering, as shown in Figure 21, and concluded that the secrecy performance of NOMA can be improved by generating AN at the BS and using pre-specified protected zones; exact and asymptotic closed-form expressions of the SOP were also derived. Despite these efforts, work on secure communication in cooperative NOMA architectures is still in its early stages, and the secrecy enhancement of users with poor channel conditions is a critical problem. More recently, the authors in [339] proposed to enhance the secrecy performance of two-way FD relaying in a NOMA-based system. They derived closed-form expressions for the ergodic secrecy rate with and without eavesdropper collusion, and it was also shown that the link reliability increases with the number of antennas at the FD relay. In [340], the authors showed that the asymptotic secrecy outage probability of DF and AF NOMA-based relays becomes constant at high SNR values. Interestingly, they also noted that the secrecy performance of a cooperative NOMA system is independent of the channel between the far user and the relay.
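The SIC mechanism underlying NOMA can be made concrete with a two-user toy example (all powers and channel gains below are assumed values): the far user decodes its message while treating the near user's signal as noise, whereas the near user first decodes and removes the far user's signal and then decodes its own, interference-free.

```python
import numpy as np

# Two-user downlink power-domain NOMA; all gains and powers are assumed toy values
p, a_far, a_near = 1.0, 0.8, 0.2        # more power to the far (weak) user
g_far, g_near, noise = 1.0, 10.0, 0.1   # channel power gains and noise power

# Far user decodes its own signal, treating the near user's signal as interference
r_far = np.log2(1 + a_far * p * g_far / (a_near * p * g_far + noise))

# Near user first decodes the far user's signal (must support r_far), removes it
# via SIC, and then decodes its own signal interference-free
r_far_at_near = np.log2(1 + a_far * p * g_near / (a_near * p * g_near + noise))
r_near = np.log2(1 + a_near * p * g_near / noise)

assert r_far_at_near >= r_far           # SIC feasibility for this channel ordering
print(f"far-user rate {r_far:.2f}, near-user rate {r_near:.2f} bits/s/Hz")
```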
G. Cognitive Radio Networks
Spectrum scarcity is one of the most researched issues in wireless communications. In this context, the cognitive radio (CR) network has been considered a potential contender to address this issue. A CR network works by allocating the same spectrum to a secondary network if the QoS is not degraded or the spectrum is idle [341], [342], [343]. Since various devices are allowed to access the spectrum, CR networks are inherently vulnerable to eavesdropping attacks [344], [345], [346]. The link security of both primary and secondary users has been extensively discussed in recent years [347], [348]. A four-node scenario was considered in [349], where transmit beamforming was used for a multi-antenna CR transmitter; three suboptimal and lightweight solutions were provided owing to the non-convexity of the utility function. Maji et al. evaluated the secrecy of a CR network and provided relay selection schemes under the influence of interference from neighboring nodes [350]. The problem of information interception in a multiuser cooperative CR network was considered in [351]. Interestingly, Pareto resource allocation policies were designed for 1) maximization of energy harvesting efficiency, 2) minimization of transmit power, and 3) minimization of the ratio between power leakage and total transmit power. Cooperative relaying was studied for CR in [352]; in particular, the relays were given two roles, i.e., information relaying and CJ. Cooperative relaying for energy harvesting cognitive networks was studied in [353] for the PS architecture. For AN-aided EH cognitive networks, a precoding scheme was provided in [351] to maximize the secrecy rate while minimizing interference. Similarly, opportunistic cognitive relaying was studied in [354], where one relay transmits the information to the destinations while the other relay acts as a jammer against the eavesdropper. The authors provided four relay selection policies based on different combinations of best and random relay selections, and it was shown that the secrecy outage saturates when jamming relays are not present. The case without jamming was considered in [355], where the authors provided an in-depth analysis of the security-reliability tradeoff for different relay selection schemes. As demonstrated, the tradeoff between reliability and security can be reduced by increasing the number of relays and by adopting a proper relay selection method.
In the domain of cognitive networks, machine type communication (MTC) or machine-to-machine (M2M) communication has gathered considerable research interest. It refers to the exchange of information among devices without involvement or intervention of humans. It has numerous unique attributes that include distinct service environment, infrequent transmission of data and large-scale distribution. Recently some studies have investigated this domain, from the perspective of PLS [356], [357]. The future directions include minimization of power consumption when providing hop-to-hop link security and estimation of local and global CSI for secure route selection.
VII. CONCLUSION
This survey provides detailed, transparent, and precise information regarding the latest developments in the use of cooperative techniques for improving PLS. Moreover, it offers a classification of different cooperative techniques, along with a discussion of their merits and demerits. The article also presents and elaborates different hybrid approaches and their associated challenges. Based on the above arguments, the following key conclusions can be drawn:
• More research effort needs to be focused on exploiting relay positioning and formulating efficient trust metrics.
• Cross-layer schemes can be used to gain more benefits from secure cooperative schemes.
• To efficiently utilize existing cooperative schemes, social models and incentive-based techniques need to be designed.
• Hardware implementation of hybrid cooperative schemes remains a challenge due to weak time synchronization between relays and jammers.
• Multi-cellular designs need to be further investigated for the practical realization of secure cooperative PLS architectures.
Conclusively, this article provides readers with an opportunity to appreciate the significant and rapid advances in the cooperative PLS literature, which is a growing area of wireless communication. We hope this survey will trigger and motivate interested readers to concentrate their research efforts on the design of secure cooperative PLS schemes for 5G networks.
"year": 2019,
"sha1": "ee516486b9ef49b4a20ec100ab28fdea5f95ecb8",
"oa_license": "CC0",
"oa_url": "https://jyx.jyu.fi/bitstream/123456789/67170/1/Furqan_et_al_comprehensive_survey.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "5a0b021b853abe064f475b5820ed3dbe9676edc3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
23764116 | pes2o/s2orc | v3-fos-license | Single coronary artery originating from the right aortic sinus without a left anterior descending and circumflex artery in conventional swine
Single coronary artery is a rare coronary artery anomaly, and very few previous reports of this anatomical malformation in swine have been found. A 22 kg Yorkshire X Landrace F1 crossbred castrated male swine was presented for enrollment in a coronary stent implantation study. Coronary angiography revealed a single coronary artery arising from the right aortic sinus. The right coronary artery and the anomalous left coronary artery were implanted with novel coronary stents without any side effects.
Because porcine cardiac and coronary artery anatomy is similar to that of humans, swine are important laboratory large animal models in cardiovascular research [3,8,9,12]. The normal left coronary artery of humans and swine divides into a left anterior descending artery (LAD) and a left circumflex artery (LCX) (Figure 1-1). The normal right coronary artery (RCA) arises from the right aortic sinus and travels between the right auricle and the pulmonary trunk (Figure 1-2).
The incidence of coronary artery anomalies is approximately 1-2% in human beings [2]. In a coronary angiographic analysis of 16,573 patients, coronary malformations were detected in 48 (0.29%). The origin of the LCX from the RCA or the right aortic sinus was the most common anomaly (28 patients [58.3%]). An anomalous RCA originating from the LAD or LCX was observed in six patients (12.5%). The left coronary artery originated from the right aortic sinus in five patients, and the LAD originated from the RCA or the right aortic sinus in five patients. The RCA originated from the left aortic sinus in three patients and from an ectopic ostium in the ascending aorta in one patient [11]. In 126,595 American people, the coronary artery malformation rate was 1.3%, and various coronary artery anomalies have been reported [1,10]. There have been many reports of coronary anomalies in human interventional cardiology cases [4-6,10], but previous reports of anomalies in swine are very rare [7]. We conducted an angiogram in the course of developing a novel coronary stent, and a coronary malformation was an incidental finding during coronary stent implantation in a porcine coronary restenosis model. The study followed the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996). The study animal was a 22 kg Yorkshire X Landrace F1 crossbred castrated male swine.
Continuous hemodynamic and surface electrocardiographic monitoring was maintained throughout the procedure. Then, 5,000 units of heparin were administered intravenously as a bolus prior to the procedure, the target coronary artery was engaged using standard 7 F guide catheters, and control angiograms of both coronary arteries were performed using a nonionic contrast agent in two orthogonal views.
On angiography, the left coronary artery was absent (Figure 2-1). The right coronary angiogram showed a single coronary artery arising from the right aortic sinus (Figure 2-2).
Two types of stents were implanted in the anomalous left coronary artery and the right coronary artery of the pig. Each stent was deployed by inflating the balloon, with a resulting stent-to-artery ratio of 1.3:1 (Figure 3). Coronary angiograms were obtained immediately after stent implantation. Acute stent thrombosis is a well-known complication after stenting; our coronary angiograms showed that acute thrombosis did not occur in the stented area (Figure 4). All equipment was then removed and the carotid artery was ligated.
Our case report shows that conventional swine can have a coronary artery anomaly without clinical symptoms. Such a structural vascular anomaly may influence the outcomes of myocardial infarction experiments, but it does not affect coronary stent experiments. In humans, coronary artery malformations can cause clinical symptoms and, in rare cases, even death. Therefore, researchers may have to exclude swine with a coronary anomaly on the basis of angiographic findings.
"year": 2013,
"sha1": "6ad32cbc46322a3619bda1cf32a0207705d3d8c1",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3879342?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ad32cbc46322a3619bda1cf32a0207705d3d8c1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266889614 | pes2o/s2orc | v3-fos-license | Bacteriology of chicken meat samples from Bharatpur, Chitwan, Nepal
Many food-borne diseases are associated with the consumption of chicken meat and thus are of public health significance worldwide. A cross-sectional study was done to isolate, identify, and characterize bacteria from chicken meat samples of Bharatpur, Chitwan. A total of 102 samples were randomly collected and processed at the Microbiology laboratory of Birendra Multiple Campus, Chitwan over three months (May-July 2016). One gram of each sample was crushed in 9 ml of distilled water in a sterile mortar and pestle, followed by serial dilution, inoculation of 0.1 ml of sample onto suitable culture media, and incubation at 37 °C for 24 hours. Identification of the isolates was done by microscopic examination and biochemical tests, and the antimicrobial susceptibility pattern of the isolates was determined by the Kirby-Bauer disc diffusion method. Out of 102 meat samples, the growth positivity rate was 94.0% (n = 96/102) on all of the culture media. 26/48 fresh and 44/54 frozen samples gave positive growth, with 36 isolates from fresh and 60 isolates from frozen meat samples; the occurrences were Staphylococcus aureus 26 (27.08%), Pseudomonas spp 6 (6.25%), Proteus spp 4 (4.16%), Escherichia coli 22 (22.91%), Salmonella spp 16 (16.66%), Citrobacter spp 8 (8.33%), Acinetobacter spp 8 (8.33%), Streptococcus spp 2 (2.08%), Shigella spp 4 (4.16%), and Vibrio spp 2 (2.08%). Cefalexin (92.85%) was the most effective antibiotic against Gram-positive bacteria, followed by Amoxicillin (71.42%) and Methicillin (64.28%); the least effective antibiotic was Ampicillin (50%). Similarly, Gentamicin (76.47%) followed by Nalidixic acid (41.47%) were effective against Gram-negative bacteria, while some isolates showed resistance to all three classes of drugs, exhibiting MDR.
Introduction
The consumption of contaminated food is unsafe for health, and its consequences have been one of man's major health problems for a long time; they remain a major public health concern globally. Food-borne diseases are responsible for a large occurrence of adult illnesses and deaths; more importantly, as sources of acute diarrheal diseases, they are known to take the lives of many children every day [1]. The problem is severe in developing countries due to difficulties in securing optimal hygienic food handling practices. Evidently, in developing countries, up to an estimated 70% of cases of diarrheal disease are associated with the consumption of contaminated food, especially meat and meat products [1].
Transmission of entero-pathogenic bacteria occurs directly or indirectly through objects contaminated with feces. These include food and water, indicating the importance of fecal-oral human-to-human transmission [2]. Chicken is a rich source of meat protein and is highly consumed all over the world.
However, under a poor hygienic environment, raw chicken meat presents an ideal substrate supporting the growth of pathogenic Escherichia coli and coliform bacteria, indicating the potential presence of other pathogenic bacteria; this may even constitute a major source of food-borne illnesses in humans.
Chicken is a nutritious, healthy food that is low in fat and cholesterol compared to other meats but an excellent source of protein. Meat must be of high microbiological quality to ensure that the consumer receives a product that is not spoilt and does not carry food-borne diseases [3]. Special attention in poultry meat production is paid to the fact that live animals are hosts to a large number of different microorganisms residing on their skin, feathers, or in the alimentary tract. During slaughter, most of these microorganisms are eliminated, but subsequent contamination is possible at any stage of the production process, from feather plucking, evisceration, and washing to storage by cooling or freezing [4]. Microorganisms from the environment, equipment, and operators' hands can contaminate the meat.
The increased prevalence of Salmonella contamination in poultry has gained considerable scientific attention during the last few decades. Poultry is one of the most common reservoirs of Salmonella, and contamination of poultry products can occur during the different stages of poultry production. Poultry is a food that has been highly appreciated by man since time immemorial. It is an important, low-cost source of animal protein, rich in nutrients, phosphorus, other minerals, and B-complex vitamins [5]. Food-borne diseases associated with the consumption of poultry meat and its processed products are of public health significance worldwide [6].
Poultry and poultry meat are often found contaminated with potentially pathogenic microorganisms such as Salmonella, Campylobacter, S. aureus, E. coli, and Listeria. Microorganisms introduced through environmental exposure, lack of sanitation in slaughtering premises, equipment, and outfits, and operators' hands contaminate the meat product [7].
The progressive increase in antimicrobial resistance among enteric pathogens in developed and developing countries has become a critical area of concern [10]. Previous studies have shown that food-borne pathogens, such as Escherichia coli and Salmonella, are highly prevalent and have been isolated in stool samples from humans affected by food-borne illnesses, as well as in the meat and poultry products processed for human consumption [11-14]. The safety of commercially processed poultry products is a major area of concern for producers, consumers, and public health officials worldwide, for products excessively contaminated with microorganisms are undesirable from the standpoint of public health, storage quality, and general aesthetics [15]. The contamination of chicken meat with microorganisms during processing, handling, and transportation is undesirable, though inevitable. A higher bacterial load on the carcass could be expected when carcasses are handled unhygienically at the abattoir [16]. Two of the most common etiologic bacterial organisms responsible for causing gastroenteritis, a major public health concern in most regions of Thailand, are Salmonella and E. coli [17,18]. Several studies have reported outbreaks of infections due to consumption of contaminated food and poor hygiene, and in most cases the data are loosely based on laboratory isolates, which do not reflect the actual ratio of food-borne infections. However, a few community-based reports provide evidence of outbreaks caused by Salmonella, Shigella, E. coli, and Listeria spp. in different parts of the world [19]. Moreover, antibiotic resistance levels are also elevated among food-borne pathogens such as Salmonella and Shigella [20]. Meat, a good source of animal protein along with sensory attributes, appeals to consumers very easily.
Materials and methods
This was a cross-sectional study carried out at the microbiology laboratory of Birendra Multiple Campus, Bharatpur during a period from May to July 2016, and the samples were collected from different wholesalers and retailer meat shops in Bharatpur, Nepal.
All of the slaughter slabs, and wholesale and retail chicken meat shops in Bharatpur ward no. 7, 8, and 10, Chitwan, Nepal were visited and butchers were interviewed.
Collection of samples
A total of 102 random samples of chicken carcasses were collected from local commercial retail shops and wholesale shops in Bharatpur sub-metropolitan municipality. The collected samples were kept in separate sterile plastic bags and transferred directly to the laboratory in an insulated icebox under complete aseptic conditions without any delay to evaluate their bacteriological quality. Samples with improper labeling or inappropriate collection were rejected.
Preparation of samples (USDA 2011)
One gram of each examined sample was removed with sterile scissors and forceps after surface sterilization with a hot spatula and transferred to a sterile polyethylene bag, and 9 ml of 0.1% sterile buffered peptone water was aseptically added to the content of the bag. Each sample was then homogenized with a mortar and pestle to provide a homogenate of 1/10 dilution.
Bacterial culture
The collected samples were immediately processed without storage. Samples homogenized with peptone water were incubated at 37 °C for 5 hours, and then one loopful of the culture was streaked on Mannitol salt agar for Gram-positive bacteria, MacConkey agar for Gram-negative bacteria, Eosin Methylene Blue agar for coliforms, especially E. coli, and Xylose lysine deoxycholate agar for the identification of Salmonella spp., and incubated at 37 °C for 24 hours. Further, the suspected, isolated colonies were sub-cultured on Nutrient agar.
Disc diffusion susceptibility test
Susceptibility tests were performed by the disc diffusion method [24,25]. An 18-hour culture of test organisms incubated at 37 °C was standardized by diluting to the equivalent turbidity of a 0.5 McFarland standard before being spread over the surface of Mueller Hinton agar (MHA) (Titan Biotech Ltd., Bhiwadi-301019, Rajasthan, India) plates using a sterile cotton swab/glass spreader [26] and allowed to dry for 2 to 5 minutes. Using sterile tweezers, the antimicrobial discs ampicillin (10 mcg), nalidixic acid (30 mcg), nitrofurantoin (300 mcg), trimethoprim (5 mcg), gentamicin (10 mcg), methicillin (5 mcg), amoxicillin (30 mcg), and azithromycin (15 mcg) were placed, widely spaced, aseptically on the surface of the MHA plate. Tweezers were re-flamed after the application of each disc. The plates were then incubated at 37 °C for 24 hours. Following incubation, the Diameter of the Inhibition Zone (DIZ) was measured with a transparent ruler and expressed in millimeters (mm).
Quality control for test
In this study, the quality and accuracy of all tests were maintained by following standard procedures of collection, isolation, and identification. For identification and standardization of the Kirby-Bauer test, a standard culture of E. coli ATCC 25922 was used as a reference strain. For quality control, media, antibiotics, and reagents were prepared, stored, and utilized as recommended by the manufacturing company. Antibiotic discs were stored at refrigerator temperature. For each batch of tests, a positive and a negative known culture were used for color reactions, biochemical tests, and antibiotic sensitivity tests.
Statistical analysis
Data entry, management, and analysis were done using SPSS v20. The association between different risk factors and the antibiotic resistivity pattern of the isolated bacteria was assessed statistically by a chi-square (χ²) test.
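As an illustration only (the authors used SPSS v20, not code), the same chi-square test of association can be reproduced in Python with scipy; the 2×2 table below is a hypothetical placeholder, not the study's data:

```python
# Hypothetical sketch of the chi-square test of association used in the
# study. The counts below are placeholders, NOT the paper's actual data.
from scipy.stats import chi2_contingency

# Rows: fresh vs. frozen meat; columns: isolate-positive vs. isolate-negative.
table = [[12, 24],
         [20, 40]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# Proportional rows give p = 1.0; p > 0.05 means no significant association,
# which is the criterion used throughout the Results section.
```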
Results
The pattern of growth of chicken meat samples
Out of 102 meat samples, growth was observed in 96 (94.11%) samples, and no growth was observed in 6 (5.88%) samples (Figure 1).
Differentiation based on Gram's reaction
Out of 96 positive cultures, 28 isolates were Gram-positive, while the remaining 68 isolates were Gram-negative (Figures 2, 3).
Antimicrobial susceptibility pattern of Staphylococcus aureus
Twenty-six (27.08%) Staphylococcus aureus isolates were obtained from the meat samples. Cefalexin (92.30%), followed by Amoxycillin (69.23%), was the most effective drug against the isolated species (Table 2).
Antibiotic susceptibility pattern of Proteus spp
Four (4.16%) Proteus spp isolates were obtained from the meat samples. Trimethoprim (100%) was the most effective drug against Proteus spp, followed by Gentamicin (50%) and Nalidixic acid (50%) (Table 8).
Antibiotic susceptibility pattern of Vibrio spp
Two (2.08%) Vibrio spp isolates were obtained from the meat samples, against which Gentamicin (100%) was the most effective drug (Table 9).
Antibiotic susceptibility pattern of Pseudomonas spp
Six (6.25%) Pseudomonas isolates were obtained from the meat samples. Gentamicin (100%) was the most effective drug, followed by Nalidixic acid (66.66%) (Table 10).
Age-wise distribution of the butchers and distribution of bacteria among fresh and frozen meat samples
Among the 96 isolates, the highest number, 60 (62.5%), was recorded from the age group of 16 - 30 years. The lowest number of isolates, 4 (4.167%), was detected from the age group of 46 - 60 years. Lastly, 32 (33.33%) bacterial isolates were detected from the age group of 31 - 45 years, summing both fresh and frozen meat samples. The prevalence of bacteria in fresh and frozen meats was not significantly affected by the age of the butchers (p > 0.05) (Table 12).
Sex-wise distribution of butchers and distribution of bacteria among fresh and frozen meat samples
Among the 96 isolates, 22 (22.91%) bacteria were detected from fresh meat handled by male butchers, while 14 (14.58%) isolates were detected from fresh meat handled by female butchers. Similarly, 42 (43.75%) isolates were detected from frozen meat provided by male meat sellers, while 18 (18.75%) bacteria were isolated from frozen meat provided by female meat sellers. From the data collected, meat from male butchers was more contaminated than meat sold by female butchers, as the number of isolates in the case of males was considerably higher than that of females. There was no significant association between the sex of the respondents and the quality of the products being sold (p > 0.05) (Table 13).
Distribution of bacteria among the type of shops and distribution among fresh and frozen meat samples
Among the 96 isolates, 20 (20.83%) bacteria were isolated from fresh meat taken from retailer shops, while 48 (50%) isolates were detected from frozen meat samples taken from retailer shops. Similarly, 16 (16.66%) isolates were detected from fresh meat and the remaining isolates from frozen meat samples taken from wholesale shops. From the data collected, the number of bacterial isolates was higher for frozen meats taken from retail shops. However, there was no significant association between the type of shop and the distribution of bacteria among fresh and frozen meat samples (p > 0.05) (Table 14).
Distribution of bacteria among literacy groups and the occurrence of isolates from fresh and frozen meat samples
Among the 96 isolates, 20 (20.83%) bacteria were isolated from fresh meat provided by literate butchers, while 34 (35.41%) bacteria were isolated from frozen meat taken from literate meat sellers. Similarly, 16 (16.66%) bacterial isolates were detected from fresh meat provided by illiterate butchers, while 26 (27.08%) isolates were identified from frozen meat samples taken from illiterate butchers. The number of bacterial isolates was higher in the case of frozen meat collected from literate meat sellers. There was no significant association between literacy group and the occurrence of bacteria in fresh and frozen meat samples (p > 0.05) (Table 15).
Discussion
Bacterial contamination of chicken meat can come from the chicken itself, workers, tools, and equipment, as well as from hygiene in the slaughterhouse. In the present study, conducted in Chitwan, the following antimicrobial drugs were used for AST: Ampicillin, Amoxycillin, Methicillin, Cefalexin, and Azithromycin against Gram-positive isolates, and Nitrofurantoin, Gentamicin, Trimethoprim, and Nalidixic acid against Gram-negative isolates.
Among all the antibiotics used against the Gram-positive isolates, most strains were resistant to Ampicillin (50%), and among the Gram-negative isolates, most bacteria were resistant to Trimethoprim (63.23%).
Among the 26 Staphylococcus spp isolated in our study, only 4 (15.38%) showed Multi-Drug Resistance (MDR) properties, i.e., among the 5 antimicrobial drugs used, the isolated spp were resistant to more than 2 classes of drugs. Similarly, out of 22 isolated E. coli, 10 (45.45%) showed MDR. The poultry slaughtered and dressed under Chitwan conditions carries high initial contamination, which is carried through to the point at which the meat is offered to consumers as retail meat. So, retail meat would harbor all the bacteria already present in the meat as inherent contamination through infection, together with those introduced during handling, improper dressing, cleaning, unsanitary conditions, and retailing. To assure meat quality, microbial load assessment is deemed necessary. Hence, this study was conducted to assess the microbiological situation of fresh chicken meat, which can be a reflection of the hygienic condition of the meat consumed and of the possible hazards to public health. Chicken meat can also act as a reservoir of drug-resistant bacteria. Antimicrobial resistance among E. coli, Salmonella, and other species in chicken meat is of increasing concern due to the potential for the transfer of these resistant pathogens to the human population. Most of these genera are known to be of public health concern and have been associated with cases of gastroenteritis and other food-borne diseases [27]. The sources of these contaminations have been linked to poor hygienic conditions of the handlers, the environment, and cross-processing contaminations [28-30]. Among the 102 meat samples, only two did not produce any microorganisms when incubated at 37 °C. Of the samples, 58 contained coliform bacteria, 58 contained S. aureus, 56 showed Pseudomonas growth, and 38 contained E. coli. Among the samples, 32 out of the 58 samples were S. aureus positive. The susceptibility results of bacteria isolated from the meat samples showed that they are highly resistant to all the antibiotics tested. Gram-negative organisms are more resistant than Gram-positive ones; this is expected because of the intrinsic nature of the Gram-negative cell wall. The Gram-negative micro-organisms isolated belong to the Enterobacteriaceae family, a group of organisms that is resistant to various classes of antibiotics.
Conclusion
A total of 102 chicken meat samples were collected from different retail and wholesale shops, out of which 40 fresh and 56 frozen samples were growth positive, while 6 samples were growth negative. The samples were collected from shops with 64 male and 38 female butchers. The results of the present study indicate that the prevalence of common food-borne pathogens in market samples of chicken meat in Bharatpur, Chitwan was at a high level. On antimicrobial susceptibility testing, Ampicillin was the most effective antibiotic against Gram-positive bacteria, followed by Amoxicillin, Cefalexin, and Azithromycin. Most of the isolated Staphylococcus were Methicillin-resistant, and some exhibited MDR too. Similarly, Gentamicin, followed by Trimethoprim and Nalidixic acid, was effective against Gram-negative bacteria, while some isolates showed resistance to all three classes of drugs, exhibiting MDR. The most frequent MDR organism isolated was Staphylococcus aureus, followed by E. coli and Salmonella spp. There was no significant association between the type of meat shop and the condition of the meat samples (p > 0.05). Similarly, there was no association between the age of the meat sellers and the occurrence of pathogens in fresh and frozen meat samples. Hence, we cannot say that the age factor is responsible for greater contamination of meat products. Likewise, there was no correlation between the sex of the butchers and the hygiene of the meat, so we cannot conclude that male butchers handled meat improperly, even though the collected data suggested that result. The present study provides an idea of the occurrence of pathogens in chicken meat in correlation with different risk factors and may help policymakers to build rules and regulations that strengthen the quality of health of people consuming meat and meat products.
Figure 1: Showing a pattern of growth of meat samples.
Figure 4: Showing the number of MDR organisms.
Table 12: Showing age-wise distribution of butchers and distribution of isolates among fresh and frozen meat samples.
Table 13: Showing sex-wise distribution of butchers and distribution of isolates among fresh and frozen meat samples.
Table 14: Showing distribution of bacteria among the type of shops and distribution among fresh and frozen meat samples.
Table 15: Showing distribution of bacteria among literacy groups and their prevalence in fresh and frozen meat samples. | 2024-01-10T16:21:57.209Z | 2023-12-29T00:00:00.000 | {
"year": 2023,
"sha1": "f989cef1b680ea72570e08a1ea6db3e629b23c29",
"oa_license": "CCBY",
"oa_url": "https://www.peertechzpublications.org/articles/OJB-7-125.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3dcab584355cadafae0d7c53ab82b80e580ddf28",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
67764220 | pes2o/s2orc | v3-fos-license | High efficiency regeneration and genetic stability analysis of somatic clones of Gynura bicolor DC .
Gynura bicolor DC. is a perennial vegetable and medicinal plant. It is an important source of anthocyanins. The effects of different growth regulators on callus induction and plant regeneration were evaluated. The best SFC index (8.6) for plant regeneration was obtained with the combination of 2,4-D at 2.0 mg/l and BA at 0.5 mg/l, and the frequency of regenerating explants was 78.3%. The highest number of shoots per explant was 11. The genetic stability of the regenerants was analyzed by random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) molecular markers and by flow cytometry. The results indicated that no somaclonal variation was detected among the regenerants. To our knowledge, this is the first report of a somatic clone study in G. bicolor. This highly efficient and reproducible protocol will advance further studies on the secondary metabolic products, transformation and breeding of this potential medicinal plant.
In vitro cell and tissue cultures are frequently used for the production of secondary metabolites (Zhao and Verpoorte, 2007; Dörnenburg, 2008); therefore, in vitro tissue culture of G. bicolor may be a potential method for the production of large amounts of anthocyanins for medicinal applications. The present regeneration protocol is useful for large-scale clonal multiplication as well as for transformation studies.
In plant regeneration, one of the most crucial concerns is to retain genetic integrity with respect to the mother plants. Several strategies are available for detecting genetic variations, including DNA analysis techniques and flow cytometry analysis (Fiuk et al., 2010). It is important to detect variations in somatic clones at an early stage during plant tissue culture to avoid further consumption of labor and supplies (Qin et al., 2006, 2007). Randomly amplified polymorphic DNA (RAPD) (Williams et al., 1990) and inter-simple sequence repeat (ISSR) (Zietkiewicz et al., 1994) markers have been successfully employed to assess genomic stability in regenerated plants, including those with no obvious phenotypic alterations (Tyagi et al., 2010; Samantaray and Maiti, 2010). Flow cytometry was performed to study the DNA content stability of the regenerants. This study describes an efficient system for the plant regeneration of G. bicolor. The genetic variation of the regenerated plants was analyzed by RAPD, ISSR and flow cytometry techniques. A highly efficient protocol for producing tissue-culture-based regenerants without somatic genetic variations would be useful for further studies on the secondary metabolic products, gene transformations and plant breeding of this potential medicinal plant species.
Culture medium
The leaf explants were cultured on MS medium supplemented with different combinations of plant growth regulators. All the media were solidified with agar (8 g/l), adjusted to pH 5.8 by 0.1 N NaOH and sterilized by autoclaving at 121°C for 20 min. All plant growth regulators were incorporated in the medium before autoclaving. AgNO3 was added to the medium after autoclaving by filter sterilization.
Culture conditions
G. bicolor genotype Hongye was used in this study. Tissues of young and fully expanded leaves were used as explants for the culture. Leaves were surface-sterilized in 70% ethanol for 30 s, followed by immersion in mercuric chloride (0.1% w/v) with two drops of Tween-20 for 8 min. The plant materials were subsequently rinsed three times in sterile water. The leaves were cut into 1 cm² pieces and used as explants. Ten explants were inoculated in 90 mm Petri dishes containing 25 cm³ of solidified medium. Leaf explants were placed with the adaxial surface towards the medium and then incubated in a growth chamber at 26 ± 2°C with a photoperiod of 14 h and a light intensity of 40 µmol m⁻² s⁻¹ provided by fluorescent tubes. The same culture conditions were used in all the experiments.
Effect of plant growth regulators on callus and multiple shoot bud formation
Callus was induced on explants by culturing on media containing various levels of NAA, 2,4-D, KT and BA (Table 1); the combinations of growth regulators were designed according to results obtained from preliminary experiments. After four weeks of callus induction, the calli were transferred to shoot induction medium containing TDZ at 0.5 mg/l and AgNO3 at 4 mg/l. The calli were subcultured every four weeks. The number of explants with shoot buds was scored after 15 weeks of culture, and the adventitious shoots formed per explant were counted.
Rooting and plant establishment
When shoots reached 5 mm in length, the shoots were excised from the explants and placed on shoot elongation medium containing TDZ at 0.2 mg/l and AgNO3 at 4 mg/l. The elongated shoots were transferred to rooting medium containing IBA at 0.5 mg/l. After rooting, the plantlets were transferred to pots containing peat and perlite (2:1) and placed in a greenhouse under plastic covering to maintain a high humidity.
DNA extraction and PCR amplification conditions
Forty (40) randomly selected regenerated plants along with the mother plant were used for RAPD and ISSR analysis. Total genomic DNA was extracted from leaves of each individual using the CTAB method (Guo et al., 2003). In total, 36 arbitrary 10-mer RAPD primers were used for the RAPD analysis following the method of Williams et al. (1990). RAPD amplifications were performed routinely using a PCR mixture (20 µl) which contained 25 ng of genomic DNA as template, 2.0 µl of 10× PCR buffer (1.5 mM MgCl2), 200 µM dNTPs, 1 unit (U) of Taq DNA polymerase (Takara Shuzo Co.) and 1 µM of each primer (Table 2). In the case of the ISSR primers, the optimal annealing temperature was found to vary according to the base composition of the primers. ISSR amplifications were performed in a volume of 25 µl containing 25 ng of genomic DNA as template, 2.5 µl of 10× PCR buffer (1.5 mM MgCl2), 200 µM dNTPs, 1 unit (U) of Taq DNA polymerase (Takara Shuzo Co.) and 1 µM of each primer. PCR was performed with an initial denaturation at 94°C for 5 min followed by 35 cycles of 45 s denaturation at 94°C, 30 s at the annealing temperature and 90 s extension at 72°C, with a final extension at 72°C for 5 min, using a thermal cycler (MJ Mini, Bio-Rad, USA). The annealing temperature was adjusted according to the melting temperature (Tm) of the primer used in the reaction. The amplification products were electrophoresed in 0.8% agarose gels in 0.5× TBE (Tris-borate-EDTA) buffer. The size of the amplification products was estimated using a DNA ladder, DL 2000 (Tiangen, China). The gels were photographed under UV light. Amplification with each primer was repeated twice to confirm the reproducibility of the results.
Flow cytometry analysis
For flow cytometric analysis, young leaves of the target species (mother plant and plantlets growing in vitro) and of the internal standard (0.5 cm²) were chopped simultaneously with a sharp razor blade in a plastic Petri dish with 1 cm³ of nucleus-isolation buffer [0.1 M Tris, 2.5 mM MgCl2·6H2O, 85 mM NaCl, 0.1% (v/v) Triton X-100, 1% (v/v) PVP-10; pH 7.0], incubated for 30 s, filtered through a 35 µm mesh and stained with 0.2 cm³ of staining solution including PI and RNase. The relative fluorescence intensities of the stained nuclei were measured by a flow cytometer (BD FACSCalibur). Solanum lycopersicum cv. Stupicke nuclei (2C = 1.95 pg; Borchert et al., 2007; Xing et al., 2010) were used as an internal standard for genome size. The absolute DNA amounts of the samples were calculated based on the values of the G1 peaks of G. bicolor and tomato. Routinely, at least 5000 nuclei were measured per sample. The 2C nuclear DNA content of an unknown sample was calculated as follows:

sample 2C DNA content = (sample peak mean / standard peak mean) × 2C DNA content of the standard (2)
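The calculation in (2) is simple enough to sketch directly; a minimal Python version (the peak means below are hypothetical readings, only the tomato standard value of 1.95 pg/2C comes from the text):

```python
# Sketch of the 2C nuclear DNA content calculation in equation (2).
# Internal standard: tomato (S. lycopersicum cv. Stupicke), 2C = 1.95 pg.
def dna_content_2c(sample_peak_mean, standard_peak_mean, standard_2c_pg=1.95):
    """Sample 2C content = (sample peak mean / standard peak mean) x standard 2C."""
    return sample_peak_mean / standard_peak_mean * standard_2c_pg

# Hypothetical G1-peak means from the flow cytometer:
print(dna_content_2c(388.0, 50.0))  # -> 15.132 pg/2C, close to the mother plant
```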
Statistical analysis
The experiments were set up in a completely randomized design. All values obtained from the three repeated experiments were averaged. The data were subjected to analysis of variance and are presented as mean ± standard error (SE). Statistical analysis of quantitative data was carried out using the LSD test. All statistical analyses were performed at the 5% level using DPS (version 3.01) (Ruifeng Info Technology Ltd., Hangzhou, China).
Effect of plant growth regulators on callus and multiple shoot bud formation
The responses of the explants to various combinations of growth regulators with respect to callus and adventitious shoot formation are summarized in Table 1. In our study, the frequency of callus formation was 100%. Callus formation was observed in the subepidermal area after 12 days of culture. The formed callus had a purple color (Figure 1a), which indicated that it contained anthocyanins. Callus browning was observed after four weeks of culture, and less browning occurred in medium containing 2,4-D. 2,4-D was effective in decreasing the frequency of callus browning (3.3 to 36.7%), whereas NAA was less efficient in browning control, with a callus browning frequency of 66.7 to 68.3% (Table 1).
There was a distinct difference in the appearance of explants grown on shoot induction medium supplemented with AgNO3. This study showed that the addition of AgNO3 at 4 mg/l was beneficial in alleviating callus browning and greatly increased the number of plantlets produced. Browning was delayed and the callus browning rate was 31.6% on the shoot induction medium supplemented with AgNO3, whereas the callus browning rate was 81.7% on the shoot induction medium without AgNO3. The highest frequency of shoot regeneration (78.3%) was obtained in the medium containing 4 mg/l AgNO3, while the shoot regeneration rate was only 43.3% for the control (without AgNO3). Similarly, the highest number of shoots per explant (11.2) was obtained in the medium containing 4 mg/l AgNO3, whereas the number of shoots per explant was only 5.1 for the control (without AgNO3).
The presence of AgNO3 in the shoot induction medium not only enhanced shoot regeneration efficiency but also expedited the initiation of adventitious buds; multiple shoots initiated after 12 weeks on the shoot induction medium containing AgNO3, 3 to 4 weeks earlier than on medium without AgNO3. As a result, 4 mg/l AgNO3 was added to the shoot induction medium in all the subsequent experiments. The increase in shoot regeneration frequency induced by AgNO3 may be due to interruption of the ethylene signal transduction pathway. AgNO3 has been shown to promote regeneration by acting as a potent inhibitor of ethylene action in chilli (Ashrafuzzaman et al., 2009), cotton (Divya et al., 2008) and common bean (Dang and Wei, 2009).
Multiple shoot formation started to become evident after 12 weeks (Figure 1b). The number of explants with shoot buds was scored after 15 weeks of culture, and the results showed that 6-benzylaminopurine (BA) stimulated multiplication to a greater extent than kinetin (KT). It was concluded that the promotive effect of BA on shoot regeneration resulted in a higher frequency of regenerating explants. The addition of 2,4-D had a positive effect on shoot formation. The combination of BA and 2,4-D resulted in a higher proliferation rate than the other combinations. Although four combinations of 2,4-D and BA performed well and did not differ statistically, the best SFC index (8.6) was obtained with the combination of 2,4-D at 2.0 mg/l and BA at 0.5 mg/l and was 1.8 to 2.8 times higher than those of the other combinations. The frequency of regenerating explants was 78.3% with the best combination. The number of shoots per explant was over 10 in most growth regulator combinations (Figure 1c, Table 1).
The SFC index is a parameter that is effective in evaluating the overall regenerative potential of explants, as it results from the combination of their regeneration frequencies with the number of shoots formed per explant (Ozudogru et al., 2005). Our results also showed that the higher shoot regeneration efficiency resulted from a higher frequency of regenerating explants rather than a higher number of shoots per regenerating callus.
Rooting and plant establishment
A mixed pool of shoots (2.0 cm long or more) on shoot elongation medium was evaluated for rooting. Isolated shoots were excised and rooted, and 100% rooting was achieved on IBA at 0.5 mg/l within 11 days of incubation. Acclimatization of the rooted shoots was easily achieved after their transfer to pots (Figure 1d). The survival rate was over 80%, and after seven weeks the rooted plantlets were transferred successfully to the greenhouse.
Genetic stability of regenerants
In order to confirm genetic integrity, the DNA of 40 randomly selected regenerated plants was compared with the DNA of the mother plant. The 36 RAPD primers used in this analysis gave rise to 150 scorable band classes, ranging from 150 bp to 2.5 kb in size. The number of bands for each primer varied from 2 (C-30) to 8 (C-13), with an average of 4.17 bands per RAPD primer (Table 2). Of the 28 arbitrary ISSR primers initially screened, 18 produced clear and scorable bands (Table 2), ranging from 200 bp to 2.7 kb in size. Screening with the 18 ISSR primers generated 89 scorable band classes. The number of bands produced by each primer ranged from 2 to 7, with an average of 3.18 bands per ISSR primer. A total of 9799 bands (number of plantlets analyzed × number of bands obtained with the RAPD and ISSR primers) were generated, all of them monomorphic when comparing the regenerated plants with the mother plant. Examples of the monomorphic band classes obtained are shown in Figure 2a for the RAPD markers and Figure 2b for the ISSR markers. These analyses clearly indicated that there were no polymorphic bands.
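The total of 9799 bands follows arithmetically from the figures just given; a quick check (all numbers taken from the text):

```python
# Sanity check of the band total reported above (numbers from the text).
plants = 40 + 1            # 40 regenerated plants plus the mother plant
rapd_band_classes = 150    # from 36 RAPD primers
issr_band_classes = 89     # from 18 ISSR primers
print(plants * (rapd_band_classes + issr_band_classes))  # -> 9799
```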
Both RAPD and ISSR markers have been successfully applied to detect genetic similarities or dissimilarities in various plants (Chandrika and Ravishankar Rai, 2009; Sikdar et al., 2010). RAPD and ISSR were chosen because they amplify different regions of the genome, allowing a better analysis of the genetic stability/variation of regenerated plants, as well as for their simplicity and cost effectiveness. When analyzing micropropagated plants of kiwifruit, Palombi and Damiano (2002) also suggested that the use of more than one DNA amplification technique is advantageous in evaluating somaclonal variation.
Nuclear DNA content (genome size) is a specific karyological feature that is useful for systematic purposes and evolutionary considerations (Bennett and Leitch, 1995). Genome size is positively correlated with nuclear volume, cell volume, mitotic cycle time and the duration of meiosis. Tissue culture has been considered a key source of chromosome instabilities, although the molecular mode of action is still unknown (Guo et al., 2005). In vitro regenerated Hypericum perforatum plants and their progenies showed cytological variations (Brutovska et al., 1998), potentially linked to high variation in the concentrations of characteristic bioactive compounds of the medicinal plant (Cellárová et al., 1994, 1997). Flow cytometry has aided this research, as it has been demonstrated to be a convenient, accurate, rapid and highly reproducible method for estimating the nuclear genome size of plants. In order to test the genetic variability arising from the regeneration protocol, leaves of the regenerated plants were subjected to flow cytometric measurements with tomato leaves (S. lycopersicum cv. Stupicke) as an internal standard, and the absolute DNA contents of 30 regenerated plants of G. bicolor were calculated. The nuclear DNA contents of the 30 regenerated plants varied from 15.07 to 15.17 pg/2C, similar to that of the mother plant (15.13 pg), the source material. This result showed no significant differences (ANOVA, Tukey-HSD) between the mother plant and the in vitro cultured plants, indicating that they maintain their genetic stability during in vitro culture. This confirms the usefulness of tissue culture for the production of certified plant material to obtain herbal medicines.
Figure 2: (a) RAPD profile of plantlets regenerated from G. bicolor obtained with the primer C-25. Lanes 1, mother plant; lanes 2 to 11, regenerated plantlets; MW, DNA molecular size marker. (b) ISSR profile of plantlets regenerated from G. bicolor obtained with the primer S-2. Lanes 1, mother plant; lanes 2 to 11, regenerated plantlets.
In conclusion, we have successfully developed a novel and efficient protocol for plant regeneration of G. bicolor and analyzed the genetic stability of the regenerated plants by flow cytometry analysis, RAPD and ISSR molecular markers. This protocol will be useful for further secondary metabolic product, transformation and breeding studies. | 2018-12-28T14:47:48.661Z | 2011-09-30T00:00:00.000 | {
"year": 2011,
"sha1": "f53eeed7f91f00f73334d208e946eb1e1f99e8ae",
"oa_license": "CCBY",
"oa_url": "http://academicjournals.org/journal/AJB/article-full-text-pdf/32A89B532091",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d8a9f71b5a159a7fa5945322bf25ce5da9a72677",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
44662451 | pes2o/s2orc | v3-fos-license | A New FOC Approach of Induction Motor Drive Using DTC Strategy for the Minimization of CMV
This paper presents a new FOC approach for induction motor drives using a DTC strategy for the minimization of CMV (common mode voltage), with switching tables for the generation of PWM signals. High performance induction motor drives require good transient and steady state performance. To achieve high performance, there are two control strategies for induction motor drives, namely field oriented control (FOC) and direct torque control (DTC). Though these two methods give good transient performance, FOC needs reference frame transformations and DTC gives large steady state ripples. To overcome these drawbacks, this paper presents a novel FOC algorithm for induction motor drives, which combines the principles of both FOC and DTC. The proposed method uses a predetermined switching table instead of a much more time consuming pulse width modulation (PWM) procedure. This approach gives a quick torque response like DTC and reduced ripple like FOC. The switching table is based on the conventional DTC principle, which gives good performance with reduced common mode voltage variations. To validate the proposed method, numerical simulations have been carried out and compared with existing algorithms. The simulation results confirm the effectiveness of the proposed method.
INTRODUCTION
High performance induction motor drives require decoupled torque and flux control, which can be achieved by using the FOC strategy. Hence, this control technique is becoming popular in many industrial applications. In 1972, F. Blaschke presented a paper on FOC for induction motors [1]. Though the FOC method gives decoupled control, it requires reference frame transformations, which increase the complexity of the system. To improve the performance of the FOC strategy, many researchers have published various papers [2-4].
In 1985, Takahashi introduced the direct torque control (DTC) scheme [5]. In contrast to FOC, the DTC method requires knowledge of the stator resistance only. Hence, it reduces the associated sensitivity to parameter variations and eliminates the need for speed information. DTC offers many advantages, such as the absence of coordinate transformations and of a PWM modulator, when compared with the FOC strategy. Moreover, DTC is simple to implement, because it needs only two hysteresis comparators and a lookup table to control both the flux and the torque. A detailed comparison between the FOC and DTC methods is given in [6]. Though DTC gives a fast dynamic response, it gives large ripple in the steady state current, torque and flux responses. The conventional FOC strategy uses hysteresis-type current controllers to generate the PWM signals. However, this can also be achieved by using switching tables [7]. Hence, to overcome the drawbacks of FOC and DTC, this paper presents a new FOC scheme, which combines the principles of both FOC and DTC. The proposed algorithm uses sophisticated switching tables to generate the PWM signals for the inverter. Moreover, the proposed method does not require reference frame transformations and gives good steady state and transient performance.
CONVENTIONAL FOC ALGORITHM
Though the induction motor has a very simple construction, its mathematical model is complex due to the coupling between a large number of variables and the non-linearities. FOC offers a solution to circumvent the need to solve high order equations and achieves efficient control with high dynamics. The FOC algorithm controls the components of the motor stator currents, represented by a vector, in a rotating reference frame. In the FOC algorithm, the machine torque and rotor flux linkage are regulated by controlling the stator current vector, which is resolved into a torque-producing component and a flux-producing component. To achieve decoupled control, the entire rotor flux is aligned along the d-axis, and hence the q-axis flux component becomes zero. With this, the torque expression can be modified as given in (2).
Hence, the total rotor flux can be given as in (3).
PROPOSED FOC ALGORITHM WITH DTC STRATEGY
The electromagnetic torque expression for an induction motor can also be represented as in (5), where δ is the angle between the stator current and rotor flux linkage vectors, as shown in Figure 1. From (5), it can be observed that the variation of the torque depends on the variation of δ. Hence, fast torque control can be achieved by rapidly changing δ in the required direction. This is the basic principle of the proposed FOC. For a short transient, the rotor flux is almost unchanged. Hence, rapid changes of the electromagnetic torque can be produced by rotating the d- and q-components of the stator current vector in the required direction according to the demanded torque. Here, the d- and q-axis stator currents are referred to the synchronously rotating reference frame. The approximate stator voltage expression can then be written down and, by assuming the rotor flux linkage vector to be constant, simplified further.
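For orientation, a commonly quoted form of the torque-angle relation referred to above is (the exact constants, symbols and sign conventions may differ from the authors' numbered equations):

\[ T_e \;=\; \frac{3}{2}\,\frac{P}{2}\,\frac{L_m}{L_r}\,\lvert \psi_r \rvert\,\lvert i_s \rvert\,\sin\delta , \]

so that, with \(\lvert \psi_r \rvert\) nearly constant over a short transient, a rapid change of δ translates directly into a rapid change of torque.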
For a short time interval Δt, the stator current expression can be represented as given in (9).
PROPOSED FOC ALGORITHM FOR REDUCED COMMON MODE VOLTAGE
The common mode voltage is the potential of the star point of the load with respect to the center of the dc bus of the VSI, as shown in Figure 4. A set of phase voltage equations can be written as given in (10).
where Vao, Vbo and Vco are the inverter pole voltages and Vso is the common mode voltage.
Hence, if the drive is fed by a balanced three-phase supply, the common mode voltage is zero. However, the common mode voltage inevitably exists when the drive is fed from an inverter employing a PWM technique, because the VSI cannot produce pure sinusoidal voltages and has discrete output voltages. A detailed analysis is given in various papers [8-11].
It can be shown that the switching state and the dc bus voltage decide the common mode voltage. There are eight available output voltage vectors, in accordance with the eight different switching states of the inverter. According to the switching states of the inverter, the common mode voltage can be expressed as given in (12).
where Sa, Sb and Sc denote the switching states of each phase. The common mode voltage for each inverter state is given in Table 2, which shows that, if only even or only odd voltage vectors are used, no common mode voltage variation is generated.
If a transition occurs from an even voltage vector to an odd one (or vice versa), a common mode variation of amplitude Vdc/3 is generated. If a transition from an odd (even) voltage vector to the zero (seventh) voltage vector occurs, a common mode variation of amplitude Vdc/3 is generated. If a transition from an odd (even) voltage vector to the seventh (zero) voltage vector occurs, a common mode variation of amplitude 2Vdc/3 is generated. Finally, if a transition occurs from the zero to the seventh voltage vector or vice versa, a common mode variation of amplitude Vdc is generated.
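These per-state values and transition amplitudes are easy to verify numerically; a small sketch, assuming the usual convention of measuring the pole voltages from the dc-bus midpoint (a phase contributes +Vdc/2 when its switch state is 1 and -Vdc/2 when it is 0):

```python
# Common mode voltage per inverter switching state, with pole voltages
# measured from the dc-bus midpoint (a conventional assumption).
Vdc = 540.0  # dc link voltage used later in the simulations (V)

states = {  # vector name -> (Sa, Sb, Sc)
    "V0": (0, 0, 0), "V1": (1, 0, 0), "V2": (1, 1, 0), "V3": (0, 1, 0),
    "V4": (0, 1, 1), "V5": (0, 0, 1), "V6": (1, 0, 1), "V7": (1, 1, 1),
}

def cmv(name):
    sa, sb, sc = states[name]
    return Vdc * (sa + sb + sc) / 3.0 - Vdc / 2.0  # (Vao + Vbo + Vco) / 3

for name in states:                 # odd vectors: -Vdc/6, even vectors: +Vdc/6,
    print(name, cmv(name))          # V0: -Vdc/2, V7: +Vdc/2

print(abs(cmv("V1") - cmv("V2")))   # odd <-> even transition: Vdc/3 = 180 V
print(abs(cmv("V1") - cmv("V7")))   # odd -> seventh: 2*Vdc/3 = 360 V
print(abs(cmv("V0") - cmv("V7")))   # zero <-> seventh: Vdc = 540 V
```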
Therefore, from the point of view of common mode emissions, the worst case is a transition between the two null voltage vectors. To minimize the generated common mode emissions of the drive, the use of both null voltage vectors (zero and seventh) should be avoided.
The block diagram of the proposed DTC-based FOC algorithm is shown in Figure 5. As in conventional vector control, the proposed algorithm generates d-axis and q-axis reference stator currents in the synchronously rotating reference frame. Then, as in DTC, the proposed technique uses two-level hysteresis controllers and a lookup table, thus eliminating the time consuming PWM procedure. The generated d- and q-axis current commands are compared with the actual current values obtained from the measured phase currents. The current errors are used to produce d- and q-axis flags as inputs to the switching table. A third input to the table determines the sector through which the current vector is passing. Based on the outputs of the hysteresis controllers and the position of the stator current vector, the optimum switching table is constructed. This gives the optimum selection of the switching voltage space vectors for all possible stator current vector positions.
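As a purely schematic sketch of this kind of flag-plus-sector selection logic (a hypothetical rule, not the authors' optimum table), the candidates can be restricted to the odd active vectors, so that no common mode voltage variation is ever generated, and the vector best aligned with the desired current correction is chosen:

```python
import math

# Hypothetical sketch of hysteresis-flag based vector selection. This is an
# illustrative rule, NOT the paper's optimum switching table. Restricting the
# choice to the odd active vectors V1, V3, V5 avoids CMV variations entirely.
ODD_VECTORS = {  # stationary-frame unit directions of V1, V3, V5
    "V1": (math.cos(0.0), math.sin(0.0)),
    "V3": (math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)),
    "V5": (math.cos(4 * math.pi / 3), math.sin(4 * math.pi / 3)),
}

def select_vector(d_flag, q_flag, theta):
    """d_flag, q_flag: +1/-1 hysteresis outputs (raise/lower that current);
    theta: angle of the rotating d-q frame (rotor flux position)."""
    # Desired correction direction, rotated back into the stationary frame.
    ex = d_flag * math.cos(theta) - q_flag * math.sin(theta)
    ey = d_flag * math.sin(theta) + q_flag * math.cos(theta)
    # Pick the odd vector most closely aligned with the desired correction.
    return max(ODD_VECTORS,
               key=lambda v: ODD_VECTORS[v][0] * ex + ODD_VECTORS[v][1] * ey)

print(select_vector(+1, -1, math.radians(30)))  # -> "V1" for this example
```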
SIMULATION RESULTS AND DISCUSSIONS
To validate the proposed algorithms, numerical simulation studies have been carried out using Matlab-Simulink. For the simulation studies, the dc link voltage is taken as 540 V. The parameters of the induction motor used in this paper are Rs = 1.57 ohm, Rr = 1.21 ohm, Lm = 0.165 H, Ls = 0.17 H, Lr = 0.17 H and J = 0.089 kg-m². The simulation results of the proposed algorithms are shown in Figures 6-10. From the results, it can be observed that the proposed algorithm gives good performance during transient and steady state conditions. Since the proposed algorithm has been developed using switching tables in which the zero voltage vectors are eliminated in order to reduce the common mode voltage variations, it gives slightly increased current ripple when compared with the conventional algorithm.
CONCLUSION
The FOC algorithm is becoming popular in high-performance applications. To eliminate the reference frame transformations of the conventional FOC algorithm, a novel FOC algorithm is presented in this paper, based on switching tables. The proposed algorithm combines the basic principles of the FOC and direct torque control algorithms. It uses the instantaneous errors in the d- and q-axis stator currents, together with sector information, to select a suitable voltage vector. Hence, the proposed algorithm uses a predetermined switching table instead of the much more time consuming PWM procedure of the conventional FOC algorithm. In order to reduce the common mode voltage variations, the zero voltage vectors were not used. From the simulation results, it can be observed that the proposed algorithm gives good performance, with a small increase in the steady state current ripple and a drastic reduction in common mode voltage variations when compared with the conventional FOC algorithm.
"year": 2013,
"sha1": "a1d2f001fe2c9d2957aebe07e57573735f082ef1",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.11591/ijpeds.v3i2.2416",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "86d983447a4eadcaaf0b9bc4c38b88bc18ad1294",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
14834489 | pes2o/s2orc | v3-fos-license | Lie Bialgebra Structures for Centrally Extended Two- Dimensional Galilei Algebra and their Lie-Poisson Counterparts
All bialgebra structures for the centrally extended Galilei algebra are classified. The corresponding Lie-Poisson structures on the centrally extended Galilei group are found.
Introduction
Much interest has been attracted in recent years to the problem of deformations of space-time symmetry groups [1], [2], [3], [4], [5], [6], [7]. In particular, in the recent paper [8], all inequivalent bialgebra structures on the two-dimensional Galilei algebra were classified and the corresponding Lie-Poisson structures on the group were found.
From the physical point of view, what is really interesting is the central extension of the Galilei algebra. This is because only the genuine projective representations of the Galilei group are relevant in nonrelativistic quantum theory [9]. In the present paper we classify all nonequivalent bialgebra Lie-Poisson structures for the centrally extended two-dimensional Galilei algebra/group. In the two-dimensional case there exists a two-parameter family of central extensions, the parameters being the mass of the particle and the constant force acting on it. We restrict ourselves to the case of free particles, i.e. only the mass parameter is kept nonvanishing.
The content of the paper is as follows. First, we find the general form of a 1-cocycle on the centrally extended two-dimensional Galilei algebra. Then the action of the most general automorphism transformation on such a 1-cocycle is considered and its orbits are classified, which allows us to find all nonequivalent bialgebra structures. The corresponding Lie-Poisson structures on the Galilei group are then found. The whole procedure follows quite closely the one presented in Ref. [10] for the E(2) group and in Ref. [8] for the two-dimensional Galilei group. As a result we find 26 nonequivalent bialgebra structures (some of them being still one-parameter families), 8 of them being coboundary ones.
2 Two-dimensional Galilei group and algebra with central extension and their automorphisms
The two-dimensional Galilei group is a Lie group of transformations of the space-time with one space dimension. An arbitrary group element g is of the form g = (τ, v, a); here τ is the time translation, while a and v are the space translation and the Galilean boost, respectively. The central extension of the resulting Lie algebra is obtained by replacing its second commutation rule, and we arrive finally at the algebra (6). Let us define the centrally extended Galilei group by a global exponential parametrization of group elements, writing g = (m, τ, v, a); this yields the multiplication law (9). The Lie algebra with central extension can be realized in terms of right-invariant fields, calculated according to the standard rules from the composition law (9). Let us now describe all automorphisms of the algebra (6). The group of automorphisms consists of the transformations (11), with, obviously, α1 ≠ 0, γ3 ≠ 0.
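For reference, the (1+1)-dimensional Galilei commutation relations and their mass central extension in one common convention (signs and factors of i vary between references and may differ from this paper's numbered equations) read:

\[ [K, H] = P, \qquad [K, P] = 0, \qquad [H, P] = 0, \]

and, after central extension by the mass parameter m,

\[ [K, H] = P, \qquad [K, P] = m\,\mathbb{1}, \qquad [H, P] = 0, \]

where H, P and K generate time translations, space translations and boosts, respectively.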
The Bialgebra structures on two-dimensional centrally extended Galilei algebra
Our aim here is to give a complete classification of the Lie bialgebra structures for the algebra (6), up to automorphisms. Let us recall the definition of a bialgebra: it is a pair (L, δ), where L is a Lie algebra and δ is a skew-symmetric cocommutator δ: L → L ⊗ L. We can find all bialgebra structures on our algebra. The general form of δ obeying (i) is given by (13), with a, b, c, d, e, f, g, h and j being arbitrary real parameters. From condition (ii) we obtain (14). Eqs. (13) and (14) define all bialgebra structures on the two-dimensional Galilei algebra (6). However, we are interested in a classification of nonequivalent bialgebra structures. To this end we find the transformation rules of the parameters under the automorphisms (11). We are now in a position to classify all orbits of the automorphism group in the space of bialgebra structures. A simple but long and painful analysis leads to the complete list of Lie bialgebra structures summarized in Table 1. We have checked explicitly that all the above bialgebra structures are consistent and inequivalent. It remains to find the coboundary structures (listed also in Table 1).
Conclusions
We have classified all inequivalent bialgebra structures on the centrally extended two-dimensional Galilei algebra and found the corresponding Lie-Poisson structures on the group. The resulting classification appears to be quite rich and contains 26 inequivalent cases, eight of them being coboundary ones. This is in contrast with the semisimple case, as well as with the case of the four-dimensional Poincaré group, where there are only coboundary structures.
Acknowledgment
The author acknowledges Prof. P. Kosiński for a careful reading of the manuscript and many helpful suggestions. Special thanks are also due to Prof. S. Giller, Dr. C. Gonera, Prof. P. Maślanka and MSc. E. Kowalczyk for valuable discussions.
"year": 1997,
"sha1": "8ddc4479b447e836b530074608fb69be5061c5c0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/q-alg/9710028",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8ddc4479b447e836b530074608fb69be5061c5c0",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
14239415 | pes2o/s2orc | v3-fos-license | Lennart Green and the Modern Drama of Sleight of Hand
The broad purpose of this essay is to suggest an approach to sleight of hand magic that looks at its social resonances as a dramatic medium. I outline a modern tradition of sleight of hand, that is, a form of sleight of hand that was self-consciously described as modern by magicians. This tradition takes shape in the mid- to late-nineteenth century, and its stylistic influence extends well into the twentieth century. I argue that this modern style had a specific set of social resonances, which help to explain the power of sleight of hand magic as a form of performance. In particular, modern sleight of hand was intimately intertwined with the relationship between magic and crime, and with a now-unfamiliar distinction between magic and juggling. The point of this exploration, though, is not purely historical. Through it, I want to lay some ground for cultural criticism of contemporary magic. In this, my subject and stimulus is the Swedish card maestro Lennart Green.
LENNART GREEN
Lennart Green cuts an unlikely figure on stage. The Swedish doctor-turned-magician has a portly build, speaks with a thick accent, and muddles his words. An untidy mop of hair hangs over his forehead. At the beginning of his 2005 TED performance, he looks slightly as though he has been squeezed into a suit for the occasion (TED, 2008; all references are to this performance). He sits at a table and rubs an empty tumbler nonchalantly with a handkerchief. 'Cheers', he says, raising the glass now full of beer. This, his first trick, gets a ripple of laughter and applause. We suspect, though, that the reason we missed how he did it was because we weren't paying close attention. This is not, it seems, a slick, master illusionist but more of a friendly con man. Leaning forward and gesturing with his elbows on the table, he looks more like he is trying to flog us something than show us a magic trick.
He recalls, in fact, a mix of stereotypes: perhaps an awkward, office-bound IT worker, or an embarrassing distant uncle who crops up at family gatherings. His outfit brings to mind Ken Weber's advice for the style-conscious magician: Never wear a short-sleeved shirt with a jacket! It is acceptable to wear a short-sleeved shirt with a jacket only if you are a NASA engineer, a high school math teacher, or a member of those few other professionals for whom the description 'nerd' is taken as a compliment. (Weber, 2003, p. 124) He also seems to like his drink, keeping his beer to hand as he continues his performance. He brings out some playing cards, and immediately starts to fumble and drop them. They scatter across the tabletop. His comic timing is unerringly precise, but the cards are all over the place, and we suspect nothing. Until, that is, he starts talking about cheating at poker and claims that he has stacked the deck to deal someone a full house, which raises a smirk. He deals to several imaginary spectators in a ring, until each has a complete poker hand of five. Then he turns over the cards, revealing not only a full house but a string of progressively stronger hands. 'But of course, I will have the winning hand.' He turns over his royal flush.
Lennart Green is, we begin to realise, breathtakingly skillful. Underlying his angular exterior, he seems to have a Zen-like awareness of the position of playing cards as they scatter. Accounting for this awareness becomes more and more difficult. A volunteer shuffles -several times -and the cards are left at all angles in a heap. By the end of his act, his eyes are sealed with electrical tape and his head wrapped in tinfoil, and yet he still manages, gingerly, to fish the whole suit of diamonds out of the deck -in order. And by the time he reaches the royals, all hesitation has evaporated. The cards snap into his outstretched hand: jack, queen, king.
There is a tipping point in his act. The cards are, as usual, in a hectic state. He asks anyone to name a card and someone in the audience calls out the seven of diamonds. He squares and tenses his body, squints his eyes and holds open his empty hand. A card flicks faster than the eye can follow from the deck into his outstretched fingers: the seven of diamonds. The disorder and vagueness of the routine have suddenly been cut short, transformed in an instant into uncanny precision. This is one of his signature moments. It comes mid-flow. It is unexpected. It has no build-up, and no delayed climax. It is a mini-climax amid the fray of his act, a moment so exact and so sudden that it seems to pierce the general chaos and pull everything taut.
Who is this man and what is he trying to tell us? He doesn't exactly teach us anything, at least not any new insights or information. All information is mock information: 'There's a lot of water in beer'. His act is forged out of gesture, manner and timing, like a kind of physical theatre. It has a strange gravity. The figure on stage is a likeable anti-hero, a caricature of something or other, tragi-comic. He creates a mess of possibilities and questions, which seem to tumble out of his hands onto the table top, like the cards, forming haphazard arrangements. This mess is somehow open to an onlooker, like an offering, chaotic yet generous. He pauses: 'Do you see the pattern?' At the same time, though, his performance seems underpinned by an intangible precision. It has, overall, a definite quality, as though all the haphazardness amounts at some level to an exact proposition. And look at his hands. Large, chunky hands, with fingers like blunt chisels, at once hopelessly clumsy and impossibly nimble.
MODERN SLEIGHT OF HAND
Towards the beginning of Jean Eugène Robert-Houdin's seminal textbook on conjuring, Les Secrets de la Prestidigitation et de la Magie (The Secrets of Conjuring And Magic or How to Become a Wizard (1878)), there is a short chapter on 'The Hand'. 'The five fingers', he writes, 'have each their distinctive names. They are, enumerating them in succession, the thumb, the index or first finger, the middle finger, the ring-finger, and the little finger' (1878, pp. 38-9). The book was intended to give practical tuition in conjuring. Robert-Houdin, covering all bases, thought to include some elementary foundation in the hand's anatomy. He adds an illustration of his own hand.
Figure 1
Then, on this apparently unremarkable specimen, he comments:

It may possibly suggest itself to some of my readers that the hand above depicted lacks elegance and grace of form; that the fingers, for example, might be longer and more slender, after the manner of the hands represented by our celebrated painters. [...] My hands are short, I don't deny it, reader, but allow me to tell you that that very shortness is a virtue, if not a beauty.

It has been remarked by a celebrated observer that 'the dexterity of the fingers is in inverse proportion to their length.' Notice, my dear reader, henceforth all the fingers of your acquaintance; see how they accord with the saying I have just quoted, and you will admit that it is strictly correct.

Having laid down this proposition, let me entreat those persons who have been gifted by Nature with long and delicate fingers not to be offended at my preference for short ones, particularly when they remember that everybody is not bound to possess manual dexterity, and if a long hand loses in that particular, it has greatly the advantage in point of elegance and aristocratic appearance. (Robert-Houdin, 1878, pp. 40-1)

Sleight of hand textbooks are an intimate literature. They are preoccupied with minute movements of the fingers, and the manipulation of small objects: how to pass, or seem to pass, a coin from one hand to the other; how to position a deck of cards in the hand for dealing; how to roll a handkerchief. Typically, they contain precise technical diagrams. Such literature was sparse until the mid-nineteenth century; since then it has acquired its classics, and has been one of the main forms of writing produced by magicians (and hence an important chunk of the source material available to historians of magic). The Secrets of Conjuring And Magic was of unprecedented thoroughness.
These books hold a certain intrigue, as they promise to reveal how magic was done. They are equally likely - particularly for a 'lay' reader - to seem banal, because their unending detail seems to convey very little about the context and meaning of magic as a practice and spectacle. In fact, though, the significance of this literature is not immediately obvious. Consider the fact that magicians, as a knowledge community, are unusually dependent on the exclusivity of their technical knowledge. Consider the importance that that places on the teaching, or inheritance, of magic's methods. Magic's 'how-to' literature has carried a peculiar weight, both ideological and pedagogical. It has not only served to pass on technical expertise, it has also been a major form of expression within magic. Furthermore, it has been one of the main ways in which magic's traditions and self-understanding have been reproduced. Books on sleight of hand, therefore, pose a problem of interpretation, or methodology: the importance of this technical literature to the history of magic is not purely technical. It matters, in various ways, that Robert-Houdin should want to describe the anatomy of the hand in the manner of a scientist carrying out a dissection, and that he should worry about how his illustrations jar with the aristocratic sensibilities of 'our celebrated painters'.
Moreover, as the passage quoted suggests, the hands are themselves a site of social distinction. An interesting avenue, which is the concern of this essay, is to investigate how broader social forces at play in performance magic are refracted through its sleight of hand. The macro history of magic - its cultural import, its social dynamics, its ideological resonance - is articulated at a micro level. Viewed with this in mind, magic's technical literature presents a rich record. It is an ethnographic rarity: a detailed inventory of the actions, gestures, and equipment used by practitioners of a performance genre as it discovered its modern forms. Stylistic discernments and distinctions within sleight of hand are bound up in broader value structures with cultural, social and ideological dimensions. An understanding of this connection may, in turn, open up a nuanced reading of sleight of hand magic as a form of performance. This is where I am heading towards the end of this essay, with a discussion of the celebrated twentieth-century magician Cardini and, finally, a return to Lennart Green.

Broadly, I have in mind a social history of sleight of hand, which unpicks its social resonance as a dramatic medium. This essay attempts to advance a particular methodology in the study of performance magic. As such, it takes in only one sweep of material. In what follows, I lay out three interrelated strands of this history. Firstly, I look at the rejection of the term 'juggler' by modern conjurors. This shift was famously announced by Robert-Houdin, but it remained a recurring theme in discussions of sleight of hand into the early twentieth century. Secondly, I look at the subject of crime in magicians' writings. I examine how it was taken up by proponents of magic's modernity and, in particular, in the context of accounts of the sleights used by criminals. Finally, I look at how these themes unfolded in turn of the century vaudeville and music hall. The term 'juggling', in particular, became associated with the fashion for manipulation acts in this period.
NOT A JUGGLER
As will be obvious to anybody interested in the history of performance magic, the terms 'magician' and 'conjuror' were not widely used in their familiar senses until the nineteenth century. Before then, the closest parallel to what we now think of as performance magic was known as juggling. Less obvious, though, is what this shift meant. It is important not to presume that it was a mere substitution of names. Likewise, subsequent assumptions should not be imposed on the figure of the juggler. Most obviously, juggling did not denote, as now, a niche skill associated with the circus involving the throwing up and catching of objects; but nor did it denote a style of magic reminiscent of it. Butterworth, in perhaps the most rigorous dedicated study of juggling, finds that the juggler was routinely identified (in England, at least, from the 13th century onwards) with two forms of artifice: 'confederacy' and 'conveyance' (2005, p. 5). Confederacy meant collusion, and conveyance meant something like sleight of hand. Having said this, it is difficult to generalise about the nature of juggling. The kinds of performance it encompassed, as well as the social position of the juggler, varied as much as magic and conjuring did subsequently. Moreover, juggling bordered on a range of other activities such as song and storytelling; and these associations were stronger still in the case of the related French term jongleur.
One thing is clear, however. From the mid-nineteenth century onwards, performers of magic explicitly distanced themselves from juggling. The most famous instance is from Robert-Houdin's The Secrets of Conjuring And Magic, in which he wrote that '[a] conjuror is not a juggler [jongleur]; he is an actor playing the part of a magician; an artist whose fingers have more need to move with deftness than with speed' (Robert-Houdin, 1878, p. 43). The phrase is often clipped and paraphrased, as for example in The Illustrated History of Magic, where Christopher has Robert-Houdin define 'a magician as an actor playing the role of a man who could work miracles' (1973, p. 7). This rendering highlights a common interpretation of Robert-Houdin's legacy: as Steinmeyer puts it, he reminds us 'that a magic show is a piece of theatre' (2004, p. 17). The original, however, is a more intricate statement of identity. It involves no fewer than five characters - conjuror, juggler, actor, magician, artist - who must be held in some relation to one another. Furthermore, it seems to imply not only who the conjuror is, but how he should move ('with deftness' rather than 'with speed').
Before discussing this statement, though, it must be set in the context of Robert-Houdin's modernising project. This is ground well trodden, so here I simply outline its basic aspects, by way of recent studies (During, 2002; Mangan, 2007; Jones, 2008). Firstly, Robert-Houdin attempted to dissociate conjuring from the occult, witchcraft and black magic. As such, he occupies a special place in the development of what During calls 'secular magic'. Secular magic is presented as illusion, and is supposed to be understood as such by its audience. It is a 'self-consciously illusory magic', which severs or weakens the relationship between performance magic and the supernatural (During, 2002, p. 27). With Robert-Houdin, this relationship is inverted. He worked to establish the conjuror as a man of science, and considered a knowledge of the sciences essential to his craft. His illusions made use of electromagnetism and intricate mechanical and electrical machines, known as automata, which were presented, to a large extent, as scientific marvels. Hence, the spectacle of conjuring acquired rationalist underpinnings.
A second shift concerned the social standing of the conjuror. Robert-Houdin strove to elevate the status of performance magic. Just as he dissociated it from the supernatural and superstition, he also tried to cleanse it of its popular roots in street performance and fairground attractions. He recast conjuring as a respectable form of entertainment, to be performed in theatres. Accordingly, the figure of the conjuror was remodelled as 'a perfectly socialized nineteenth-century gentleman', mirroring the upper-class audience he tried to attract (Mangan, 2007, p. 104). On stage, Robert-Houdin used amiable, educated patter to frame his illusions. He dressed in modest but elegant evening attire, in keeping with contemporary bourgeois fashion. He regarded the long, flowing robes and pointed hats worn by some of his contemporaries as old-fashioned. Thirdly, and relatedly, conjuring was presented as a high art form. It was to be judged, not on the strength of its supernatural claims, but on aesthetic grounds. This key ideological shift was, again, linked to a preoccupation with status. Conjuring was supposed to stand aloof from the crude, immediate gratification provided by street entertainers, and was to be brought in line with bourgeois good taste. The conjuror's principal virtues, therefore, were elegance, naturalism, attention to detail and moderation.
Mangan has done much to link Robert-Houdin's assertion that '[a] conjuror is not a juggler' to his overall conception of modern conjuring. The term 'juggler' denoted an available archetype, and one which, in mid- to late-nineteenth century France, still impinged on conjuring. Juggling bore connotations which Robert-Houdin sought to distance from himself and from his profession. The archetypical juggler was likely to make a living on the street. He had an air of buffoonery, and entertained the public with brazen displays of sleight of hand. His signature piece was the cups and balls. Many depictions of this act from the time, and before, show someone at the margins of the frame pick-pocketing an onlooker, highlighting that the juggler was likely to be viewed as an accomplice to criminals and confidence tricksters (Dawes, 1979, p. 125; Mangan, 2007, p. 62). These associations made juggling an obstacle to the conjurer's social aspirations. In rejecting the term, therefore, and in likening the conjuror to an actor, 'Robert-Houdin is making the point that his spiritual home is in the heart of the mainstream theatrical culture. It is, on one level, part of this larger project, to establish the conjuror as a "respectable" kind of entertainer' (Mangan, 2007, p. 103).
To this analysis I would like to add a further layer. Whilst the connection between conjuring and acting evinces a social dynamic, its immediate upshot, for Robert-Houdin, was a particular way of moving: the conjuror is 'an artist whose fingers have more need to move with deftness than with speed' (Robert-Houdin, 1878, p. 43). The counter-distinction between juggler and conjuror signified a distinction between different kinds of sleight of hand. The immediate setting of the remark gives a further idea of what this distinction was. It is preceded in the text by a brief commentary on two terms that Robert-Houdin also considered inappropriate to describe conjuring:

Escamotage will always recall to the mind the 'cup-and-ball' tricks whence it derives its origin, and referring specially, as it does, to one particular feat of dexterity, suggests but an imperfect idea of the wide range of the wonder-exciting performances of a magician.
Prestidigitation seems to imply, from its etymology, that it is necessary to have nimble fingers in order to produce the illusions of magic, which is by no means strictly true.
A conjuror is not a juggler; he is an actor playing the part of a magician; an artist whose fingers have more need to move with deftness than with speed. I may even add that where sleight-of-hand is involved, the quieter the movement of the performer, the more readily will the spectators be deceived. (Robert-Houdin, 1878, pp. 42-3)

Both escamotage and prestidigitation implied something about the conjuror's dexterity that Robert-Houdin sought to avoid. Escamotage suggested an over-reliance on a specific branch of sleight of hand magic, and hence failed to capture the breadth of a conjuring performance. Prestidigitation suggested speed of execution, or nimbleness. Hence, the passage implies, whilst both terms may have been correctly applied to juggling, they do not capture the kind of dexterity proper to the conjuror. His movements are smooth, naturalistic and discreet, and, furthermore, they are kept in balance with the other aspects of his art.
It is primarily in this form - as a social distinction made through a distinction between different styles of sleight of hand - that the rejection of juggling continued to appear in conjuring literature into the early twentieth century. The stylistic distinction is explicit, the social distinction implicit. Take for example C. Lang Neil's The Modern Conjuror and Drawing Room Entertainer (1903). He reiterates Robert-Houdin's stylistic preferences, noting the '[g]racefulness of movement and gesture' essential to conjuring (1903, p. 25). The proper manner of the conjuror, we are told, can be 'summed up in the one word natural' (1903, p. 23). Juggling, meanwhile, is over-reliant on speed:

"It is the quickness of the hand deceives the eye" was a maxim correctly applied to the performances of the earlier conjurors, whose skill was of the juggling order. [...] But as descriptive of the secrets of conjuring and magic (I always use the word in its natural, not the supernatural sense) it is entirely erroneous. (Neil, 1903, p. 19)

Neil elaborates on the distinction via an illustration of juggling: 'The performer who takes a card or coin and apparently throws it into space, immediately showing the hand which held it quite empty both back and front, has astonished his audience - he has not deceived them' (1903, p. 19). When a card or coin is juggled, it simply leaves the audience at a loss: 'They have not been led to think it is anywhere. They merely wonder what he did with it and admire the quickness of the manipulation which made the object disappear without their being able to follow it' (1903, p. 19). Conjuring, by contrast, consists in

the performer's audience being led to believe that certain definite actions have been carried out before them, while they presently discover that the results of those actions are something directly contrary to any natural law. [...] It is thus the mind of the spectator which must be deceived. (Neil, 1903, pp. 19-20)

On this account, a conjuring trick must make an exact proposition, a 'definite' claim. Cause and effect should be clearly presented to the audience, then confounded. In juggling, the proposition is incomplete. The juggler intimates that something has happened, without making a positive claim. He plays on baser instincts, and never penetrates the prized territory of the conjuror: the rational mind of the spectator. To grasp the full extent of cause and effect, that spectator must, it goes without saying, have an 'educated mind' (Neil, 1903, p. 20). This reliance on a rational intellect is a recurring theme, as, for example, in Edwin Sachs' Sleight of Hand (1946 [1877]). Sachs reiterates the chronological succession from juggling - which belongs to a bygone era - to conjuring - which describes contemporary magic. He cites Chaucer, who describes a juggler producing a windmill from under a walnut shell:

There is doubtless some slight exaggeration in this statement, or else modern wizards are far behind those of early days - a hypothesis I cannot accept. In the superstitious lands of the East, jugglery was doubtless at the bottom of the many manifestations that were mixed up with religion, and the wily priests made the best (or worst) uses of its influence on the uncultivated mind. (Sachs, 1946, p. 3)

Juggling, then, is intertwined with, and takes advantage of, superstition. It belongs to a superstitious age (and, in this case, the superstitious 'East', though that's another story). With regard to conjuring, meanwhile, Sachs upholds the stylistic norms laid out by Robert-Houdin, who is credited with 'elevating the art in the eyes of the public' and 'investing it with nearly all that it possesses of the graceful' (1946, p. 3). Evening dress is 'now conventional', and the student is advised to acquire a 'neat method of manipulation' and 'suavity of manner' (1946, pp. 3-4).
The juggler-magician distinction reappears in a late, ponderous form in Our Magic, by Nevil Maskelyne and David Devant (1912). This text, from the heart of the British magic establishment, again distinguishes magic by its reliance on the mind: 'It will be found that, so far from being bound up in jugglers and paraphernalia, the true art in magic is purely intellectual in character [...]' (1912, p. 2). Manual labour, meanwhile, can be left to the juggler (who's better at it anyway):

[F]rom the standpoint of mechanical art, the juggler's attainments are far higher than those of the magician. The latter can only take a higher place by realising that he has to depend for success upon his brains, rather than upon his hands. In manipulative skill, he is hopelessly outclassed by the juggler. The amount of practice and physical training he requires cannot in any way be compared with that which is needed by the juggler. If, therefore, the Normal Artist in magic insists upon regarding his art as a mere congeries of mechanical accomplishments, he must be content to occupy a position inferior to that of a common juggler, and immensely inferior to that of a skilled mechanic. (Devant and Maskelyne, 1912, p. 20)

Conjuring is not juggling. With Robert-Houdin, a set of associations come together. The juggler is the bugbear of magic's past, on the wrong side of history, on the opposite side of a dividing line between civilisation and barbarism, modernity and pre-modernity. This line also divides high from low, the street from the theatre, and art from vulgar entertainment. The juggler mixes in different circles, tainted by criminals and humbugs. These associations are, in turn, embodied in different ways of moving. The conjuror's movements are seamless, elegant, natural, economical and open; the juggler's are quick and shifty. Furthermore, whilst dexterity is the juggler's only weapon, the conjuror keeps his sleight of hand in balance with the other aspects of his art. It is not a means to an end. He aims to deceive rather than impress, and relies on the mind rather than the hand - which denotes, of course, a particular style of sleight of hand.
MAGICIANS ON CRIME
The juggler, then, provided a counter-distinction, both social and stylistic, for the modern conjuror. An abiding preoccupation with status was expressed through fine distinctions, within sleight of hand, between the conjuror's movements and juggling. Another distinction, running parallel within the same modern tradition, was that between magic and crime. We have touched on this already, as petty criminality was one of the associations colouring the juggler's reputation. However, crime and criminals form a far more extensive literary theme. Magic's literature contains a telling sub-strand of works about criminals and their methods.
It is worth remembering that one of the essential texts on card magic from the turn of the century, S. W. Erdnase's Artifice, Ruse and Subterfuge at the Card Table: A Treatise on the Science and Art of Manipulating Cards (1902), devoted two thirds of its pages to methods for cheating at cards for money. The book details a long series of sleights which could be used by magicians and cheats alike. On the subject of crime, Erdnase takes an amoral stance, and makes no secret of the fact that his insights were learned in 'the cold school of experience' (1902, p. 14). The author - who wrote under a pseudonym - had no reputation to preserve. Within the modern tradition I have been discussing, however, the subject of crime was more often an opportunity for the gentleman conjuror to moralise and demonstrate his integrity. Robert-Houdin is a case in point. The stated aim of his Card-Sharpers: Their Tricks Exposed or The Art of Always Winning (1891) was to prevent cheats from exploiting the well-to-do public: 'I have myself an excellent opinion of the respectable classes, and hope that the reading of my book will inspire no thought beyond that of guarding themselves against the tricks of sharpers' (1891, p. vi).
Robert-Houdin's text sets a number of precedents. Firstly, it assumes that, by the nature of his profession, the conjuror has a privileged vantage point onto the world of crime. Although the book's primary concern is to expose how criminals operate - it contains long sections on sleight of hand - Robert-Houdin's claim to expertise is far more extensive. He discourses not only on the methods of criminals, but on their lives and habits, as well as the social ills they cause. Secondly, however, the author is careful to preserve his own good standing by specifying how he received this illicit knowledge. Not for Robert-Houdin was it gained in 'the cold school of experience'. He begins the book with a colourful account of how he went in search of a master of cheating with sleight of hand. He arrives at the man's house to find it disgusting and smelly. The crook then emerges from his bedroom and tries to rob his visitor at knifepoint. A few months later, we learn, he was arrested. Little more than an inquisitive naïveté, it seems, accounts for Robert-Houdin's contact with criminal elements. This episode, moreover, seems to have put him off, as thereafter his riskier research is carried out via an acquaintance, 'a young man whose life, although tolerably respectable, was passed in eating houses and gambling-places' (1891, p. 13).
Thirdly, the criminal underworld is divided into types. Robert-Houdin's categorisation makes room for higher and lower kinds of criminal. The world of cheats, whom he calls Greeks, can be divided three ways:

Taken collectively, the Greeks do not present any marked type; it would be difficult to portray their facial appearance, because the species is so numerous and varied. I, however, think it necessary, in order to better describe them, to divide the Greeks into three categories: 1st. THE GREEK OF THE FASHIONABLE WORLD. 2nd. THE GREEK OF THE MIDDLE CLASSES. 3rd. THE GREEK OF THE GAMBLING HELL. (Robert-Houdin, 1891, p. 21)

As we descend the social ladder, the skill and dexterity of the criminal decreases. The Greek of the Fashionable World shows great refinement in sleight of hand and uses sophisticated methods. The Greek of the Gambling Hell, meanwhile, is to the higher classes of criminal 'what the whining beggar is to the virtuoso' (1891, p. 34). The quality most essential to this lower variety is 'the capacity to smoke and boose without being affected by either' (1891, pp. 35-6). The degree of skill, therefore, mirrors the Greek's social standing:

The lower type of Greeks are nearly all alike; they are, for the most part, wretches that idleness and debauchery have driven to ask from cheating what they will not attempt to win by honest industry. Their tricks are usually as coarse as the people to whom they address themselves. (Robert-Houdin, 1891, p. 35)

Why write a book like this? Erdnase said he did it for the money. If Robert-Houdin simply wished to distinguish and distance himself from criminals, or if he was driven purely by moral indignation, we would still have to explain his barely disguised admiration for The Greek of the Fashionable World. This class of cheat has extraordinary powers of perception, is finely attuned to human behaviour, and is explicitly compared to the conjuror:

To these eminent qualities of mind, the Greek of the fashionable world unites a profound knowledge of the most difficult tricks of conjuring. Thus no one knows better than he how to draw the card or break the cut, to use or place aside concealed cards, etc. (Robert-Houdin, 1891, p. 24)

This comparison suggests that what is at stake is not only the representation of criminals, but the representation of the conjuror in relation to them. In a recent study, Mangan considers the methodological challenges posed by the autobiographical writings of magicians, which are, he notes, prone to lying and self-aggrandisement. He concludes that '[t]he most realistic way to think about magicians' own accounts of their lives, careers and tricks is to consider them as extensions of their stage acts - as a particular kind of "performative writing"' (Mangan, 2007, p. xix). This is fruitful, as it suggests that we see the subject of crime as a platform on which magicians could elaborate their own stage identity. It is, furthermore, a platform with a strong dimension of class.
The relationship between conjuring and crime can be understood as a dramatic tension within modern conjuring itself, which magicians' writings on crime actively encourage and sustain. In Card-Sharpers, the tension is carefully modulated. The text is a finely tuned piece of social positioning. First, the author evokes a world, a sort of class-based mythology, which his readers - the text presumes - know little about. There is the risk of a dangerous association. 'Cards, dice, and dominoes' can, in the hands of a criminal, be 'very dangerous things', but they are also the tools of the conjuror, who, by implication, has danger in his hands, though he refrains from abusing it (1891, p. 28). Then, Robert-Houdin openly flirts with the idea that the upper-class cheat and the conjuror think and act alike. The refinement of this kind of criminal earns a nod of recognition, like a glance exchanged among equals. Finally, though, when we come to the poorest, lowest kind of cheat, the tension is broken off. The Greek of the Gambling Hell is beyond the pale, morally and aesthetically repugnant: '[i]t is no longer the art of the conjurer; it is trickery without a name' (1891, p. 35).
Several of these themes are reiterated in 'Sharps and Flats': A Complete Revelation of the Secrets of Cheating at Games of Chance and Skill (1894) by John Nevil Maskelyne. His contribution to this literature is more staid than Robert-Houdin's, adopting a more resolute moral stance: '[t]hat the condition of affairs herein revealed should be found to exist in the midst of our boasted civilisation is a fact which is, to say the least, deplorable' (1894, p. 4). For him, the moral disease of cheating extends even to straight gambling, which he considered 'essentially dishonest' (1894, p. 315). The central purpose of the book, meanwhile, is to protect the upright reader: '[m]y self-imposed task, then, has ever been to endeavour to educate the public, just a little, and to enlighten those who really seek the truth amid the noxious and perennial weeds of humbug and pretence' (1894, p. x).
In spite of this high moral tone, the theatrical aspect of the writing is still apparent. Maskelyne's career, which was strongly influenced by Robert-Houdin, had a strong rationalist bent. His opposition to spiritualism was the theme of his earliest stage shows, and became a lifelong preoccupation, as well as a considerable source of publicity (Dawes, 1979, pp. 164-5). In 'Sharps and Flats', he implies that his exposé of crime should be seen as part of the same dramatic crusade for truth:

This book, then, is but another stone, as it were, in an edifice raised for the purpose of showing to the world the real nature of those things which are not really what they appear to be, and practices with the very existence of which the average man is unacquainted. (Maskelyne, 1894, p. x)

Meanwhile, although Maskelyne doesn't categorise criminals as thoroughly as Robert-Houdin, he doesn't shy away from pronouncements on cheating as a social phenomenon. On the origins of crime, he writes: 'To my mind, the only hypothesis which in any way covers the facts of the case is that some men are born to crime. It is their destiny, and they are bound to fulfil it' (1894, p. viii). Hence the magician's privileged vantage point onto the criminal classes is preserved. Again, though, he specifies how a man of his standing came into possession of the nefarious information contained in his exposé. He had the help of 'a friend who desires to be nameless'. Not a criminal, though, but a 'gentleman' in 'the assumed guise of an English "sharp"' (1894, p. 5).
The same tropes appear again in Harry Houdini's The Right Way to Do Wrong: An Exposé of Successful Criminals (2007 [1906]). In this text, the identity of the performer looms large. In the preface Houdini assumes a theatrical tone of address:

There is an under world - a world of cheat and crime - a world whose highest good is successful evasion of the laws of the land. You who live your life in placid respectability know but little of the real life of the denizens of this world. [...] Of the real thoughts and feelings of the criminal, of the terrible fascination which binds him to his nefarious career, of the thousands - yea, tens of thousands - of undiscovered crimes and unpunished criminals, you know but little. (Houdini, 2007, p. 3)

With Houdini, the simmering tension between conjuring and crime becomes a raging contradiction. He opens up a panorama of the unknown, a montage of criminal ways and vices, which he is uniquely placed to understand. He makes his moral position clear at the outset: 'to those who read this book, although it will inform them "The Right Way to Do Wrong," all I have to say is one word and that is "DON'T"' (2007, p. 11). Likewise, he makes it clear that he has not learnt the methods of criminals first hand, but by conversing with 'the chiefs of police and the most famous detectives in all the great cities of the world' (2007, p. 4). At the same time, though, he freely indulges in this criminal connection. Each chapter of the book deals with a different type or aspect of crime, until we reach the final chapter, entitled 'Houdini' - who is, by inference, the ultimate criminal. There he regales us with his various feats, including his publicly staged jail escapes. With a characteristic egoism, Houdini has it both ways, both defying and deferring to the law.
The thematic of crime in conjuring literature is writ large in Houdini's text. His categorisation of the criminal classes is playful and uninhibited. He introduces us to the professional burglar ('a man of resources and daring'), the pickpocket ('a natural rover'), the 'Bunco' man, the forger, and the 'Fair [female] Criminal' (2007). At the top of the ladder is the 'Aristocrat of Thievery', at the bottom the 'Beggar' or 'Dead Beat', who, 'in ninety cases out of a hundred [...] is a cheat and a fraud' (2007, p. 44). Above all, Houdini emphasises that criminals penetrate every layer of society in a way calculated to put the well-to-do reader on edge:

Do you see that well-dressed, respectable-looking man glancing over the editorial page of the Sun? You would be surprised to know that he is a professional burglar and that he has a loving wife and a family of children who little know the 'business' which takes him away for many days and nights at a time! (Houdini, 2007, p. 7)

Crime, here, is an unsettling spectacle. It cannot be held at a distance, morally or otherwise. Houdini claims to pierce the veneer of bourgeois society, and uncover a criminal universe lurking beneath the surface. At the same time, though, he is not too unsettling. He is subversive only within limits. The norms of propriety are not brought into question, still less the law. Criminals are among us, they threaten us, but 'us' remains the operative word. Unsurprisingly, then, whilst the plague of criminality infiltrates the middle- and upper-classes, it remains unproblematically associated with the lower classes. Of the 'humble criminal', for example, he writes: '[n]o avarice, but simple laziness keeps these thieves dishonest' (2007, p. 9). Overall, Houdini's free-hand typology of criminals is thick with class-based caricature and noxious generalisations. At the end of a chapter on burglary, he drops in a quaint illustration, which could almost be passed over within the lively flow of the text. He shows a diagram of 'A Criminal Hand'. Underneath it is the following caption: 'The ordinary criminal's hand has a peculiarly rough shape, the thumb being very plump and short, while the fingers are uneven and heavy. The small finger is turned inward, and bluntness is the hand's chief characteristic' (2007, p. 18).
Figure 2

MANIPULATION

Houdini's fraught rhetoric gives us a clue as to what happened to magic's modernity in American vaudeville and in its British counterpart, music hall. Houdini was one of the best-paid vaudeville performers, and one of a host of acts which toured on both sides of the Atlantic. His knack for having it both ways - his ability to position himself both within and above the law - illustrates pervasive tensions within both vaudeville and music hall between the respectable and the illicit, or between high and low. It also suggests how magicians could take advantage of such tensions, exploiting them, inflating them, whilst remaining oriented towards the high.
Speaking broadly, the characteristic ambivalence of these early forms of mass-entertainment was linked to upward class dynamics. Over the course of their heydays, both music hall and vaudeville distanced themselves from their variegated popular origins and working class roots, and increasingly appealed to middle class audiences. Both were subject to moral censure, and both attempted to reform their images according to bourgeois sensibilities. Accordingly, variety theatre was torn between different class loyalties. Which way it tended to tip in a given case is a matter for debate (see e.g. Bailey, 1994; Faulk, 2004; Mintz, 1996). Provisionally, though, within magic we can note a general heightening of the tensions underlying the tradition of the gentleman magician. With Houdini, in his performances as much as his writings, a long-standing connection between magic and crime was thrown into relief. Likewise, one could think of how the patriarchal subtext of this tradition was dramatised by the cutting-a-woman-in-half illusion, pioneered by P.T. Selbit and quickly imitated (this was also, taken literally, a crime enacted on stage). Perhaps unsurprisingly, in this context, juggling also made a comeback. The term became bound up in new stylistic developments.
The presentational innovations of the variety 'specialists' are well known. They served a market for short, focused acts. Typically, a performer would develop a niche ability, designed to make a unique contribution to a variety bill. Such acts could last as little as eight minutes, and would be devoted to a particular branch of illusions. Many magicians built their careers on specific props, such as silks, watches or billiard balls (Christopher, 1975). At the same time, the pace of magic acts increased, and an increasing number of magicians performed without speaking, relying on visible cues and striking visual routines. Finally, manipulation - that is, effects produced purely by controlling objects with sleight of hand - came into fashion. Such effects were, necessarily, quite stark and small-scale. A manipulation act would typically involve the repeated appearance and disappearance of small objects such as cards, coins or cigarettes. The category of manipulation, though, also encompasses flourishes, or dramatic displays of dexterity, which produce no illusion as such.
Speed and an over-reliance on dexterity were characteristics that had traditionally been disdained in juggling. Furthermore, pure manipulation often produced the kind of stunted illusion that was considered the juggler's domain. When a manipulator makes an object disappear, the spectators 'have not been led to think it is anywhere. They merely wonder what he did with it and admire the quickness of the manipulation [...]' (Neil, 1903, p. 19). Hence, T. Nelson Downs, one of the most successful sleight of hand acts in vaudeville, who specialised in coin manipulation, warned of the renewed threat that juggling posed to his profession. The Art of Magic, which he co-authored with John Northern Hilliard, reaffirmed the distinction between conjuring and 'the juggling order of sleight of hand' (1980 [1909], p. 17):

The last decade was devoted to manipulation and specialization. Kings and emperors and dukes and panjandrums of cards and coins, monarchs of eggs and handkerchiefs, czars of cabbages and billiard balls sprung up like mushrooms. Magic degenerated into a mere juggling performance. Dexterity was paramount and the psychological side of the art neglected. Mind gave way to matter. (Downs, 1980, p. 12)

The traditional cornerstones of refinement are invoked with renewed urgency: moderation, naturalism, smoothness, distinctness and the intellectual character of magical deception. Crucially, magic should convey more than 'mere rapidity of movement' (1980, p. 13). Dexterity for its own sake must be rigorously curbed:

[...] after the desired degree of dexterity is attained the student should not, in the vanity of his achievement, exhibit his dexterity and boast of the rapidity with which he can execute the various movements. It is not quickness of the hand that deceives the eye, as spectators so fondly imagine. The modern conjurer depends for success on a more adroit and more permanent foundation - psychology. The cunning hand works in harmony with the active mind, and by means of both mental and physical adroitness the spectators are deceived and mystified. The really expert performer, however, does not prattle of his dexterity. He lets art conceal art. (Downs, 1980, pp. 17-8)

For Downs, the temptation to 'exhibit' sleight of hand was rife. Magic had, then more than ever, to guard against degenerating into a mere display of mechanical ability. Against this tendency, he invokes two traditional correctives: art and the mind. In the second part of Our Magic, Devant shows a similar concern. As a performer who straddled the divide between the old school of stage magic and music hall, Devant would have been acutely sensitive to the interplay between high and low. He, like Downs, disregards manipulation for its own sake:

Catching cards from the air, or rather, appearing to do so, and making them vanish, one at a time, from the finger tips, are also effects much in vogue. They are apt to appear akin to the feats of jugglery often exhibited by conjurors, such as throwing cards boomerang fashion, or spreading them deftly along the forearm, springing them from hand to hand, and various eccentric shuffles, which can hardly be called feats of magic. In our opinion they are incomplete; they may impress the onlooker with the fact that the card manipulator is very clever, very dexterous, but the feats convey no mystery, and all idea of watching a real magician is destroyed by such diversions. (Devant and Maskelyne, 1912, p. 275)

The 'effects much in vogue' would have referred to the craze for card manipulation in variety theatre. Although Devant disapproves of '[c]atching cards from the air' in this manner, he later describes an ostensibly similar feat in which he produces not cards but billiard balls at his fingertips. How does he distinguish what he is doing from jugglery?
It will be noticed that if the body is twisted to the left without altering the position of the hand holding the ball the performer will naturally show both sides of the hand as well as the ball and it will be obvious that nothing but the ball is in the hand. When a second ball appears suddenly beside it whilst the conjuror holds his hand thus outstretched, the full length of his arm from his body, and when the conjuror further proves that they are both solid ivory balls by knocking them together then indeed we have a surprise which savours of real magic. (Devant and Maskelyne, 1912, p. 314)

The threat of juggling is set at bay by a deliberate clarity. The act of deception must have every outward appearance of transparency. No rapid or angular movements should arouse suspicion. The hand reaches outwards, extracting a moment of clear and distinct impossibility from the fray of manipulation. The body provides no cover. Only then, and only through a heightened attention to detail (the sound of the balls knocking together), can the performer achieve - not the full impression of real magic but - 'a surprise which savours' of it. The strain in this distinction is palpable. Juggling, here, is an intimate anxiety. It is a temptation at the heart of sleight of hand magic, something to be reeled in and controlled.
On this note, I would like to turn from text to the act of performance. As should already be clear, the tension between magic and manipulation, and the other distinctions I have discussed, provided fertile ground for magic acts. In what remains of this essay, I consider how these themes have played out on stage. Among the most accomplished sleight of hand performers of the vaudeville era was Richard Valentine Pitchford, who is better known by one of his stage names, Cardini. He provides one of the richest case studies of a manipulation act from this period, owing to a 1957 video recording (National Broadcasting Company, 1957). Although Cardini had, by then, long since shifted to other venues, his repertoire was developed as he climbed through the ranks of vaudeville, first in Australia and New Zealand, and then the United States in the late twenties as the genre was in decline.
His act was built on character. On the NBC tape, he appears in a top hat and an opera cloak, with a cane tucked under his arm. Whilst manipulating cards, he wore a pair of white gloves, which presumably added to the difficulty of the sleights, but also served to highlight their refinement. His persona had aristocratic overtones, and a large element of the upper-class English toff, as suggested by his monocle. On close examination, though, he was not entirely genteel. His billing as 'The Suave Deceiver' had a hint of irony. He appeared tipsy on stage, pausing by a lamppost during an evening out, or stumbling in after a trip to the opera. The drink seemed slightly to have got to his head, making him smug and irritable, and he would only just manage to maintain appearances. A close parallel to Cardini's act can be found in the British swell song. 'Typically,' writes Bailey in his analysis of this sub-genre in early music hall, 'the swell was a lordly figure of resplendent dress and confident air' (1986, p. 49). He was recognisable by his top hat and his penchant for Champagne. Whilst, in some variations, the swell was fashionable and upright, he could also be indulgent and raucous, a lad about town with more money than sense. At one extreme, he was a counterfeit, an imposter in the upper-classes, assuming their airs and graces but barely disguising his coarser habits:

the term swell carried an early suggestion of the bogus, particularly in the appellation 'swell mob', denoting a class of pickpockets who dressed in style to escape detection as they mingled with their fashionable victims. But the sham swell was more commonly registered as a social rather than a criminal menace. (Bailey, 1986, p. 55)

This was a controlled undertone, though, as usually the swell would combine caricature with glamour. He was someone to be admired, an object of aspiration. Cardini, like the swell, trod a fine line between mockery and homage, but gravitated none the less towards homage. This becomes clear at the end of the NBC video. Rather than stumbling off stage in the midst of a drunken haze, Cardini exits with steadier step and a wry smile. He turns once more to the audience and gives a knowing tip of his hat, as though to confirm a mutual acknowledgement that his antics are in jest and his style unquestionable (Bailey writes about the 'knowingness' of music hall (1994)).
How did sleight of hand figure? Cardini belongs to an interesting lineage in magic in which the effects, rather than seeming to spring from any magical capability, seem to happen to the performer. Vaudeville magicians exploited the potential of this perverse style of magic for physical caricature. Mr Hymack, 'The Human Chameleon', for example, whom Cardini knew and admired, had an act in which items of clothing - his gloves, his bow tie, his top hat - would suddenly change in colour or size, upsetting his well-groomed composure. Similarly, Cardini portrays a gentleman at his leisure, who is unexpectedly set upon by cards, cigarettes and billiard balls. These appear incessantly at his fingertips in spite of his attempts to shake them off. He becomes grouchy and flustered. Fisher, Cardini's biographer, captures the tone of these trivial apparitions: 'playing cards were a nuisance like wasps at a picnic, cigarettes their own persistent will-o'-the-wisp, and billiard balls, whatever their colour, as irksome as so many pink elephants' (2007, p. 23).
Cardini's sleight of hand seems involuntary. It works through him, in spite of him, against him. One of his dramatic masterstrokes is the way his monocle keeps dropping from his eye. To concentrate, to see clearly, and to sustain his image, he needs it in, but as he raises his eyebrows in surprise, it falls out. His act was a study in vanity: the vanity of keeping up appearances, and of a mind struggling, in vain, to keep up. There is one moment during the NBC video, though, which seems to buck the trend. Having pulled a billiard ball from the air, he twirls it at his fingertips, shifting it from finger to finger at a lightning pace. His hand gyrates wildly. After a few seconds, his movement slows, and the ball seems to defy gravity for a moment, sliding down his vertical index finger. This sequence is the only prolonged, overt display of skill in his act.
Figure 4
This inconsistency, though, brings us to the crux of the performance. Sleight of hand is done both by him and in spite of him. His frenetic hand is another apparition, like the billiard ball itself. It takes over from his mind. This is simply to say, though, that sleight of hand represents another side of his nature. It is like an addiction; it seizes his body whilst his mind is idle. The drink has put him off guard, and he is beset by temptations. Cigarettes keep appearing, in spite of his attempts to discard them, until, at one point, he has one in each hand and another appears between his lips. The sequence in which he twirls and floats the billiard ball is of a piece with the performance as a whole: it is a moment of indulgence, of succumbing to temptation. Sleight of hand, in short, is portrayed as a vice, as excess.

By the available criteria, what Cardini does during his momentary lapse into overt dexterity is juggling. It is an overt display of dexterity for its own sake. His whirring fingers seem to taunt the conjuror 'whose fingers have more need to move with deftness than with speed' (Robert-Houdin, 1878, p. 43). Even when his movements slow down and the ball floats, he falls short of those 'definite actions' resulting in 'something directly contrary to any natural law', which are proper to conjuring. As Neil would have it, we 'merely wonder what he did with it' (1903, pp. 19-20). Cardini's spasm of virtuosity is a victory of the mechanical over the intellectual. The magician, remember, 'can only take a higher place [than the juggler] by realising that he has to depend for success upon his brains, rather than upon his hands' (Devant and Maskelyne, 1912, p. 20). With Cardini, the hands take over, his mind dulled by drink. (Of course, this is to say nothing of the actual intellectual demands of his act, but then again there is no reason to believe that conjuring's prejudices against jugglers ever held.) As we have seen, the term juggling was used in relation to variety theatre to denigrate manipulation. Cardini, though, was keen to define himself, as Fisher points out, 'as a manipulator first and a magician second', suggesting an interesting strand of manipulator's professional pride (2007, p. 186). However, his act was not as much of an about-face as this suggests. The traditional dichotomies remain in place: conjuror-juggler, mind-hand, dexterity-psychology, moderation-excess, high-low. Cardini only complicates matters by straining the tensions between them. Meanwhile, the whole roster of accompanying social tensions are there in the mix: theatre-street, fashionable-vulgar, leisure-work. Even the frisson between magic and crime lingers in the background, as it did with the swell. The vast quantity of playing cards he has about his person suggests a gambling habit, crooked or otherwise.

The manipulation of cards wearing gloves, then, was improbable in more than one sense. Even the hands that did the juggling were dressed in gentlemen's clothing. Gloves were a contradiction in terms, suggestive of a split identity. Cardini's act provides a potent example - the more rewarding for a good video record - of how the internal stresses and social resonances of the modern drama of sleight of hand manifested on stage. He is, needless to say, only one example, but one particularly redolent of the distinctions and nuances contained in that tradition. A close reading of Cardini's act suggests the ultimate rewards of the methodological approach to magic's literature I have been pursuing. Though the social history of sleight of hand is bound up in texts, full of technical detail - in the archive, so to speak - it has the capacity to return us with renewed insight to the performance of sleight of hand magic.
'AS THOUGH BEELZEBUB WERE HARD ON HIS HEELS'
The disparagement of juggling outlived the old meaning of the word. In the introduction to Expert Card Technique: Close-Up Table Magic, for example, first published in 1940, Jean Hugard and Frederick Braué write:

The performer who constantly riffles the ends of the pack, who rushes through his feats as though Beelzebub were hard on his heels, whose movements are quick and jerky, is defeated before he starts, for his spectators always are conscious of the fact that he is employing sleight of hand; his very action betrays the fact. (Hugard and Braué, 1974, p. xx)

By now this advice should sound familiar. It points to the continuing influence, at a stylistic level at least, of the modern school of sleight of hand through the twentieth century. The basic virtues of this modern style were gracefulness, clarity, succinctness and naturalism. Above all, it was distinguished by its moderation. It was not supposed to be an end unto itself; its aim was to engage the rational mind of the spectator. This essay has attempted to recognise some of the peculiar social resonances of this modern style. This has involved holding certain large themes in relation to certain small ones. The epochal shift from juggling to conjuring, the emergence of magic's modernity, the historical relationship between magic and crime, and the social antagonisms of turn of the century variety theatre were all played out on the terrain of sleight of hand.
The modern style I have outlined was - in spite of its seamlessness - underpinned by a cluster of social tensions. It courted an ambiguous relationship to criminals and their methods, both distancing itself from them and taking on some of their allure. Juggling, meanwhile, served as a counter-image, embodying, variously, the outdated, the inexpert, the extravagant and the vulgar. The juggler's movements were characterised as quick, jerky, ostentatious and suspicious, whilst the conjuror's were supposed to be natural and modest. Crucially, however, these tensions were not banished from modern magic at its inception. They continued within it, at varying degrees of intensity. Modern sleight of hand was a dramatic medium shot through with ambivalence. Its conflicting tendencies - at once stylistic and social - explain something of its power as a form of performance.
If, however, modern sleight of hand had, in its inception, a peculiar set of social resonances, then its perseverance begs the question: how does it transpose into different contexts? How does it translate, or mistranslate, across time? Have its aesthetic norms acquired different meanings? Have they come to signify, for instance, a nostalgia for magic's past? Or do they amount to no more than an empty formalism? Whatever the answer (there is no one answer), performance magic remains, to a striking extent, indebted to its modern forms. Commenting on the act of card throwing, which was performed by Isaac Fawkes in the eighteenth century and is still done today, During notes that '[m]agic has a slow history' (During and Najafi, 2007). This may be especially true of close up magic in which traditional props are still pervasive: cards, cups and balls, money, rope, linking rings, handkerchiefs, and so on.
More to the point, the stylistic legacy of modern magic can still be felt. There is, among magicians, a recognisable middle ground: polite, smooth, natural. More interesting, though, are those at either extreme of the modern tradition. It has its cultivated adherents - say, Michael Vincent or Guy Hollingworth - and its outliers. Examples of the latter that come to mind are Tommy Cooper, the close up work of Danny Sylvester, or recently Yann Frisch; all, in different ways, make a play of awkwardness, rapidity, angularity and confusion. Both extremes are, as we have seen, equally traditional. Formal tensions persist, then, with an indeterminate echo of the social tensions that once ran through them.
Against this backdrop, we can begin to make sense of Lennart Green. The appeal of his act becomes more intelligible when his style and persona are considered partly as a reaction to something. It somehow matters that his shifty, cack-handed manner would have been, for Robert-Houdin, unspeakably vulgar:

It is not unusual to see conjurors affect a pretend clumsiness which they call a 'feint.' These hoaxes played on the public are in very bad taste. What should we think of an actor who pretended to forget his part, or of a singer who for a moment affected to sing out of tune in order to gain greater applause afterwards? (Robert-Houdin, 1878, p. 34)

The drunk and the petty criminal rear their heads again in Lennart Green's act. He demonstrates various moves which could be used, apparently, to cheat at the card table. He places a card in front of him, and has a higher one called out. He readies himself, and, with a snap, the card on the table has changed to the one named. Or again, after having the deck shuffled, he riffles the cards onto the table, lunges with his hand, and pulls out a complete royal flush. Meanwhile his glass of beer - which he fills as his first trick - is on hand throughout. 'When I'm sober,' he says, 'I do this much quicker.' Is this juggling? In his quick movements, his angularity, and his total lack of naturalism, he resembles the juggler. Juggling, though, can be a slick performance, which Lennart Green's is emphatically not. Rather, he is closer to the baser side of juggling: its showiness, its shiftiness, its vanity. He boasts of his skill, though at first we don't believe he has any. Once he has proven that he knows a move or two, he unashamedly shows them off. This, again, is sleight of hand as mere excess, as something for its own sake. Likewise, his movements are utterly unlike the 'definite actions' that Neil requires of the conjuror (1903, p. 19). They are spasmodic, sometimes confusing sometimes definite, but never structured by a clear narrative of cause and effect. Someone names a card - here it is. In a way, nothing could be more definite. But we are not prepared in advance; there is no premise. For a while in the middle of his act, the drama seems not to build. It becomes a succession of quick-fire moves, with little linking one to the next.
Yet his act does build. It gradually becomes clear that something confounding is going on. There is an overall effect that comes through. Lennart Green's clumsiness belies a profound precision, his drunkenness a penetrating insight. His successes are too consistent. He has produced the right card five times, ten times, too many times. There is some kind of order in his chaos, method in his madness. He cuts the deck into small piles all over the table and says 'when I lift the heap I peek'. This seems unlikely, but we are left with no better explanation. Either the mad cutting and reassembling is making everything more chaotic, or strangely ordered. We are left at a loss in the face of a whole series of mock-explanations: vague patterns in the cards, slow motion, Mandelbrot spirals, a high frequency laser. The basic notions of system, control and memorisation become more and more plausible.
For this reason, though, he is not quite juggling. The juggler, as we are told, resorts to the mechanical over the intellectual. He depends on his hands, rather than his brains. He claims the attention of his spectators by flaunting his skill. To begin with, this is what Lennart Green seems to be doing. He shows off his dexterity (though at first he seems not to have any). The mind, though, comes in gradually, unexpectedly, like an undercurrent. A series of coincidences begins to suggest some design. The impression mounts. His precision becomes increasingly uncanny, and his bluffs less and less laughable.
By the end of his performance, he is blindfolded with electrical tape and tinfoil, looking alien and incapacitated like a Dalek, and yet he continues to produce the right cards: the entire suit of diamonds in ascending order. He seems more aware of his surroundings than his volunteer. He throws her the wrong card as a decoy ('misdirection'), she reaches to pick it up, and he brandishes the right one before she turns her head. His performance turns out to be about the mind after all - an acute, obscure mind, with an almost mathematical consciousness. It is tempting to say that there is, in the end, only one trick, only one effect: the impression of a brain at work. The blindfold seems to make no difference, as though he is gifted with some kind of blind sight. The lasting image of the act is the performer with his knowing head cocooned in foil and his hands seamlessly carrying out his bidding.
It is not exactly juggling, then. It is something like the juggler out-thinking the conjuror, the juggler beating the gentleman at his own game. It is an oblique retort to modern conjuring's originating drive for status. Lennart Green's rarest ability is to turn this into such a comic and likeable satire. It is always unlikely, never controlling. It leaves none of the bitter aftertaste of one-upmanship. Through all his flourishing and showing-off he is always the underdog, the butt of his own jokes. So when the tables turn and the juggler gets his own back, we are right there on his side.
"year": 2013,
"sha1": "26260cd32ef772c1261b4f8e553a13774e5ad12b",
"oa_license": "CCBY",
"oa_url": "https://www.journalofperformancemagic.org.uk/article/id/211/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "26260cd32ef772c1261b4f8e553a13774e5ad12b",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Art"
]
} |
Basic Framework and Significance on the Economics of Port Safety
Drawing on the basic theory of safety economics (its concepts, principles, theories, and research methods) and on the practical realities of safety production in port enterprises, this article sets out the basic theory of the economics of port safety: its core meaning, its subject of investigation, and its research methods. It identifies the discipline's basic features, analyses the significance and value of research in this area, and sketches a basic framework for the economics of port safety.
Introduction
With the continuing development of the national economy and society, the guidelines, policies, and theories of safety management have been continuously renewed, and models of port safety management have changed significantly. The focus is shifting from treatment after an accident to prevention before one occurs. Port enterprises have gradually adopted safety concepts such as 'safety first', 'prevention first', and 'safe development'; increased their safety investments; introduced risk management methods; and achieved good results.
In practice, however, port enterprises commonly encounter several problems. First, how to define and measure safety investment (cost) and how to recognize and evaluate safety benefit (output). Second, when safety investments are limited, how to optimally allocate human, material, and financial resources so as to obtain the greatest safety benefit. Third, on the premise of fulfilling safety regulations and standards and ensuring the required level of safety, how to reduce safety investments or eliminate unnecessary ones. Fourth, how to scientifically evaluate production safety risk, how to reasonably determine risk-control indexes, and how to properly handle the relationship between safety investments and safety benefits. These problems confuse not only port enterprises but also the government departments responsible for port safety production and the relevant research and consulting institutions.
The above problems are the basic problems of safety economics [1,2]. Research on safety economics has so far been limited in China, and research on the economics of port safety, both at home and abroad, is still at an initial stage, so few research results are available.
The economics of port safety is a new subject. Because research on it lacks theoretical guidance, scientific evaluation methods, and evaluation standards, some port enterprises adopt unreasonable and undesirable approaches to safety investment.
For example, some port enterprises focus one-sidedly on investments in safety equipment and facilities (hardware) while ignoring investments in safety management (software, including organization, personnel, education, and training), or vice versa. Some mistakenly believe that safety production involves only investment with no benefit, or large investment with small benefit, and are therefore unwilling to invest, or cut their investments. A small number of enterprises set safety objectives so high as to be divorced from reality, aspiring to zero accidents or zero risk and blindly increasing safety investments to the point of imposing an excessive economic burden.
As the importance of port safety becomes increasingly prominent, research into its actual effects and its economic relationships is especially urgent. From this point of view, there is a very practical need to carry out research on the economics of port safety as soon as possible: to investigate its basic characteristics, content, and research methods, and to define some basic concepts as the initial phase of such research [1,3,4].

The economics of port safety is a science that studies the economic forms and conditions of port safety (interests, investments, and benefits), through the reasonable organization, control, and adjustment of port safety activities, in order to achieve the greatest safety benefit for people, technology, and the environment [3]. It is thus a science that studies the relationships between the safety activities and the economic activities of ports. This definition carries the following meanings: the subject investigated by the economics of port safety is the economic form and condition of port safety; that is, through theoretical research and analysis, it seeks to reveal and clarify the forms of expression and the conditions of realization of port safety behaviors, safety investments, and safety benefits. The objective of the economics of port safety is to achieve the optimum safety benefit [5] for all three of people, technology, and the environment.
This goal is achieved through the control and adjustment of port safety activities. Port safety activities are those carried out to ensure the safety of port enterprises, including setting regulations and policies for port safety, safety education and management, and the implementation of safety engineering and technology. Such activities require considerable human, material, and financial resources; these resources are called port safety investments [6,7]. Whoever invests hopes to obtain a corresponding return. The return on port safety investment is assured safety: fewer accidents and smaller accident losses in port transportation, which also indirectly raises the added value of port production. Port safety activity is therefore not merely a "consumption activity" but also an "investment activity" and a "benefit activity".
The economics of port safety has its own subject of investigation: to study and solve the economic problems of port safety. It is not only a specialized branch of economics but also an applied science serving the specific field of port safety management. Its subject of investigation is the economic relations in the field of port safety. These so-called economic relations of port safety carry four layers of meaning: the division-of-labor and cooperation relationships in port safety, the economic interest relationships, the quantitative economic relationships among the various factors related to port safety, and the economic benefit relationships. Together, these four aspects constitute the system of economic relations of port safety.
More specifically, the research objects of the economics of port safety lie in port operations: guided by the unity of opposites between the reality of port safety and its economic effects, it studies how best to combine port safety activities with port transportation production, explores the optimal input-output ratio in port safety production, and seeks to maximize the benefits of port safety investments.
Specifically, it studies the following questions. First, the macro theory of the economics of port safety: how social development, national economic conditions, and the development of the port industry influence port safety, and, conversely, how port safety affects the whole social economy, national security, and even the country's foreign policy; on this basis, the objectives of the economics of port safety are established.
Second, theoretical discussion of the proportional relationship between the growth rate of port safety investments and the pace of development of port transport, in order to grasp and control the direction and speed of development of the port safety economy, together with research on the appropriate range of port safety investment.
Third, defining the scope of direct and indirect economic losses, and discussing the estimation of direct and indirect economic benefits.
Finally, on the basis of the above research and in combination with development trends and port development goals, analyzing and forecasting the basic trend of safety investment appropriate to the rapid development of ports.
[Figure 1 schematic, captioned "The most important problems for the economics of port safety": the analysis of port safety activities (quantitative and qualitative analysis of port safety needs); the analysis of safety investments (investment in port safety production, investment in port safety management, comparative research on the status quo); the technical measures of the economics of port safety (quantitative analysis of safety input and output; research on investment sources, decisions, and risks; research on safety measures and equipment); the analysis of port accident loss (defining the scope of economic loss, establishing estimation models); and the analysis of port accident compensation (compensation systems and cases, suggestions on compensation).]

There are many problems worth studying in the economics of port safety. In view of the uniqueness and complexity of its economic problems, not all of them can be covered here. At the initial stage of research, the most important problems can be selected for analysis (Figure 1).
The starting point should be the analysis of port safety activities: their scope and types, and the role of safety in port transportation. On this basis, the factors influencing port safety can be analyzed and the principles that port safety activities should follow can be summarized. How should port safety needs be analyzed? Both quantitative and qualitative aspects are needed. Quantitative analysis of port safety needs includes historical investigation of port business and safety indicators, analysis of growth trends in port traffic, and forecasting of port safety needs. Qualitative analysis includes changes in international relations, in the political situation and social security, and in the social security mechanism.
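As a hypothetical illustration of the quantitative side, the forecasting step could be sketched as a simple trend extrapolation. All figures below (the throughput series and the accident-rate factor) are invented for the example and are not drawn from this article.

```python
# Minimal sketch: projecting port safety needs from a traffic growth trend.
# Throughput figures and the accident-rate factor are hypothetical.
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
throughput = np.array([310.0, 328.0, 351.0, 370.0, 396.0, 421.0])  # Mt, invented

# Fit a linear trend to historical throughput and extrapolate one year ahead.
slope, intercept = np.polyfit(years, throughput, 1)
forecast_2016 = slope * 2016 + intercept

# Scale an assumed baseline accident exposure with traffic, as one crude way
# of turning a traffic forecast into a quantitative safety-needs indicator.
accidents_per_mt = 0.04  # hypothetical accidents per million tonnes
print(f"2016 throughput forecast: {forecast_2016:.0f} Mt; "
      f"expected accident exposure: {forecast_2016 * accidents_per_mt:.1f}")
```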
The analysis of safety investments is one of the important research topics in the economics of port safety. It mainly covers investments in port safety production (the types of safety production activities; the human, financial, and material resources invested; and their proportions) and investments in port safety management (the types of safety management activities; the human, financial, and material resources invested; and their proportions). It also covers comparative research on the status quo of Chinese port safety investment relative to that of developed countries.
As for technical measures, input-output theory from economics is applied to the quantitative analysis of the input and output of port safety. Investment theory and techniques are used to study the sources of safety investment, safety decisions, and investment risks. The basic theory of economic management is used to study the raising and management of funds for safety measures, the depreciation of safety equipment and facilities, and the safety and economic management of port enterprises.
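To make the input-output idea concrete, the following minimal sketch computes the net benefit and benefit-cost ratio of a safety investment. The function and all figures are illustrative assumptions, not a model prescribed by this article.

```python
# Minimal sketch: a benefit-cost view of safety investment, where "output" is
# the reduction in expected accident loss plus indirectly added production
# value. All monetary figures are hypothetical.
def safety_input_output(investment, loss_before, loss_after, indirect_value=0.0):
    """Return (net_benefit, benefit_cost_ratio) for a safety investment."""
    benefit = (loss_before - loss_after) + indirect_value
    return benefit - investment, benefit / investment

net, ratio = safety_input_output(
    investment=2.0,      # spent on safety measures (million yuan)
    loss_before=9.0,     # expected annual accident loss without them
    loss_after=3.5,      # expected annual accident loss with them
    indirect_value=0.5,  # e.g. fewer stoppages in port production
)
print(f"net benefit = {net:.1f} M; benefit/cost = {ratio:.2f}")
```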
The analysis of port accident loss proceeds from the necessary delimitation and investigation of port accidents and accident losses, in order to define the scope of direct and indirect economic losses and to establish estimation models for both.
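One possible shape for such an estimation model is sketched below. The 1:4 direct-to-indirect ratio is Heinrich's classic rule of thumb from safety economics, used here purely as a placeholder; the ratio varies by industry and accident type, and the article leaves the actual scope and model to be established by research.

```python
# Minimal sketch of a total accident-loss estimator: direct loss plus a
# multiple covering indirect losses (downtime, investigation, contractual
# and reputational costs). The 4.0 multiplier is only a placeholder.
def total_accident_loss(direct_loss, indirect_ratio=4.0):
    return direct_loss + indirect_ratio * direct_loss

print(total_accident_loss(1.2))  # hypothetical: 1.2 M direct -> 6.0 M total
```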
The analysis of port accident compensation should first introduce the Chinese accident compensation system and international practice in port accident compensation, and compare the two. Taking port compensation cases as its object, this analysis studies and puts forward suggestions on port accident compensation.
The safety benefit is the destination and goal of research on the economics of port safety. The benefit of port safety includes two aspects, economic benefit and social benefit. Research here covers the basic content and methods of evaluating the economic benefit of port safety, and predicts the trend of port safety investments in light of the fast-growing demand for port transportation. On this basis, views and suggestions can be put forward for national decision-making on port safety.
Basic characteristics
The economics of port safety is a new interdisciplinary subject spanning the natural and social sciences. Its subject of investigation and its tasks determine its characteristics (Figure 2): in its research methods it is comprehensive, systemic, forward-looking, and decision-oriented [1-4]; in its disciplinary essence it is borderline and practical.

Comprehensiveness. In its content, the economics of port safety draws on economic theory, on the basic methods of safety management and safety technology, and on the fundamentals of port transportation. In its methods, it uses the basic methods of economics together with safety management and safety engineering theory, while taking account of the basic rules and characteristics of ports. Research in the economics of port safety therefore requires a strongly comprehensive approach, in which many factors are indispensable.
Systemic character. Problems in the economics of port safety are often complicated, multi-objective, multi-variable problems. Solving them requires considering both safety factors and economic factors, analyzing the internal factors of the subject investigated, and analyzing the various factors associated with it. This gives the research process and its scope a systemic character.
Forward-looking character. The output of port safety activity is often delayed and lags behind, while the essence of such activity is foresight and prevention. The economic activities of port safety therefore have to be carried out in advance, adapting to the requirements of economic production.

Decision-oriented character. The economic activities of port safety should be based on scientific decision-making. In port transportation, safety decisions are always the top priority of port management. But ensuring safety requires investment: which method of investing in port safety is the most reasonable and best guarantees safety, and which approach yields the maximum benefit from that investment? The economics of port safety provides optimization techniques and methods for such economic decisions on safety.
Borderline character. Like other economic problems, the economic problems of port safety are constrained not only by the laws of nature (the objective laws of safety) but also by economic laws. The economics of port safety must therefore study both the natural laws of safety and the economic laws of port safety; it is a borderline science spanning the natural and social sciences of safety.
Practicality. The economic problems studied by the economics of port safety are all strongly technical and practical. Research in this field serves and protects the safe and healthy development of port production and transportation, safety being a required service in the operation of port enterprises. The economics of port safety provides technical methods and guidance on how to ensure the safety of port transportation production and how to make safety input and output rational, maximal, and scientific. It is applied directly to specific safety operations in ports and genuinely solves practical problems, which gives it great practical value.
Basic method
The basic method of research on the economics of port safety is that of dialectical materialism [1,3,8] (Figure 3). Particular attention should be paid to the reverse thinking method, the comparative analysis method, the investigation and study method, and combined quantitative and qualitative analysis, as well as to combining macroscopic and microscopic perspectives [9,10]. Some scholars have also proposed the opportunity cost method. At the same time, research should absorb and apply the relevant existing disciplines through systematic, multidisciplinary, comprehensive study.
Reverse thinking method. To achieve safety, it is necessary to study the conditions and causes of accidents. Starting from the observation and analysis of safety accidents, one traces back the reasons and conditions for their occurrence, and thus eliminates those causes and conditions.
Comparative analysis method. The port safety system is a multi-variable, multi-object system involving a wide range of complexly connected factors. The research approach and methods must therefore be scientific, reasonable, and in line with objective requirements. Comparative analysis is one of the basic methods for grasping the characteristics and laws of such a system; accurate understanding can be gained only through analysis and comparison.
Investigation and study method. To understand the laws of the economics of port safety, research should proceed from existing experience and data, seeking truth from practice. Investigation and study is therefore an important method for learning the rules of port safety; the laws of accident loss can be revealed and reflected only through extensive investigation and study.
Combined quantitative and qualitative analysis. Analysis of numerical relationships and scientific quantitative analysis of the economic problems of port safety are inevitable requirements for the development of the discipline. Owing to the constraints of objective factors and basic theory, however, some propositions in this field cannot be fully quantified. Quantitative and qualitative analysis must therefore be combined in order to reach reasonable and correct conclusions.
Opportunity cost method. The concept of opportunity cost can be introduced into research on the economics of port safety. In the real world there is "no free lunch": everything has a price. The economics of port safety can also be analyzed using opportunity cost theory, in order to obtain greater benefit from input and output. In addition, a systemic and dynamic point of view should be established, and macro and micro methods combined.
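A toy example of the opportunity-cost reasoning, with invented options and figures, might run as follows.

```python
# Minimal sketch: choosing between two uses of a fixed safety budget and
# reporting the opportunity cost (benefit of the best alternative foregone).
# Options and expected loss reductions are invented.
options = {
    "extra gantry-crane inspections": 1.6,  # expected loss avoided (M yuan)
    "expanded operator training": 2.1,      # expected loss avoided (M yuan)
}

best = max(options, key=options.get)
foregone = max(v for k, v in options.items() if k != best)
print(f"choose: {best}; opportunity cost = {foregone:.1f} M")
```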
Research significance
The task of the economics of port safety is to reveal the objective laws governing the emergence, development, and movement of the economic relations of port safety, and to discuss the ways, methods, and measures for realizing safe port production, safety of life, and safe survival. The discipline not only provides a theoretical basis and reference for the scientific safety guidelines and policies formulated by the state and government, but also gives port enterprises technical support and basic working methods for optimizing the allocation of safety resources and managing safety technology. It offers both theoretical guidance for port safety work and implementable options for the practical management of port safety. Research on the economics of port safety can thus give maximum protection to the healthy development of port transportation production, promoting the harmonious and prosperous development of society and the economy.
Discussing the relationship between port safety and the port economy and studying the economic problems of port safety is an important topic continually pursued by many experts, scholars, and practitioners. The real-world development of ports, in turn, continually places new demands on the theorists who study port problems.
Safety economics [1-4,11,12] is itself still in its growth stage, and the discipline remains imperfect. The economics of port safety, as a part of safety economics, is likewise imperfect, and existing research achievements are very few. Against this background, the significance of research on the economics of port safety is particularly great, embodied in the following aspects. First, the research matters greatly for port safety in production. Port safety is the basic guarantee for the development of economic benefits, and between port safety and economic benefits there is a dialectical relationship of mutual dependence and mutual reinforcement.
Second, the research provides a theoretical basis for government decision-making on ports, and in particular helps to correct the mistaken views of port safety held by some government departments and port enterprises. In the port industry there is still the fuzzy notion that safety investment creates no economic benefit, or that its returns come for free. One purpose of research on the economics of port safety is to give government, corporate decision-makers, and safety workers a correct understanding of the role of safety and of the benefits it creates, so that the safety decisions of the relevant departments become more scientific and practical, attending not only to political significance but also to economic benefit.
Third, the research provides a theoretical tool for the safety activities of port enterprises. Realizing port safety requires investment; how much investment is reasonable, and how greater safety and economic benefits can be achieved from a limited investment, are questions that demand guiding theory and methods for scientific selection. The research results can guide port enterprises in the reasonable selection of safety investments and safety means, improving the level of their economic decision-making on safety and its management.
Fourth, the research has practical value for establishing a safety management system across the whole port industry and for further improving the level of port safety in China. Related research on port safety in China is on the whole relatively weak; research on the economics of port safety can undoubtedly greatly enrich its content and promote scientific research on port safety in China.
Fifth, the research has important significance for advancing the study of safety economics itself. Safety economics studies safety-related economic problems at the macro level; the economics of port safety studies them within one field, applying the related theories of safety economics to the port industry. The safety-economic problems of this field can enrich the research content of safety economics and play a positive role in promoting it. In a sense, the study of the economics of port safety is groundbreaking.
Conclusion
Based on its study of the basic characteristics, contents, and research methods of the economics of port safety, this article tries to make a modest contribution to the research framework for establishing the discipline. The economics of port safety can be studied from different angles, and opinions will differ. This article starts out from the general system of economics, builds on the basic theory of safety economics, and follows the thread of investment channels, cost accounting, and benefit analysis. Without doubt, the view presented here is only one school's statement, discussing the economics of port safety from one aspect.
The complexity of the economics of port safety poses a milestone challenge to everyone interested in it. More attention and research are needed to continuously expand and improve work in this field, in the service of ports, the water transport industry, and society.
Figure 1. The most important problems for the economics of port safety.
Figure 2. The basic characteristics of the economics of port safety.
Figure 3. The basic methods of the economics of port safety.
"year": 2016,
"sha1": "534706d7239d9cfd167ad0615d04c69b6a1af350",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/44/matecconf_ictte2016_02002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "534706d7239d9cfd167ad0615d04c69b6a1af350",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Photography and the Organic Nonhuman: Photographic Art with Light, Chlorophyll, Yeasts, and Bacteria
ABSTRACT The concept of nonhuman in relation to photography has recently been mostly theorized through technology, while organic nonhuman agents and processes at work in photography have received less attention. In this article, works by three contemporary Finnish artists who incorporate organic materials and processes into photography are analyzed to renegotiate the borders between photography, bioart, and science. This leads to a rethinking of the dichotomy between the concepts of technological and organic in the context of contemporary photographic art. Combining new materialist and posthumanist theories with photography history and theory allows for a multidisciplinary method that recognizes the worth of both human and nonhuman agents and processes at work. The artworks are analyzed as temporal and processual, taking into account their performative qualities.
Recent writings on photography and the nonhuman have been centered on technological advances and the hardware and software used in digital photography, surveillance, photographic data and metadata, and the networked photographic image. Less attention has been given to the organic nonhuman materials, agents, and processes used in photographic work. In a wider contemporary context, the entanglements of human and nonhuman agents have been a main interest, especially to researchers with a new materialist and/or posthuman approach. The present article contributes to this growing field.
This study focuses on works by three contemporary Finnish artists who incorporate organic materials and processes into photography. Jenni Eskola (b. , Finland) produces images with chlorophyll and plant pigments by grinding them into a paste, spreading it on paper, and letting the papers fade in daylight. In Johanna Rotko's (b. , Finland) works, which she calls yeastograms, images are formed on a yeast culture in a petri dish, which is exposed to UV light through a stencil containing a photographic image. The UV light kills part of the yeast culture, and the remaining culture then takes the form of the image in the photograph stencil. Noora Sandgren (b. , Finland) utilizes compost to produce works in which expired photographic paper is half-buried in compost soil, exposed to its bacteria, humidity, and warmth.
The writing of this article started in the midst of the Covid-19 pandemic, which in itself provides an apt background for reconsidering our relations with nonhuman species. My choice of these three artists was based not only on interest in their work but also on locality. I wanted to be able to meet them and engage in a continuous dialogue with them. All three had exhibitions in Helsinki during the time of planning and writing this article, and because of the temporal and processual quality of the work, I wanted to be able to visit those exhibitions several times.
All three artists work in the spaces between photographic art and bioart in their own ways. Artist Eduardo Kac defines bioart through three criteria, of which one or more can be employed: "(1) the coaching of biomaterials into specific inert shapes or behaviors; (2) the unusual or subversive use of biotech tools and processes; (3) the invention or transformation of living organisms with or without social or environmental integration." How the concept of bioart is tightly entangled with technology is obvious in Kac's categorization. The aims of this article are to renegotiate the conceptual borders between photography, bioart, and science and to dismantle the dichotomy between the concepts of technological and organic in the context of contemporary photographic art. How does the life of the materials affect the production and interpretation of the images? Can there be such a thing as an organic image? In what ways do artistic and scientific methods meet in these works? The methods of bioart are deeply entangled with those of the natural sciences; however, they operate within a more subtle conceptual field, where beauty and aesthetics also come into play. Furthermore, bioart is profoundly enmeshed with artistic research, and much of the theoretical literature referred to in the present article has been written by artists, which gives a closer look at the embodiment of theory in practice and deepens the understanding of the artworks analyzed. The present article contributes to plant studies as an emerging academic field by analyzing the agencies of nonhuman, nonanimal others: chlorophyll, yeasts, and bacteria. If animals have been marginalized in Western thought, then "non-human, non-animal living beings, such as plants, have populated the margin of the margin, the zone of absolute obscurity undetectable on the radars of our conceptualities," as philosopher Michael Marder observes. Analyzing the agencies of microbial life probes into this margin of the margin inhabited by nonhuman, nonanimal others. By bringing together new materialist and posthuman thought with photography history and theory and by engaging in dialogues with the artists, an approach that appreciates the multiple agencies at work, human and nonhuman, becomes established.
Since the s, a new theoretical interest in the material qualities and cultures of photographs, in addition to the information or visual content they carry, coincides with a broader eco-critical movement in the environmental debate. We are at a point in history where questions of materials and materiality have become nothing less than pivotal to the survival of our species, and many other species besides. Sustainability and ethical considerations of interspecies relations are vital not only in art but also in a wider context. An underlying eco-critical stance is what gives the works analyzed not only a forceful affective impact but also an urgency. They call us to consider and reconsider our relationship with our surroundings, both physical and philosophical, and the other beings inhabiting them.
Delicate color variations
Jenni Eskola produces artworks with chlorophyll and plant pigments by spreading the plant-derived "paint" on paper and leaving it to gradually fade in daylight. This fading process continues as the works are exhibited in their different stages, often as a series that shows the gradual progression of the fading.
In the series Evergreen ( -) (Fig. ), the painted area fills the whole frame of the works, comprising a color field of green turning to brown and yellow and in the end fading almost completely. The absence of a singular image motif accentuates the importance of materials. The name of the series playfully makes clear not only the impossibility of the physical permanence of artworks but also their conceptual power to immortalize a subject by preserving the subject's likeness as an image.
For the series Color Study (On Permanence) () (Fig. ), Eskola produced pigments with different flowers, painted two color fields on paper with each, and then exposed one to daylight while shielding the other from it. The results of this process are framed so that each frame contains two rectangular surfaces, the exposed one on top and the protected one below, making visible the reaction of each different flower pigment to daylight. The color surfaces bring to mind abstract art, and the method is reminiscent of a time before industrialization when artists would have to prepare their own paints and pigments from ingredients derived from nature. Identifying and naming the flowers turns the work into a herbarium, but instead of classifying the visual features of the flowers as plants and their morphology, the series brings out the processual fading of their colors. Using their taxonomical or common names makes the plants representative of their species rather than single individual plants.
Eskola usually creates her large works with a paint roller, while for smaller ones she uses a paintbrush. The green plant paint in the series Evergreen comes from plant juice, which Eskola extracts with a juice press. In fact, her fascination with chlorophyll as paint started from preparing wheat juice to drink. In the series Color Study, Eskola used flowers from her own garden and surroundings, both local wildflowers and garden flowers, in which the intensity of the colors and their permanence varied greatly. Eskola's works are luminograms, a special type of photography similar to photograms, but made using only light and light-sensitive material, without an object placed on the surface to form an image. One could even ask whether they are images in the first place, as there is no figurative subject. The color variations of the nonfixed plant substance bring to mind the days of early photography, when it was already possible to make photographic images, but a way to fix them was not yet found, and the images would fade away. Furthermore, Eskola's works are anthotypes, photographic images made using plant-based materials. Many natural substances are sensitive to light, and the use of organic materials in photography has a long history. Knowledge related to this is especially important in the preservation and conservation of historical photographic prints. The changing of colors in nature signals the changing of the seasons. The timeframe for Eskola's works is borrowed from the life of the plants, from seasonal changes. Artists Heather Ackroyd and Dan Harvey, who have also used chlorophyll to produce artworks, write about senescence, leaf death, which is the phenomenon at the center of Eskola's works. Senescence is a survival method through which a plant can kill parts of itself in order to preserve its core functions in a hostile environment. "The disappearance of green color is the visible sign of a plant under stress," Ackroyd and Harvey write. The seasons determine the possibilities of working with plants. Eskola most often works with fresh plants and flowers, and as winters in Finland are long, the short time when flowers bloom needs to be seized.
The time span of the changing colors becomes dependent on exhibition schedules, as the works fade according to how long they are exposed and exhibited. Their seasons are thus related to the progress of the year and the natural fluctuation of light in an environmental sense, and the changes that take place in the works also become entangled with their appearances in exhibitions. Their continuing and volatile processuality thus makes exposure visible in another sense: the works that are exhibited the most and for the longest times fade faster. Whether the work is still the same as it keeps on changing is a question that Eskola has been occupied with. When can a work be considered ready or finished? Is it when the color has completely faded and does not change anymore, or is the work already destroyed at that time? Does the image still exist when the color has faded away? Their inherent ephemerality makes the works temporal and situates them also in relation to the field of time-based art, although their temporality does not have a specific or fixed duration, but rather a continuity.
The way light causes Eskola's works to fade and age makes them skin-like, life-like, rendering visible the fragility of both the pigment and the paper. The conservation of traditional analog/chemical photographs is based on the properties of different kinds of papers, as it is with graphic art and drawings. Marder emphasizes the fragile balance between light and darkness and how both are needed for a plant to thrive. The same is true for analog photography. With too much light, the image fades or burns away; with too much darkness, it does not come into being. Light, the thing that brings a photograph into existence, can also destroy it. The underlying paper becomes more visible after the color fades away; the paper itself is changed by the fading pigment and by exposure to light. Thus, the paper is not an unaffected carrier medium for a photographic image, but its own aging in the process becomes visible.
Eskola has been trained as a painter, and she does not primarily identify with photography or bioart. What connects Eskola's works with bioart and photography, however, is light-sensitive biomaterials and experimentality. In her work, the surprising outcomes of these experiments are more important than a hypothesis or a pre-planned workflow.
Eskola's work is not so much about single images as about seriality, the progression of all the works simultaneously and at a different pace. Flowers and plants as archetypal motifs in art become apparitional vestiges in Eskola's work; they are present as material, but not so much as individual visual subjects. The color fields divided into two remind us of Mark Rothko's and Piet Mondrian's color field paintings, but because they contain the essence of their origin, the plants, they are also pure presence. They are not only abstract in a non-figurative way; instead, their conceptuality is deeply rooted in their very materiality. They carry with them the lives of their subjects. This connection of the plant materials to their origin is why working sustainably is important to Eskola, and she is careful, for example, not to harvest too many flowers from one site, always leaving some for the pollinators. Residing between nature and art, Eskola's works bypass the question of mimesis. Rather than images, they are performative processual artworks, which bring forth a process that could be called photopoiesis, images formed through and with light. The nonhuman time of the plants is unmeasured, durational, and continuing, as the works extend photographic temporality from a fast shutter-click moment to an ongoing unfolding.
Life forms
In Johanna Rotko's yeastograms, images are made of preexisting photographs by exposing a yeast culture in a petri dish to UV light (Figs. and ). Rotko has been working with this method since . The photographs are computer-worked and printed as raster images on film, such as is used with overhead projectors. The image on the film needs to have high contrast in order to become well formed through the yeast. UV light kills the part of the yeast culture that is exposed to it, as the photograph stencil protects other parts of the culture. After the exposure period of approximately h, the petri dishes are taken out of the UV light, and the remaining yeast will start to grow following the shape of the image. This growth is then photographed by Rotko to document its stages, as the living yeast itself cannot be fixed into a stable image. Eventually, the yeast image in the petri dish will become unrecognizable as the culture itself develops into more elaborate forms and as other species such as molds start to develop in the dish. This growth is unpredictable, and some dishes develop more rapidly than others. As in Eskola's works, so in Rotko's, the uncontrollable colors and shapes of the yeast and molds resemble those of abstract art. The molds often grow around a central point, creating circular forms, as in Woman on Charcoal () (Fig. ).
The same work advances through different stages, starting out fresh and developing to contain different types of life forms that function as parts of the image. As the object takes on new forms, the image also grows to contain new colors and forms. To document the development process, Rotko takes photographs of the yeastograms, which she then often exhibits alongside the developing cultures in the petri dishes. The documentation of growth is a documentation of life, as the yeastogram is alive in a literal sense. This act of photographing the process is similar to fixing a photographic image in the darkroom with chemicals, stopping the process at the optimal moment of the photograph's development.
The glistening white yeast and the slightly reflective darker background cause the fresh yeastograms to visually resemble daguerreotypes, which have highly reflective, almost mirror-like surfaces. As Rotko's works continue to develop, their freshness is replaced by advancing growths of the yeast and of molds and bacteria. Ripeness is a moment that passes, and the yeastograms become "overripe" as the image starts to deteriorate. This situates the works in relation to the tradition of the still life, where the depiction of decaying fruits and flowers has been a way to symbolize the fleetingness of time. They are an active and ongoing memento mori. This combination of action and process with the theme of the still life creates a sustained tension: an extended and recorded process of decay instead of a single suspended moment in time. In Rotko's yeastograms, this process can either be experienced during an exhibition through multiple visits or via photographic documentation recording the passage of time.
The growths on the surface of the yeastograms take on a three-dimensional form, creating rhizomes and layers, a micro-spatiality on the surface, which is difficult to see with the naked eye but possible to observe and document through photography with macro lenses. This brings into play the scientific gaze, as the growths are observed and documented through optical instruments. From an institutional and curatorial point of view, the photographs also make the yeastograms more accessible and amenable to control.
In the exhibition Living Images at Bioart Society's Solu Art Space in Helsinki, Rotko included ongoing yeastograms in petri dishes as installations with the documentary photographic images, as well as close-up photographs of the growth's intricate colors and shapes. In conjunction with the exhibition, the artist led a workshop for exploring the method, in which I had the pleasure of taking part. The workshop was a communal experience, with the participants experimenting together with the artist through trial and error. During the workshop, we did everything from scratch, starting with the preparation of the petri dishes with a colored agar base. Agar is a gelatinous substance made from seaweeds, used as a medium in biological cultures. The agar itself is colorless, so natural ingredients such as charcoal, turmeric, or blueberries were added as a dye to function as a background color for the yeast. The dry yeast powder was then mixed with water to create a liquid solution to coat the agar base, followed by preparing the raster stencil images attached to the lids of the petri dishes. The petri dishes with the agar, the yeast, and the photographic image stencil on top were then set under UV lamps for a duration of h for the yeast images to start to form. There were some crucial stages to the process that one might not come to think about by only looking at the final images, such as boiling the water used in the process to minimize the presence of organic growth other than yeast, like bacteria, or making the yeast-liquid coating have just the right thickness, or choosing photographic images with enough contrast and sharpness to produce a clear image.
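Rotko's exact prepress workflow is not documented here; as a hypothetical sketch, a high-contrast raster stencil of the kind described could be approximated with a few lines of image processing, for instance with the Pillow library. The filename and dimensions are placeholders.

```python
# Hypothetical sketch of preparing a halftone-like stencil from a photograph,
# in the spirit of the yeastogram workflow described above.
from PIL import Image

img = Image.open("portrait.jpg").convert("L")  # grayscale source photograph
img = img.resize((300, 300))                   # roughly petri-dish scale

# Dithering turns continuous tones into a pattern of opaque/transparent dots,
# giving the stencil the high contrast the yeast image needs.
stencil = img.convert("1", dither=Image.Dither.FLOYDSTEINBERG)
stencil.save("stencil.png")                    # to be printed on transparency film
```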
The image in the yeastograms is formed by the yeast not exposed to UV light. Whereas Eskola lets nature take its own course with the daylight slowly fading the chlorophyll and exposing the works to light over a long period of time, Rotko conducts a more active exposure to light by using UV lamps to kill parts of the yeast culture to produce the images.
Art theorist and curator Gunalan Nadarajan writes of ornamental life forms, which "have been historically created by selective breeding of varieties to cultivate qualities that have appealed to us for some aesthetic dispositions and values […]." They are to be separated from life forms bred for instrumental purposes, such as food or labor. Rotko utilizes yeasts used in baking and brewing beer, which raises an ethical question: are her works to be likened to baking and brewing or are they to be considered ornamental as in Nadarajan's description?
The answer depends on whether we think art has an instrumental purpose, as food and labor do.
Another realm where life forms are used for instrumental purposes is science, and using animals and other living beings as instruments of research is a fundamental ethical question in the natural sciences. Concerning bioart, the parameters change slightly. Even if the other instruments used, such as laboratories and petri dishes, remain the same, when the ends change, so do the ethical issues. Beauty can also be a desired goal or purpose in bioart, which radically separates it from the sciences, and makes the underlying ethical questions more nuanced. Whether it is right to kill a research animal is a question Rotko has reflected on in her thesis. Furthermore, she writes of the ever-present difficulty of combining the roles of the artist and the researcher in bioart and being constantly aware of the problem of the research being verifiable and repeatable.
Images from the compost
Noora Sandgren takes expired photographic paper and buries it in compost to produce images. The natural processes of the compost, like growing bacteria and molds, start to create visual effects on the paper, which is sensitive not only to light but also to changes in temperature, moisture, and chemical processes. In a darkroom, the chemical processing of an image is controlled through different phases, whereas in Sandgren's work, the stage is left open for the compost to do its work.
In the exhibition New Perspectives Through Photography - Years of the Helsinki School, Sandgren installed a glass box with the compost soil, including the photographic papers, in Images - seeds, planting and harvesting (.-..) (Fig. ), along with photographic prints of the results of the compost process, in Henko - Correspondence (compost days) () (Fig. ). In the installed compost box, there were fruit flies, centipedes, and little sprouts of plants to be seen among the soil and the photographic papers inserted in the midst. Sandgren has worked with scientists to be able to identify and name the bacteria and fungi at work in making the image and to credit them in the nameplate of the work. The glass box encasing the compost soil in a museum space functions as a vitrine and presents the living (in vivo) compost observable as a specimen (in vitro).
The papers can spend a while in the compost, days or weeks. As with Eskola's color variations and Rotko's yeastograms, the timeline is not fixed or the same for different works. After this phase, Sandgren either lets the papers evolve or scans them for further development. This is where the artist makes choices about her level of participation in the image's creation, for example, what to emphasize in the visual results. Still, the visual can only transmit some of the qualities of the compost images, since each paper has its characteristic smell, feel, and life. Working with the image is a slow process, and Sandgren describes this phase as similar to getting to know a new person, listening, asking questions, and making observations. Sandgren regards photographic papers not as instruments but as partners, and there is an ongoing flow between letting the images develop by themselves and taking part in the development by emphasizing some qualities of the image. She notices the different qualities of the laboratory and the garden as working spaces; the former is a more control-oriented, orderly choreographed, minimalist space, and the latter is more collaboration-oriented, multi-agential, unruly, and less predictable. The work happening under the soil remains unseen.
The chthonic is often associated with death; it is a dangerous underworld where living beings should not enter. The earth is where the dead are buried, out of sight, in darkness. Photography has also been associated with death from the beginning, and early photographs were often described as ghostly or eerie apparitions and lingering presences. This is fitting, as a ghost is something that has a visual presence, but no material being; it is ephemeral and spectral, something that can be seen but not touched.
The compost, however, is brimming with life, and for feminist theorist Donna Haraway, the concept of compost serves as a theoretical tool instead of posthuman(ism). She calls our time the Chthulucene, a combination of the Greek words khthôn and kainos. "Kainos means now, a time of beginnings, a time for ongoing, for freshness," she writes, whereas "Chthonic ones are beings of the earth, both ancient and up-to-the-minute." Haraway's thinking has been an influence in Sandgren's work, where the artistic practice is accompanied by writing. In Sandgren's own view, the artist also works similarly to compost: accumulating material and living with it, processing it, and letting new things develop through this method. This resonates with Haraway's suggestion of the compost as an alternative for posthumanist thinking, a being-with in the present, rather than only after something. Haraway uses the word sympoiesis to describe "making-with": "Sympoiesis is a word proper to complex, dynamic, responsive, situated, historical systems. It is a word for worlding-with, in company. Sympoiesis enfolds autopoiesis and generatively unfurls and extends it." The compost is not a way to discard and get rid of, but an afterlife, generative and nourishing.
The vegetal world takes nourishment from dead things and repurposes it into energy, creating a mediation and passage between the living and the dead. Marder calls this "a non-mystified and material 'resurrection', an opportunity for mortal remains to break free from the darkness of the earth." The compost generates both warmth and moisture, the basic requirements for life, and contains nutrients for living organisms to consume. The compost, however, is dark; there is no light under the surface of the soil. In Sandgren's work, this brings to mind the darkroom. For an analog photograph to be developed, there needs to be a balance between light and darkness. There is fertile darkness both in the soil and in the darkroom.
Cameraless photography
The works of Eskola, Rotko, and Sandgren can all be considered cameraless photographs, although in Rotko's work the starting point from which the stencil is made is usually a photograph taken with a camera. Cameraless images have had a marginal part in the histories of photography, as photography historian Geoffrey Batchen points out. According to Batchen, histories of photography have been mainly written following the lines of technological advances, favoring photographs made with a camera. The starting point of such histories has often been the technological invention of the camera obscura, whereas cameraless photographs have been treated like "second-class citizens in such histories." Batchen characterizes cameraless photographs as photography's self-portraits. But whereas botanical specimens, flowers, leaves, and algae were often subjects in early experiments with cameraless photography, Eskola takes one step further and grinds the plant specimens into the very material to work with. She does not capture the outlines of plants on light-sensitive paper but produces light-sensitive paper with the plants themselves. Rotko in turn employs yeasts, molds, and bacteria to create her yeastograms, and Sandgren likewise leaves the work for bacteria and the humidity and warmth of the compost soil. These processes result in acheiropoietic images, which are not only of something but are something in themselves. They are not the absent subject made present by way of a figurative likeness, but the actual presence of that subject, in its ever-changing, ever-evolving temporal ongoing. The organic is not only the visual subject of the images but also their materially creative agent. The works can be seen as photography's self-portraits, as Batchen describes cameraless photographs, but if we take a broader look and regard the material actants as well, the nonhuman organic agents, then Haraway's abovementioned concept of sympoiesis becomes more suitable than mere autopoiesis, as the images are created together, with company.
At different stages, the artists all leave their works to develop on their own. This seemingly passive process of development can be seen as analogous with the growth of plants, with what Marder calls "non-conscious intentionality": "Instead of pursuing a single target, non-intentional consciousness uncontrollably splits and spills out of itself, tending in various directions at once, but always excessively striving towards the other." This non-teleological migration or growth Marder describes as "itinerant beauty," which also fits the growths of bacteria and molds in Sandgren's and Rotko's works, as they trace the migration and temporal advancement of these organic nonhuman others, their life becoming a map and a trajectory of their journey. Receiving an image, rather than making one, was an idea that was central to early conceptualizations and writings on photography, which was later abandoned as the acceptance of photography as an individual art form required the emphasis of the role of the photographer as a creative agent. The artists here rather take the role of a facilitator for these nonhuman processes to take place.
The vibrancy of the living image
An image can be something fleeting, like a mirror image, a reflection, or a shadow, and the images on the retina are similar to those apparent in a camera obscura: changing and non-fixed. A photograph, however, is considered to be a still image fixed on a material surface or viewable through a screen, and fixity and stability have been accepted as characteristic in contemporary views of what photography is. Art historian Kate Palmer Albers reminds us that early photography was more marked by ephemerality than fixity. She notices the challenge ephemeral photographs pose for conventional tools of art history, which privilege tangible, reviewable objects, and argues that for this reason, transience has been overlooked in histories of photography, even suppressed. A case in point is how perhaps the best-known early photograph, Niépce's view from his studio window, is known through a heavily retouched copy, as the original image is very faint and scarcely viewable. Another example of photography's history being a history of images rather than a history of objects is how Daguerre's Intérieure d'un cabinet de curiosités has survived only through reproductions, as the plate containing the original image has all but faded to a mere mirror surface. Ephemerality is a characteristic at the center of Eskola's, Rotko's, and Sandgren's works.
Rotko and Sandgren use photography and scanning as documentation methods, which accentuates the performative qualities of the organic processes. This optical observation links the workings of the microbes to the optical unconscious, a term used by Walter Benjamin to describe the optical world as yet undiscovered by the human gaze before the invention of photography and photography's ability to visually capture phenomena outside the human sensorium. Photography thus not only documents but also makes visible beings and activities not otherwise visible to the naked (human) eye. Without photographing by Rotko and scanning by Sandgren, the microbial workings would remain unnoticed, hidden outside the field of human vision. In Rotko's case, the works' visibility is also expanded through the artist's Instagram account, where she shares images of the yeastograms in various phases, also after their image content has become unrecognizable when the yeast and bacteria on the surface have taken over. Two kinds of duration are present here: the slow processual duration of the works and the extended duration achieved through the documentation, which produces the more "tangible, reviewable objects for conventional tools of art history," which Palmer Albers noted above. The ephemerality of Eskola's works in turn is emphasized by her habit of not documenting the progression of her works, but simply letting them fade.
The artworks analyzed here are unfixed as they continue to develop and change. Therefore, the viewer always sees only one phase of a process when encountering works in an exhibition. The slow processuality and durational temporality of the works can become visible in an exhibition through multiple visits, much like visiting a garden. Marder observes that for us to be able to notice the growth of a plant, there needs to be a break or a rupture in our temporal approach to it, like coming back home after a period of absence. Seen in one moment, a plant appears immobile, and its relatively slow changes become apparent through noncontinuous, repeated observation. In Sandgren's compost installation, for example, this effect became quite literal, as the vegetal sprouts in the compost were small in the early days of the exhibition and grew taller and more noticeable toward its end. Depending on the light conditions of the exhibition space, Eskola's works can noticeably fade even during one exhibition, and through extended observation, the changes appearing in Rotko's yeastograms become visible.
The ability of today's technology to observe, record, fix, preserve, and control can easily be taken for granted. The material fleetingness that was apparent in early photography has all but been eliminated nowadays, as photographic processes have become more and more controlled. In the works of these artists, ephemerality becomes central once more. The fading colors in Eskola's works as they are exhibited time and time again, the optimal moment of the image in Rotko's yeastograms before they pass their moment of ripeness and start to become unrecognizable, and the repurposing of already expired photographic papers by Sandgren all speak of seizing a moment, but also of letting it pass. The timeline of these works extends beyond what they are able to record. They do not so much preserve as make visible and tangible the passing of time. In analyzing such living images, concepts such as freshness and ripeness then become pertinent. The photographic in these works becomes a moment that passes.
Conclusion: between photography, bioart, and science
Artistic research is common to all three artists, and they work with their materials and instruments rather than using them for observing and measuring. While in Rotko's yeastograms the process using the instruments of biology is somewhat controlled, Eskola and Sandgren take a de-instrumentalizing approach. By reducing the intentionality and teleology of the process, a space is opened up for receiving. Weakening the agency of the artist could be seen as analogous to the philosophical concept of weak thought. By taking a step back, the artists weaken and dismantle the power of ontological systems and structures inherent in both research and art. The processes of art as an institution become visible in Eskola's works as they are consumed by their exposure; Rotko in turn works with the tools of biology and chemistry in a subversive way; and Sandgren questions the controllability of photography as a way of making images by using expired papers.
Using the Latin names of plants and bacteria designates Western scientific tradition, taxonomizing, and control. Sandgren underscores accessibility issues with bioart: access to laboratories and experts, access through scientific language, and access to safe working conditions. DIY/DIWO has its own culture and possibilities, but there are also risks when working with poisonous or harmful materials, as opposed to working in well-equipped laboratory conditions with protocols and specialist support. The works of all three artists oscillate between control and receiving.
As the organic materials and agents that produce the images are alive, they can be thought of as hosts for the images. There is not only a liveliness but concrete life in the materials, and with life, there always comes death, the senescence of the plants in Eskola's work, the elimination of yeast cultures with UV light in Rotko's, and the decaying materials of the compost in Sandgren's. This opens up a space for ethical considerations in relations between human and nonhuman agents in art, a field where much further research is needed. Sustainability is an important starting point for all three artists, and what their works underscore is the need to bring into discussion the materials and processes of contemporary photographic art. Not only is it important to discuss art's materials from an analytical point of view to extend the interpretative horizons of the works, but it is also crucial to start to pay attention to the sustainability of photographic materials. The material a photographic image is viewable through can no longer be thought of as only a base or support for a photographic image, but it must be regarded as an integral part of the meaning-making of the works, including the accompanying ethical questions. Finding a way to work with the problems of material, or as Haraway so eloquently puts it, "staying with the trouble," is both a methodological and a philosophical question and challenge.
Disclosure statement
No potential conflict of interest was reported by the author(s).
ABSTRACT
The concept of nonhuman in relation to photography has recently been mostly theorized through technology, while organic nonhuman agents and processes at work in photography have received less attention. In this article, works by three contemporary Finnish artists who incorporate organic materials and processes into photography are analyzed to renegotiate the borders between photography, bioart, and science. This leads to a rethinking of the dichotomy between the concepts of technological and organic in the context of contemporary photographic art. Combining new materialist and posthumanist theories with photography history and theory allows for a multidisciplinary method that recognizes the worth of both human and nonhuman agents and processes at work. The artworks are analyzed as temporal and processual, taking into account their performative qualities.
Jane Vuorinen
Art History, University of Turku, Turku, Finland. Email: jkkvuo@utu.fi
"year": 2023,
"sha1": "13e1a048b8a4cf034f1b679bd9b478fdbee2a95d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/00233609.2023.2194276",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8d03f72367204ccfa12507171234256863bad6a3",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
Psychometric Indicators of the "Students' Perceptions of Classroom Activities Questionnaire"
The present study aims to investigate the factorial validity and reliability of the "students' perceptions of classroom activities questionnaire" made by Gentry, Gable and Rizza (2002). To this end, the questionnaire was administered to 360 students (252 girls and 108 boys) selected through the stratified sampling method from among the students of the Faculty of Humanities at Islamic Azad University of Quchan. In order to assess the questionnaire's reliability, Cronbach's alpha coefficient was used, and to determine the factorial validity, exploratory and confirmatory factor analysis was applied. In line with the results obtained by Gentry, Gable and Rizza (2002) and Karshky et al. (2009, 2011), the present study showed that this questionnaire has acceptable internal consistency: Cronbach's alpha coefficient of the test is 0.901 and, for the subtests, it is between 0.662 and 0.91. Also, the results of exploratory and confirmatory factor analysis indicate that the questionnaire structure has acceptable fit to the data and all the goodness-of-fit indices confirm the model (AGFI = 0.85, GFI = 0.90, RMSEA = 0.056, RMR = 0.07, CFI = 0.91). So, this questionnaire can be a useful tool for assessing students' perceptions of classroom activities.
1. Introduction
Classroom and school environment, the school management system, and the teacher's educational approach have undeniable effects on the academic performance and cognitive processes of students. One of the major outcomes of the school environment is students' perceptions, which play an important role in their motivation, cognition and academic performance (Sunger, 2007).
Studies have shown that if the classroom environment is free from anxiety and stress, and a strong human and social relationship exists between the teacher and students and also among students, they will have a more favorable attitude towards learning. When a friendly atmosphere is created in the classroom, a strong and harmful sense of competitiveness disappears and a joyful environment is provided for students (Rohani & Maher, 2008).
The study of perceptions of the class environment is based on the premise that students' perception of the environment is connected with their background and personal characteristics. This, in turn, affects the way they think about their social world and also their approach to the environment (Patrick et al., 2007). Accordingly, students' perceptions of their learning environment have an impact on their participation in classroom activities and their relationships with peers.
The psychological atmosphere of the classroom learning environment, contextual and social characteristics, and teacher support have significant effects on students' learning behaviors, their goal orientation, self-efficacy, causal attributions, strategy application, academic and social motivation, emotional functioning, involvement with assignments, educational values and their academic achievement (Ames, 1992; Davis, 2002; Pintrich, 2003).
Using students' perceptions to assess the class environment has roots in Kurt Lewin's field theory (1936) and Murray's (1938) need-press theory. Lewin (1935, 1936) and Murray (1938) emphasized the importance of environmental features in human behavior. Their theories suggest that the interaction between environmental features (such as the perception of class) and personal characteristics is in fact predictive of human development and behavior (cited in Fraser, Dorman & Aldridge, 2004).
Following the theories of Lewin and Murray, various questionnaires on the perception of classroom activities were made and introduced. One of these questionnaires is the "students' perceptions of classroom activities questionnaire" made by Gentry, Gable and Rizza (2002). This questionnaire has been widely used to measure perceptions of classroom activities on a global scale; for example, its validity and reliability have been approved in studies conducted by Karshky et al. (2009, 2011), Church, Elliot and Gable (2001), Gentry, Gable and Rizza (2001, 2002), and Sünger and Güngören (2009), as well as in most of the cited studies.
According to Wang and Holcombe (2010), several studies have demonstrated that educational, organizational and social climates of learning environments affect processes such as cognitive conflict, self-regulation and academic achievement. Their research findings showed that perceptions of the school environment directly and indirectly influence academic achievement through school involvement. Church, Elliot and Gable (2001) concluded that environmental perceptions have an impact on the adoption of goal orientation and, finally, motivation and performance. Karshky et al. (2009, 2011) demonstrated that self-regulated learning is predictable through components of perceptions of the classroom, and that all components of family and environmental perceptions, motivational beliefs and self-regulated learning are mutually correlated. In the research done by Hejazi et al. (2008), it was found that class structures, directly and through goal orientation and self-efficacy, have an influence on math self-regulation.
Thus, the perception of classroom activities is considered a key variable in education and other relevant fields. In order to strengthen and improve students' learning and teaching environments, studying students' perception of their learning environment and the factors influencing this perception is crucial both for teachers and educational researchers. Identifying and providing appropriate tools to measure perceptions of classroom activities, and demonstrating their importance in various fields of study, should therefore be noted. Accordingly, the aim of this study is to introduce the "students' perceptions of classroom activities questionnaire" along with reporting the results of its validity and reliability among the students of the Faculty of Humanities at Islamic Azad University of Quchan.
2. Research Method
The aim of this study is to determine the psychometric properties of the "students' perceptions of classroom activities questionnaire" (SPCA-Q). Thus, the present study is descriptive, employing exploratory and confirmatory factor analysis.
3. Statistical Population, Sample and Sampling Method
The statistical population of the research consisted of all the students in the second semester of the academic year 2013-2014 in the Faculty of Humanities at Islamic Azad University of Quchan. Given the number of questions and based on Cochran's formula, a sample size of 360 students (252 girls and 108 boys) was formed.
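The study invokes Cochran's formula without reporting the values used; as a hedged illustration only, the standard form of the formula, with finite population correction, is
$$n_0 = \frac{z^2\, p(1-p)}{e^2}, \qquad n = \frac{n_0}{1 + (n_0 - 1)/N},$$
where $z$ is the normal quantile for the chosen confidence level, $e$ the margin of error, $p$ the assumed proportion (often 0.5 for maximum variability), and $N$ the population size. The specific parameter choices behind the reported sample of 360 are not stated in the study.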
The stratified sampling method was used to select the sample. Considering the number and proportion of classes (fields), 360 students with the following proportions were selected from among the students of different fields in the Faculty of Humanities: clinical psychology 55%, counseling 65%, family counseling 8%, academic advising 11%, rehabilitation counseling 14% and career counseling 7%.
For a detailed examination of students' perceptions of classroom activities, the statistics class was considered, and an explanation was added to the questionnaire guide stating that the researcher's focus was the statistics class. After implementing and collecting the questionnaires, 8 incomplete questionnaires were eliminated. Finally, 352 questionnaires were scored and analyzed.
4. Research Tools
The "students' perceptions of classroom activities questionnaire" (SPCA-Q) has been made by Gentry, Gable and Rizza (2002) to measure students' perceptions of classroom activities.This scale has 31 questions and 4 subtests as follows: Interest (8 questions), challenge (9 questions), choice (7 questions) and enjoyment (7 questions).Each question has five options (never, rarely, sometimes, often and always).The respondents should select the option which is closer to their opinion.Validation and factor analysis of the questionnaire were carried out by Gentry, Gable andRizza (2001, 2002) and Karshky, et al. (2009Karshky, et al. ( , 2011)).
In this study, the original questionnaire was first translated into Persian. Then, three professors of psychology and English language reviewed the translated text and agreed on one translation. Finally, members of the scientific board of the faculty (professors of psychology) confirmed the questionnaire's content validity. The explanation that the researcher's focus was the statistics class was also added to the questionnaire guide. In addition, before the main implementation, the questionnaire was preliminarily conducted on a group of students; in cases of ambiguity, and for sentences which were not easy for students to understand, changes were made.
In the present study, exploratory and confirmatory factor analysis was applied for validation, and Cronbach's alpha coefficient was used in order to assess the questionnaire's reliability. Data analysis was performed using SPSS-21 and Amos-21 software.
Reliability
The reliability of the "students' perceptions of classroom activities questionnaire" was measured by Cronbach's alpha. The results of calculating the reliability coefficients are shown in Table 1. The reliability coefficients show that this questionnaire has sufficient and acceptable reliability, and the coefficients obtained are analogous to the results achieved by Gentry, Gable and Rizza (2002) and Karshky et al. (2009, 2011).
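As a hedged illustration (not part of the original analysis, which used SPSS-21), Cronbach's alpha can be computed directly from its definition for a respondents-by-items score matrix. The data below are simulated stand-ins, not the study's data.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a (respondents x items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical example: 352 respondents, 31 five-point Likert items
    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(352, 31))
    print(cronbach_alpha(scores))  # near 0 for random data; 0.901 was reported here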
Exploratory factor analysis
Before conducting exploratory factor analysis, two assumptions must be checked in order to ensure the suitability of the sample. The Kaiser-Meyer-Olkin measure of sampling adequacy is equal to 0.904, and Bartlett's test of sphericity (4789.47) is significant at the level of 0.0001. This means that the sample and correlation matrix are suitable for factor analysis. Principal component analysis with varimax rotation was performed on the 31 questions of the "students' perceptions of classroom activities questionnaire". According to the scree plot and eigenvalues, four factors were extracted by principal component analysis and varimax rotation. The results of the exploratory factor analysis are presented in Table 3. Also, correlations between the subtests are shown in Table 2; all the correlations are statistically significant. Table 4 shows the factor loadings for the rotated factors. As can be seen, four factors have been extracted.
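The KMO and Bartlett values above were produced by SPSS; as a hedged sketch, Bartlett's test of sphericity can be recomputed from its textbook formula (KMO is available in third-party packages such as factor_analyzer, not shown here). The response matrix below is simulated, not the study's data.

    import numpy as np
    from scipy import stats

    def bartlett_sphericity(data):
        """Bartlett's test: H0 says the correlation matrix is the identity."""
        n, p = data.shape
        R = np.corrcoef(data, rowvar=False)
        chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
        df = p * (p - 1) // 2
        return chi2, df, stats.chi2.sf(chi2, df)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(352, 31))  # hypothetical 352 x 31 item responses
    print(bartlett_sphericity(X))   # the study reports chi2 = 4789.47, p < 0.0001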
A factor loading is the correlation coefficient between the factor and the question. A positive sign indicates a direct relationship between the factor and the question, and its value shows the intensity of the relationship (the priority of the question for that factor). Based on the results in Table 4, all questions in each subtest (enjoyment, interest, choice and challenge) were loaded on their respective dimensions. Only question number 7 in the challenge subtest shows a relatively low factor loading (0.280); the other coefficients have desirable values. Comrey and Lee (1992, cited in Sharifi et al., 2012) have described coefficients of 0.70 as excellent, 0.63 as very good, 0.55 as good, 0.45 as medium and 0.32 as insignificant. According to this criterion, most of the coefficients are in the good to excellent category (0.395 to 0.788). Therefore, the "students' perceptions of classroom activities questionnaire" with 31 questions is suitable for confirmatory factor analysis.
Confirmatory factor analysis
In order to confirm the factor structure of the "students' perceptions of classroom activities questionnaire", confirmatory factor analysis was performed using Amos-21 software. To carry out the analysis, maximum likelihood estimation was used, and the following indices were considered as the compliance criteria of the model with the observed data: the chi-square index (χ²), root mean square error of approximation (RMSEA), goodness of fit index (GFI), adjusted goodness of fit index (AGFI), normed fit index (NFI), non-normed fit index (NNFI), incremental fit index (IFI), and comparative fit index (CFI). The values of the indices are listed in Table 5. The results showed that the assumed model is satisfactorily fitted, since χ² divided by df should not be more than 3 and the root mean square error of approximation (RMSEA) should be less than 0.1 (a value of 0.05 or less is good and 0.08 is suitable). Also, the values of the fit indices GFI, AGFI, RMR and CFI are in a range of zero to one, and the closer the value is to 1, the better the fit of the model. Based on these data, it is concluded that the assumed model is satisfactorily fitted; that is, the underlying factor structure of the questionnaire on perceptions of classroom activities is confirmed. The four-factor structure of the questionnaire on perceptions of classroom activities is presented in Figure 1.
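The indices above come from Amos-21. As a hedged illustration of how two of them relate to the chi-square statistic, the helper below recomputes the chi-square/df ratio and RMSEA from summary values; the inputs shown are placeholders, since the paper does not report its raw chi-square and degrees of freedom.

    import math

    def chi2_df_ratio(chi2, df):
        return chi2 / df

    def rmsea(chi2, df, n):
        """RMSEA from the chi-square statistic, df, and sample size n."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    # Placeholder inputs: chi-square of 840 on 400 df, n = 352 respondents
    print(chi2_df_ratio(840.0, 400))  # should not exceed 3 for acceptable fit
    print(rmsea(840.0, 400, 352))     # 0.05 or less good, up to 0.08 suitable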
Table 1: Reliability coefficients of the present study, Gentry et al., and Karshky et al.
Table 2: Correlations between the subtests in the questionnaire on perceptions of classroom activities
Table 3: Statistical indicators of four factors in the questionnaire after varimax rotation through principal component analysis. These four factors account for 51.37% of the variance in perceptions of classroom activities. Strictly speaking, as shown in Table 3, this tool has four significant factors with eigenvalues greater than 1. The first factor has the highest eigenvalue (4.832) and explains 16.111% of the variance. The fourth factor has the lowest eigenvalue (3.013) and explains 10.043% of the variance.
Table 4: Factor matrix of the "students' perceptions of classroom activities questionnaire" after varimax rotation
Table 5: Indices of model fit
"year": 2016,
"sha1": "2849f24f646a4e63cba8edc0451ade1b12a83d16",
"oa_license": "CCBY",
"oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/9006/8698",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2849f24f646a4e63cba8edc0451ade1b12a83d16",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Calabi-Yau threefolds in $\mathbb{P}^n$ and Gorenstein rings
A projectively normal Calabi-Yau threefold $X \subseteq \mathbb{P}^n$ has an ideal $I_X$ which is arithmetically Gorenstein, of Castelnuovo-Mumford regularity four. Such ideals have been intensively studied when $I_X$ is a complete intersection, as well as in the case where $X$ is codimension three. In the latter case, the Buchsbaum-Eisenbud theorem shows that $I_X$ is given by the Pfaffians of a skew-symmetric matrix. A number of recent papers study the situation when $I_X$ has codimension four. We prove there are 16 possible betti tables for an arithmetically Gorenstein ideal $I$ with $\mathrm{codim}(I)=4=\mathrm{reg}(I)$, and that exactly 8 of these occur for smooth irreducible nondegenerate threefolds. We investigate the situation in codimension five or more, obtaining examples of $X$ with $h^{p,q}(X)$ not among those appearing for $I_X$ of lower codimension or as complete intersections in toric Fano varieties. A key tool in our approach is the use of inverse systems to identify possible betti tables for $X$.
Introduction
In their 1985 paper [6], Candelas-Horowitz-Strominger-Witten showed that Calabi-Yau threefolds play a central role in string theory. This was further developed in works by Candelas-Lynker-Schimmrigk [8] and Candelas-de la Ossa-Green-Parkes [7]; the book of Cox-Katz [10] gives a comprehensive overview of the field. From the perspective of physics, the case $n = 3$ is of paramount interest, and a first example of a CY threefold is a quintic hypersurface in $\mathbb{P}^4$. Generalizing the hypersurface case, when $X$ is a complete intersection (CI) of type $\{d_1, \ldots, d_{n-3}\} \subseteq \mathbb{P}^n$ we have
$$K_X = \mathcal{O}_X \iff \sum_{i=1}^{n-3} d_i = n + 1.$$
So a complete intersection Calabi-Yau (CICY) threefold in $\mathbb{P}^n$ must have $\{d_1, \ldots, d_{n-3}\}$ satisfying this condition: $\{5\}$ in $\mathbb{P}^4$, $\{2,4\}$ or $\{3,3\}$ in $\mathbb{P}^5$, $\{2,2,3\}$ in $\mathbb{P}^6$, and $\{2,2,2,2\}$ in $\mathbb{P}^7$. Green-Hübsch-Lütken characterize CICYs $X \subset \prod_{i=1}^{m} \mathbb{P}^{n_i}$ in [17]; when $m = 1$, $h^{1,1}(X) = 1$ and $h^{1,2}(X) \in \{65, 73, 89, 101\}$. Projective space is the simplest complete toric variety [11], and in [1], Batyrev shows how to obtain CYs as hypersurfaces in toric varieties corresponding to reflexive polytopes. Much activity over the last three decades has been devoted to this situation. A complete intersection is the first avatar of a Gorenstein ring; a Gorenstein ideal of codimension two is a complete intersection, and Buchsbaum-Eisenbud [5] show that a codimension three Gorenstein ideal is generated by the Pfaffians of a skew-symmetric matrix. From the CY perspective, this is investigated in [28], [31] and subsequent papers. The codimension four case was first studied systematically by Bertin in [2]; in [9] Coughlan-Golebiowski-Kapustka-Kapustka list 11 Gorenstein Calabi-Yau (GoCY) threefolds in $\mathbb{P}^7$ and ask if the list is complete. For other recent work on GoCYs, see [3], [4], and [27].

1.1. Preliminaries. For algebraic background, we refer to [12]. The first observation to make is that if $S = K[x_0, \ldots, x_n]$ and $I$ is a nondegenerate (i.e. containing no linear form) homogeneous ideal in $S$ such that $R = S/I$ is arithmetically Cohen-Macaulay, then the canonical module $\omega_R$ is isomorphic to a shift $R(a)$ exactly when $R$ is arithmetically Gorenstein (henceforth Gorenstein). In general, $\omega_R \simeq R(-n - 1 + \mathrm{regularity}(R) + \mathrm{codim}(R))$, so we have

Lemma 1.2. If $X = \mathrm{Proj}(R)$ is an arithmetically Cohen-Macaulay threefold, then
$$K_X = \mathcal{O}_X \iff -n - 1 + n - 3 + \mathrm{regularity}(R) = 0 \iff \mathrm{regularity}(R) = 4. \quad (1.1)$$

For $R$ Gorenstein, we may quotient by a regular sequence of linear forms, reducing to an Artinian Gorenstein ring with the same homological behavior, described below. Any Artinian Gorenstein ring arises ([12], §21) via Macaulay's inverse system construction: let $F$ be a homogeneous polynomial in $S$ of degree $d$ over a field $K$ of characteristic zero. The set of differential operators $P(\partial/\partial x_0, \ldots, \partial/\partial x_n)$ which annihilate $F$ generates an ideal $I_F$ in $T = K[\partial/\partial x_0, \ldots, \partial/\partial x_n]$, called the inverse system $I_F \subseteq T$. The corresponding ring $T/I_F$ is an Artinian Gorenstein ring of regularity $d$. The graded betti numbers of $R = S/I$ are $b_{i,j} = \dim_K \mathrm{Tor}_i^S(R, K)_j$. In betti table notation [13], these numbers are displayed as an array with top left entry in position $(0, 0)$ and position $(i, j)$ equal to $b_{i,i+j}$. The reason for this indexing is so that the regularity is given by the index of the bottom row of the betti table.
Example 1.4. For a GoCY threefold $X \subseteq \mathbb{P}^6$ given by the Pfaffians of a skew-symmetric $7 \times 7$ matrix $M$ of generic linear forms, Rødland [28] shows that $h^{1,1}(X) = 1$ and $h^{1,2}(X) = 50$. By [5] and [15], for a generic quartic $F$ in $K[x_0, x_1, x_2]$, $I_F$ is given by the Pfaffians of a skew $7 \times 7$ matrix $M$ of linear forms, with the betti table below. So for this example, the betti table of $I_F$ can be realized by a GoCY.
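A minimal Macaulay2 sketch of this example (ours, not reproduced from the paper); it assumes the InverseSystems package is available, and uses a random quartic as a stand-in for a generic one.

    needsPackage "InverseSystems";
    S = QQ[x_0..x_2];
    F = random(4, S);      -- a random quartic form in three variables
    I = inverseSystem F;   -- the ideal of operators annihilating F
    minimalBetti I         -- expect the 7x7 skew Pfaffian betti table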
For $S/I$ Artin Gorenstein of regularity 4, the Hilbert function of $S/I$ is
$$(1, h_1, h_2, h_1, 1). \quad (1.2)$$
Migliore-Zanello [26] show that Stanley's example of a Gorenstein ring with non-unimodal $H$-vector $(1, 13, 12, 13, 1)$ is minimal, so for $n \le 11$, $n + 1 \le h_2$. If $I$ can be lifted to a prime ideal in four more variables (Example 2.1 shows this can occur), the corresponding threefold $X$ will have degree $2 + 2h_1 + h_2$, the sum of the entries of the $H$-vector.

Proof. Apply Equations 1.1 and 1.2 and the result of [26] to the Artinian reduction of $S/I_X$.
Example 2.1. The inverse system of a generic quartic in four variables yields an ideal with the betti table labelled CGKK 11 below. For an $n \times n$ matrix $M$ with $M_{i,j} = x_{ij}$, Gulliksen-Negård [18] determine the resolution of the ideal $I_{n-1}$ of $(n-1) \times (n-1)$ minors: it is Gorenstein of codimension four, and has regularity $2n - 4$. Hence if $n = 4$, this yields a Gorenstein codimension four ideal in $\mathbb{P}^{15}$. Quotienting with a regular sequence of eight linear forms yields a smooth GoCY threefold in $\mathbb{P}^7$, with betti diagram equal to that of CGKK 11. The Hodge numbers are $h^{1,1} = 2$ and $h^{1,2} = 34$; this example was first identified by Bertin in [1].
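A hedged Macaulay2 check of this example (our stand-in computation, not taken from the paper): build the generic 4 x 4 matrix and resolve the ideal of its 3 x 3 minors.

    S = QQ[x_(1,1)..x_(4,4)];
    M = genericMatrix(S, 4, 4);  -- 4x4 matrix of indeterminates
    I = minors(3, M);            -- Gulliksen-Negard ideal: codim 4, regularity 4
    minimalBetti I               -- should match the CGKK 11 betti diagram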
Theorem 2.2. An Artin Gorenstein algebra $A = S/I$ with $\mathrm{regularity}(A) = 4 = \mathrm{codim}(A)$ and $I$ nondegenerate has one of the 16 betti diagrams below. Table 1 below corresponds to the 11 classes of GoCY in [9], and Table 2 to the remaining classes. We defer the proof until the end of this section.

Theorem 2.3. No betti diagram in Table 2 occurs for a smooth, irreducible, nondegenerate threefold in $\mathbb{P}^7$.
Proof. In §3.4 we apply results of [33] to prove a structure theorem for any irreducible nondegenerate threefold in $\mathbb{P}^7$ with betti diagram of Type 2.4, and show the resulting variety cannot be smooth. For the other betti tables, we apply a result of [30]. A matrix of linear forms is 1-generic if no entry can be reduced to zero by (scalar) row or column operations; a linear $n$-th syzygy is an element of $\mathrm{Tor}_{n+1}(S/I, K)_{n+2}$. For a nondegenerate prime ideal $P$, Theorem 1.7 of [30] shows:
(1) $P$ cannot have a linear $n$-th syzygy of rank $\le n + 1$; otherwise $P$ is not prime.
(2) If $P$ has a linear $n$-th syzygy of rank $n + 2$, then $P$ contains the $2 \times 2$ minors of a 1-generic $2 \times (n + 2)$ matrix.
(3) If $P$ has a linear $n$-th syzygy of rank $n + 3$, then $P$ contains the $4 \times 4$ Pfaffians of a skew-symmetric 1-generic $(n + 4) \times (n + 4)$ matrix.
A betti table of Type 2.1 is ruled out by (1), and a betti table of Type 2.2 is ruled out by (2), since the $2 \times 2$ minors of a $2 \times 3$ matrix have two independent linear syzygies. For the three betti tables having top row of the form $(c, c, 1)$, we argue as follows. When $c = 3$, the linear second syzygy can have rank at most 3, since it involves the 3 first syzygies; hence by (1), the ideal cannot be prime. When $c = 4$, the linear second syzygy can have rank at most 4, and in this case by (2) it contains the $2 \times 2$ minors of a 1-generic $2 \times 4$ matrix, which would yield a top row of the betti table with entries $(6, 8, 3)$. When $c = 5$, (3) implies that $P$ contains the Pfaffians, and since there are only five quadrics, the quadratic part of the ideal is exactly the Pfaffians, which do not have a linear second syzygy. For Type 2.3, we will show that a prime nondegenerate ideal $P$ cannot have top row of the betti table equal to $(4, 3, 0)$. Let $J_2$ be the subideal of $P$ generated by the quadrics in $P$. By (1) and (3) the first syzygies all have rank three; take a subideal $I \subseteq J_2$ consisting of three elements, which by (2) is generated by the $2 \times 2$ minors of a $2 \times 3$ matrix, and let $F$ denote the remaining quadric, so $J_2 = I + \langle F \rangle$. Consider the mapping cone resolution of $S/J_2$ arising from the short exact sequence
$$0 \longrightarrow (S/(I : F))(-2) \longrightarrow S/I \longrightarrow S/J_2 \longrightarrow 0.$$
It follows that $I : F$ must have a linear generator $L$, so $LF \in I$. If $I$ is prime, then either $L \in I$ or $F \in I$, a contradiction. So suppose $I$ is not prime, and take a primary decomposition $I = \bigcap_{i=1}^{m} Q_i$ with $\sqrt{Q_i} = P_i$. Since $I$ is codimension two and Cohen-Macaulay and $\deg(I) = 3$, we must have $m \le 3$.
(1) Case 1: $m = 3$. Then $Q_i = P_i$ and $I = \bigcap_{i=1}^{3} P_i$ with each $P_i$ generated by two linear forms.
If $\deg(P_1) = 3$, then $I$ is prime, and if $\deg(P_1) = 1$ or 2, then $P_1$ contains a linear form. In particular, we see that $P$ is degenerate. For Type 2.8, both second syzygies must have rank six, because if either had lower rank, then we would be in one of the cases (1), (2), (3), all of which are inconsistent with a betti table having top row $(5, 6, 2)$. Let $M$ denote the corresponding $6 \times 2$ matrix of linear second syzygies; $M$ is 1-generic: if not, there is a second syzygy of rank $\le 5$, a contradiction. We claim that $M^t$ has no linear first syzygies (notice that the kernel of $M^t$ will contain any linear first syzygies on $J_2$, the subideal of $P$ generated by the quadrics in $P$). This follows because $\mathrm{coker}(M^t)$ has a Buchsbaum-Rim resolution ([12], Theorem A2.10). The Buchsbaum-Rim complex is a resolution for $\mathrm{coker}(M^t)$ iff the $2 \times 2$ minors of $M$ have depth $6 - 2 + 1 = 5$; since $M$ is 1-generic, the $2 \times 2$ minors are Cohen-Macaulay with an Eagon-Northcott resolution, and in particular $\mathrm{depth}(I_2(M)) = 5$. As the first syzygies in the Buchsbaum-Rim complex for $M^t$ come from $\Lambda^3(S^6)$ via the splice map described in [12], they are quadratic. We conclude there are no linear syzygies on $\mathrm{coker}(M^t)$, hence no linear first syzygies on $J_2$, a contradiction.
For the proof of Theorem 2.2, we will need the theorems of Macaulay and Gotzmann [29]: for a graded algebra $S/I$ with Hilbert function $h_i$, write the Macaulay representation
$$h_i = \binom{a_i}{i} + \binom{a_{i-1}}{i-1} + \cdots + \binom{a_j}{j}, \qquad a_i > a_{i-1} > \cdots > a_j \ge j \ge 1,$$
and set $h_i^{\langle i \rangle} = \binom{a_i + 1}{i + 1} + \binom{a_{i-1} + 1}{i} + \cdots + \binom{a_j + 1}{j + 1}$. Macaulay proved that $h_{i+1} \le h_i^{\langle i \rangle}$, and Gotzmann proved that if $I$ is generated in a single degree $t$ and equality holds in Macaulay's formula in the first degree $t$, then equality persists in all subsequent degrees. We also need the following lemma.

Proof. To see that $v = (2, 1)$ cannot occur, observe that if it did then there would be a unique relation $L_1 \cdot V_1 + L_2 \cdot V_2 = 0$, where $L_1, L_2$ are linear forms and the $V_i$ are vectors of linear first syzygies. Changing variables so that $L_1 = x_1$ and $L_2 = x_2$ then leads to a contradiction. To prove part (b), the key point is that $v = (3, 1)$ implies that $I_2$ contains $\{L x_1, L x_2, L x_3\}$ with $L$ a linear form. When $a \ge 4$ the mapping cone construction implies $I_2$ is inconsistent with the Gorenstein hypothesis (IGH). If $v = (3, 1)$ then the unique linear second syzygy $S$ must have rank 3, otherwise the argument showing that $v = (2, 1)$ is impossible applies. After a change of variables, we may write $S$ in a normal form whose entries are linear forms $a_i, b_i, c_i$; the rows of the matrix of linear first syzygies on $I_2$ are then Koszul syzygies on $[x_1, x_2, x_3]^t$. If $a \ge 4$, $I_2$ must contain a quadric $Q$ which is a nonzero divisor on $\{L x_1, L x_2, L x_3\}$. To see this, note that if $Q \in \langle L \rangle$ then $\mathrm{codim}(I_2) = 1$. After a change of variables $I_2$ consists of a linear form times a subset of the variables, so that $I_2$ has a Koszul resolution, hence $b_{45}(R/I_2) \ne 0$. This is IGH, because $\mathrm{Tor}_4(R/I_2, K)_6 \ne 0$, and adding additional generators to $I_2$ cannot force cancellation: for a cubic $F$, we have the short exact sequence
$$0 \longrightarrow (R/(I_2 : F))(-3) \longrightarrow R/I_2 \longrightarrow R/(I_2 + F) \longrightarrow 0,$$
and the associated long exact sequence gives the exact sequence of vector spaces
$$0 \to \mathrm{Tor}_4(R(-3)/(I_2 : F), K)_6 \to \mathrm{Tor}_4(R/I_2, K)_6 \to \mathrm{Tor}_4(R/(I_2 + F), K)_6.$$
A direct computation shows that for an ideal generated by three quadratic monomials in $R$, $v \in \{(0, 0), (1, 0), (2, 0), (3, 1)\}$, all of which occur in Tables 1 and 2. By upper semicontinuity, $v = (2, 1)$ is impossible. When $a \ge 4$, the set of betti tables possible for quadratic monomial ideals has an element that is so large that a similar analysis via the initial ideal becomes cumbersome.
(3) $a = 5$: The Hilbert function is $(1, 4, 5, 4, 1)$, so the betti table is determined up to two entries $b$ and $c$. Note that $h_3 = 20 - 5 \cdot 4 + b$, so $h_3 = b$. By Macaulay's theorem, we conclude $c \le 4$. We can immediately rule out $c = 0$, as then $I$ would be an ACI, which is IGH. The possibilities $c \in \{2, 3, 4\}$ are also ruled out by Macaulay; we illustrate for $c = 2$: $h_4 = 35 - 5 \cdot 10 + 4 \cdot 4 + 2 = 3$, so $h_4^{\langle 4 \rangle} = 3 \ge h_5 = 4 + b_{25}(I_2)$, which would force $b_{25}(I_2) \le -1$. Finally, suppose $c = 1$, so $I = I_2 + q$ for a single quartic $q$. Since $I_2 + q$ has codimension four, the codimension of $I_2$ must be three or four, and if $\mathrm{codim}(I_2) = 4$ then $I_2$ contains a complete intersection $C$. We claim this is impossible: write $I_2 = C + f$ with $f \in I_2 \setminus C$. Since $b_{23}(C) = 0$, the fact that $b_{23}(I_2) = 4$ means that $C : f = \langle x_1, x_2, x_3, x_4 \rangle$, whose mapping cone is inconsistent with the betti table for $I_2$. Hence $\mathrm{codim}(I_2) = 3$, and $q$ is a nonzero divisor on the codimension three associated primes of $I_2$. Since $h_4(I_2) = 2$, Macaulay's theorem implies the degree of $I_2$ is one or two. Observe that the rank of the linear second syzygy $S$ cannot be 4; if it were, then $S = [x_1, x_2, x_3, x_4]^t$. By the symmetry of the differentials in the free resolution, this means that $I_2 : q = \langle x_1, \ldots, x_4 \rangle$. By additivity of Hilbert polynomials on the short exact sequence
$$0 \longrightarrow (R/(I_2 : q))(-4) \longrightarrow R/I_2 \longrightarrow R/I \longrightarrow 0,$$
this is impossible. Hence $\mathrm{rank}(S) = 3$, and as in the proof that $v = (3, 1)$ is impossible for $a = 4$, $I_2$ must contain, after a change of variables, $\{L x_1, L x_2, L x_3\}$ for a linear form $L$. Since $\mathrm{codim}(I_2) = 3$, this forces $L, q_4, q_5$ to be a regular sequence. In particular, $\deg(I_2) = 4$, a contradiction.
This rules out $c \in \{3, 4\}$, and shows that if $c = 2$ then $b_{25}(C_3) = 0$. So in this case $h_5(C_3) = 3$, and hence in the $5 \times 5$ submatrix $M$ of $d_3$ representing the "bottom right corner" of the table for $I$, two of the five columns of $M$ are zero, which by symmetry of the betti table means that two of the five rows of the matrix $M^t$ of linear first syzygies on $I_2$ are zero. Hence the five linear first syzygies on $I_2$ only involve a subideal $J \subseteq I_2$ generated by 3 quadrics, which is impossible. Case 3: suppose $b = 6$; the only case that actually occurs is $v = (6, 2)$.
So $c \ge b_{24}(I_2) + 2$. Let $C_3$ denote the subideal of $I$ generated in degrees two and three. We have shown that when $b = 6$, the only value possible for $v$ is $(6, 2)$.
(4) $a = 6$: The Hilbert function is $(1, 4, 4, 4, 1)$; Macaulay's theorem then pins down the remaining entries of the betti table. Hence there are 16 betti tables for an Artin Gorenstein algebra $A$ with $\mathrm{regularity}(A) = 4 = \mathrm{codim}(A)$. All diagrams in Table 1 and Table 2 do occur, which can be checked via a Macaulay2 search.
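As an illustrative check of Macaulay's bound used repeatedly above (our example, not the paper's): for $h_2 = 5$, as in the Hilbert function $(1, 4, 5, 4, 1)$, the Macaulay representation and growth bound are
$$5 = \binom{3}{2} + \binom{2}{1}, \qquad h_3 \le 5^{\langle 2 \rangle} = \binom{4}{3} + \binom{3}{2} = 7,$$
and indeed $h_3 = 4 \le 7$ there.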
Gorenstein deviation two
The deviation of an ideal $I$ is the number of generators of $I$ minus the codimension of $I$. Complete intersections are the simplest Gorenstein rings, and have deviation zero; in [23], Kunz shows a Gorenstein ring cannot have deviation one. In this section, we study Gorenstein rings of deviation two. This is similar to the codimension three case, where the classification of [5] shows that such ideals come from Pfaffians. In [20], Huneke-Ulrich give a construction for Gorenstein rings of deviation two: let $Y$ be a $2n \times 2n$ skew-symmetric matrix of variables, and $X$ a $1 \times 2n$ vector of variables. Then the ideal generated by the quadrics in $Y \cdot X$ plus the Pfaffian of $Y$ is Gorenstein of deviation two. Such an ideal has regularity four iff $n = 3$, and we analyze this case in §3.2. The corresponding GoCY threefold $X \subseteq \mathbb{P}^8$ has Hodge numbers different from the $h^{p,q}(X)$ for any $X \subseteq \mathbb{P}^n$ with $n \le 7$.
By the Buchsbaum-Eisenbud theorem, the Pfaffians of a skew $5 \times 5$ matrix $M$ also have deviation two, and quotienting such an ideal by a regular sequence preserves the deviation two property. If $M$ is a matrix of linear forms, then in order to have regularity four, the regular sequence must consist of two quadrics or a single cubic; if $M$ has linear and quadratic entries, the regular sequence is a single quadric. We analyze these ideals in §3.3.
On the other hand, if we quotient the Pfaffians of $M$ by a generic cubic, this yields an ideal $I$ with betti table CGKK 2. Quotienting with two generic linear forms yields a smooth GoCY $X \subseteq \mathbb{P}^7$ of degree 15, discovered in [32], which has $h^{1,1}(X) = 1$ and $h^{1,2}(X) = 76$. A key tool in our analysis is a result of Vasconcelos-Villarreal [33], which shows that if $R$ is a Gorenstein local ring in which 2 is a unit, and $I$ is a Gorenstein ideal of codimension 4 and deviation two which is a generic complete intersection (the localization at all minimal primes is a complete intersection), then $I$ is a hypersurface section of a Gorenstein ideal of height 3. Table 1 and Table 2 contain two examples of Gorenstein ideals of codimension four and deviation two: CGKK 2, and Type 2.4; the first can be obtained by quotienting the Pfaffians of a skew matrix of linear forms by a cubic. We start with several preparatory lemmas. Note that a betti diagram of Type 2.4 cannot arise as the mapping cone of a cubic, so it must arise from quotienting the Pfaffians by a quadric.
Lemma 3.2.
There is a prime subideal $J \subseteq I_2$ generated by three quadrics, such that $J$ consists of the $2 \times 2$ minors of a 1-generic $2 \times 3$ matrix $M$, and the quadric $q_4 \in I_2 \setminus J$ is a nonzero divisor on $R/J$.
Proof. By Theorem 1.7 of [30], a linear first syzygy on $I_2$ of rank four would imply that $I_2$ contains the Pfaffians of a $5 \times 5$ skew matrix of linear forms, while if there were a linear first syzygy on $I_2$ of rank two, $I$ would not be prime. So Theorem 1.7 implies that $I_2$ contains a subideal $J$ of $2 \times 2$ minors of a 1-generic $2 \times 3$ matrix of linear forms. The ideal $J$ must be prime, for if not, it would have a primary decomposition into components of degrees one or two, which would force $I$ to be degenerate. Finally, $q_4$ is regular on $J$, for if not, then $I_2$ would have codimension 2 and degree one or two; the two cubics in $I$ must be nonzero divisors on the codimension two primary component, because $\mathrm{codim}(I) = 4$. But this would imply that $\deg(I)$ is 9 or 18, contradicting the fact that $\deg(I) = 16$.

In what follows, we use the notation of Lemma 3.2, so $J$ is the ideal of $2 \times 2$ minors of the 1-generic matrix $M$. The entries of $M$ are linear forms; because $J$ is prime, these linear forms span a space of dimension 4, 5, or 6. This means $V(J)$ is a cone, with singular locus of dimension 3, 2, or 1, respectively. Let $C$ be the ideal generated by $q_4$ and the two cubic generators of $I$; intersecting $V(J)$ with $V(C)$ drops the dimension by two, so if the linear forms of $M$ span a space of dimension four or five, $V(I)$ is singular. It remains to deal with the case that the span of the linear forms has dimension six; after a change of variables we may assume that the entries of $M$ are the six variables $x_1, \ldots, x_6$.
Proof. Because the two linear first syzygies on $I_2$ are of the form $[x_1, x_2, x_3]^t$ and $[x_4, x_5, x_6]^t$ and $I$ is nondegenerate, $I$ contains no linear form, so $x_1, \ldots, x_6$ are all units when $R/I$ is localized at $I$. Thus, in the localization, two of the generators for $J$ are redundant, and therefore $I$ is a generic complete intersection of deviation two, so the result of [33] applies (we assume henceforth that 2 is a unit).
Lemma 3.4. Assume $Y$ is an arithmetically Gorenstein variety of codimension 3 and $X$ is a nondegenerate hypersurface section of $Y$, of some degree $d$, with betti diagram of Type 2.4. Then the betti diagram of $Y$ is uniquely determined.

Proof. The Hilbert series of $X$ forces $d \in \{1, 2\}$. But $X$ does not lie in any hyperplane. Therefore $d$ must be 2 and $Y$ has the desired betti table.
Proposition 3.5. Let $V(I)$ be a GoCY in $\mathbb{P}^7$ with betti diagram of Type 2.4. If the linear forms of the matrix $M$ span a space of dimension six, then up to a change of basis, $I$ is generated by the Pfaffians of a $5 \times 5$ skew-symmetric matrix $N$ as below, along with a quadric $q_4$ which is a nonzero divisor on $R/\mathrm{Pfaff}(N)$. The ideal $\mathrm{Pfaff}(N)$ is singular along a $\mathbb{P}^1$, and so $V(I)$ has at least two singular points.
where the $q_j$'s are quadrics.
Proof. Combining Lemmas 3.2, 3.3, and 3.4 and the results of [33] shows that $I$ is of the form above. To see that the singular locus is as claimed, we compute that the radical of $\mathrm{Pfaff}(N)$ contains $J$, where $J$ is the ideal of the minors of the matrix $M$ above Lemma 3.3. In particular, $V(\mathrm{Pfaff}(N))$ contains the line $\mathbb{P}^1 = V(x_1, \ldots, x_6)$, and $V(\mathrm{Pfaff}(N))$ is singular along this $\mathbb{P}^1$, because the remaining entries of the Jacobian matrix of $\mathrm{Pfaff}(N)$ are quadrics. Hence when $x_1, \ldots, x_6$ vanish, $\mathrm{Jac}(\mathrm{Pfaff}(N))$ has rank $\le 2$, so $V(\mathrm{Pfaff}(N))$ is singular along the $\mathbb{P}^1$. Intersecting $V(\mathrm{Pfaff}(N))$ with the hypersurface $V(q_4)$, we find that $V(I)$ must be singular (at least) at a degree two zero scheme.
Computational aspects and Ideals with mostly quadratic generators
As noted earlier, GoCYs were first investigated systematically from a computational standpoint by Bertin in [2]. Below we describe algorithms which, in certain situations, offer a substantial speed-up in processing; in some cases, we have seen an improvement in runtime by a factor of 500.
Let $\mathcal{I}$ be the ideal sheaf of a smooth Calabi-Yau threefold in $\mathbb{P}^n$. This implies that $H^1(\mathcal{O}_X) = H^2(\mathcal{O}_X) = 0$. From the fundamental short exact sequence
$$0 \longrightarrow \mathcal{I}/\mathcal{I}^2 \longrightarrow \Omega^1_{\mathbb{P}^n} \otimes \mathcal{O}_X \longrightarrow \Omega^1_X \longrightarrow 0 \quad (4.1)$$
we have that $\chi(\Omega^1_X) = \chi(\Omega^1_{\mathbb{P}^n} \otimes \mathcal{O}_X) - \chi(\mathcal{I}/\mathcal{I}^2)$. The Euler characteristic of both sheaves can be computed from the corresponding modules of global sections via Gröbner bases; it turns out we actually only need one computation. Tensoring the short exact sequence
$$0 \longrightarrow \Omega^1_{\mathbb{P}^n} \longrightarrow \mathcal{O}_{\mathbb{P}^n}^{n+1}(-1) \longrightarrow \mathcal{O}_{\mathbb{P}^n} \longrightarrow 0 \quad (4.2)$$
with $\mathcal{O}_X$, and using the cohomology vanishings for $H^1(\mathcal{O}_X)$ and $H^2(\mathcal{O}_X)$, yields that $\chi(\Omega^1_{\mathbb{P}^n} \otimes \mathcal{O}_X) = (n + 1) \cdot HP(S/I_X, -1)$.
Writing I/I 2 for both the graded S-module and the sheaf, by local duality (see [12]) If the projective dimension of the S-module I/I 2 is less than n−2, then the vanishing of the Ext modules above is automatic, and h 1,1 (X) = 1. If not, we can compute h 1,1 (X) from the formula. Projective dimension can be computed quickly using the Macaulay2 command minimalBetti, which we illustrate below. Since the Euler characteristic is 51, h 1,2 (X) = 52. | 2020-11-24T02:01:27.707Z | 2020-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "4bc724d54b3ddb9fd8b1e915be64f21b59b00aca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4bc724d54b3ddb9fd8b1e915be64f21b59b00aca",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The perceived competence of paramedics to operate in different CBRNE incidents
Purpose – The aim of this study was to identify the perceived competence of Finnish paramedics to operate in different chemical, biological, radiological, nuclear, and explosive (CBRNE) incidents.
Design/methodology/approach – This was a descriptive cross-sectional survey study. The material was collected using a previously developed questionnaire, which was modified in accordance with the study aim. The target group was paramedics of the Päijät-Häme region of Finland (N = 166), whose role entailed active operational duties during the survey. Descriptive statistical methods were used.
Findings – Paramedics reported low levels of training related to CBRNE incidents, and most felt that more training was needed. Chemical and explosive-related incidents were regarded as more likely to occur than others. Additionally, paramedics with more work experience perceived themselves as having higher competence only in chemical and explosive-related incidents. Overall, paramedics perceived their CBRNE competence as low.
Originality/value – The perceived CBRNE competence of paramedics has not been studied sufficiently. Paramedics felt chemical and explosive-related incidents were more likely to occur than others, and competence related to those two was also better perceived. This study showed that paramedics could benefit from more training to respond to CBRNE incidents to improve perceptions of their competence. However, the desired competence, actual competence, and appropriate training to respond to CBRNE incidents require further research.
Introduction
The role of Emergency Medical Services (EMS) is to offer prehospital emergency care around the clock to citizens. In major incidents or disasters, paramedics are often the first professionals on the front line (Horrocks et al., 2019; Beyramijam et al., 2020a, b). However, major incidents are significantly different from the daily work of paramedics. In addition to logistical challenges and possibly treating several patients simultaneously, staff must manage chaotic and stressful working conditions (Berben et al., 2021). The study region's central hospital is located in Lahti, the primary town of the region. The nearest university hospital is in Helsinki, approximately one hour away by ambulance. From September to December 2022, 166 paramedics worked in the region.
The Päijät-Häme region has 27 institutes where dangerous substances are stored. Six of these institutes are classified as being at risk of a major accident (Päijät-Häme Rescue Department, 2023). Dangerous substances are also transported by road and rail. Two major highways and one railway pass through the largest population center in Päijät-Häme. According to Strömmer (2019), 8.0 million tons of flammable liquids and 2.7 million tons of corrosive substances are transported by road. Accidents involving dangerous substances in rail traffic are rare, but there is such potential in Finland (Finnish Transport and Communication Agency, 2022). Up to 5.0 million tons of dangerous substances travel by rail annually (Strömmer, 2019). An accident related to a dangerous substance could result in a leak, explosion, or toxic gas cloud, which can travel up to 2 km, depending on the chemical and environmental properties. The nuclear power plant located in Loviisa, which is less than 40 kilometers from the Päijät-Häme border, is considered a risk site outside the area (Päijät-Häme Rescue Department, 2023).
All 21 Wellbeing Services Counties in Finland (Act on Wellbeing Services Counties, 2021) organize EMS in their own areas of responsibility (Decree of the Emergency Medical Services, 2017). There are 3,900 full-time EMS personnel working in Finland (Venesoja et al., 2021). In Finland, ambulances are typically staffed by two paramedics working together. These paramedics handle a range of prehospital emergency care missions dispatched by the emergency response center. When an ambulance is sent to the scene of an incident, paramedics evaluate the patient's care needs and decide either to transport the patient to the hospital or make a non-conveyance decision. Multiple ambulances are dispatched for larger incidents. In larger incidents, EMS may have to cooperate with the rescue department, police, border guard, and Finnish defense forces (Health Care Act, 2010; Act on the Defense Forces, 2007). EMS cooperates most with the rescue department in daily operations (Regional State Administrative Agency, 2022). The EMS field supervisor manages the daily operational activities of EMS. Paramedics can always consult an on-call physician by phone, but only the largest cities have physicians actively working at the scene. In addition, helicopter EMS (HEMS) physicians operate across the entire country from seven bases.
In Päijät-Häme, paramedics work at three different levels: basic, advanced, and critical care. In addition to these professional groups, there is an EMS field supervisor who manages the units. The Finnish paramedic training system is unique and is not directly comparable to the EMS systems of other countries. In a basic-level EMS unit, one paramedic must have health care professional training (a three-year vocational upper secondary qualification) specializing in prehospital emergency care; the other must be a health care professional or firefighter. In an advanced-level EMS unit, one paramedic must hold at least a bachelor's degree. This can be either a bachelor's degree in emergency care (240 ECTS, European Credit Transfer and Accumulation System) from a University of Applied Sciences or a registered nurse bachelor's degree (210 ECTS) with the completion of a 30 ECTS advanced-level prehospital specialization course. In a critical care-level EMS unit, both paramedics must have the same education as in an advanced-level unit, and in addition, they must have completed an individual one-year training course organized by the Päijät-Häme region (Service level agreement, 2017; Decree of the Emergency Medical Services, 2017).
The EMS field supervisor has at least the same educational background as advanced-level paramedics, with operative leadership training and sufficient work experience. The EMS field supervisors also contribute to multi-authority tasks and participate in the patient's treatment when needed (Decree of the Emergency Medical Services, 2017).
The questionnaire
The questionnaire used in this study was based on the validated New South Wales (NSW) ambulance survey of paramedics used in "Determinants of paramedic response readiness for CBRNE threats" by Stevens et al. (2010) from the University of Western Sydney, Australia. Permission to use parts of the questionnaire in this study was granted by the authors.
The original questionnaire (Stevens et al., 2010) was modified for this study (Appendix 1). The background questions were adapted for this study. We utilized work experience categories and age groups instead of specific years to ensure the anonymity of the respondents. We also omitted the original questions regarding gender, relationship status, and family size, as they were outside the scope of this study and the answers could make it easier to identify the respondents. Finally, the profession group, education level, work experience in prehospital emergency care, age group, and special training, experience, or responsibility related to CBRNE were included in this study as background questions.
In addition, the original Australian questionnaire (Stevens et al., 2010) was modified to reflect the local circumstances regarding the training and prehospital emergency care systems. There were no open-ended questions. Ten questions were included for each CBRNE threat (Appendix 1). The respondents were asked to evaluate the possibility of the threat and their perceived competence related to the CBRNEs. In contrast to the original questionnaire, each of the five sections had questions regarding an unintentionally caused event and an intentionally caused terrorist event occurring in Finland and the Päijät-Häme region. In addition, the terrorist bombing questions in the original questionnaire (Stevens et al., 2010) were modified to relate to an explosion incident, including the same ten questions as the CBRNEs. The original questions about the psychological burden/stress of CBRNE incidents were omitted as they were outside the scope of this study.
Data gathering
All paramedics in the Päijät-Häme region with operational duties were invited to participate in the study in the fall of 2022 by email. The cover letter explained the purpose of the study and included a link to the questionnaire on the Webropol platform. It also contained information about privacy and emphasized the voluntary nature of participation in the study. The data gathering was conducted between October 28 and December 31, 2022. The paramedics were reminded of the survey by email and in weekly morning meetings. Eighty-four completed questionnaires were returned, of which 83 were suitable for analysis. The response rate was 50%.
Statistical methods
From the background information, CBRNE areas of special responsibility and received CBRNE training were reported in two groups, less than 10 years of work experience and 10 or more years of work experience, and these were reported as frequencies and percentages.
The ten-item questions related to CBRNE incidents were answered using a five-point Likert scale, as in the original survey. The answer options are: 1: Not at all, 2: A little, 3: Moderately, 4: Very, 5: Extremely, and Don't Know (Stevens et al., 2010).
The summary scales were formed for the perceived threat of an accident based on three questions (How likely do you think it is that an accident related to x (x being C, B, R, N, E) will occur in Finland? How likely do you think it is that an accident related to x will occur in Päijät-Häme? How concerned do you feel about a possible accident related to x?), and for the perceived threat of a terrorist attack based on three questions (How likely do you think it is that a terrorist act related to x will occur in Finland? How likely do you think it is that a terrorist act related to x will occur in Päijät-Häme? How concerned do you feel about a possible terrorist act related to x?) for C, B, R, N, and E separately.
Similarly, a summary scale for perceived competence was formed based on four questions (Within my work role, I feel competent to respond to the effects of a task related to x, Within my work role, I feel competent to respond to the effects of a terrorist task related to x, Within my work role, I have the resources to protect myself against the effects related to x, Within my work role, I have the training to manage the fear and behavior of members of the public who may have been exposed to x) respectively.
The three summary scores were analyzed separately for each of the CBRNEs. For the perceived threats, higher scores (range 3-15 each) reflected a higher perceived threat. Likewise, for perceived competence, higher scores (range 4-20) reflected a higher perceived competence. Mean, standard deviation (SD), minimum (min), and maximum (max) were calculated for each summary score in both groups. Differences between work experience groups were examined using the Mann-Whitney U test, and a p-value less than 0.05 was considered significant. In addition, further analyses were performed for the perceived competence questions, and mean, SD, min, max, and significance were reported. The analyses were performed using SPSS version 28.
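A minimal sketch of this analysis pipeline (all arrays and the group split are hypothetical stand-ins, not the study data) using SciPy's Mann-Whitney U implementation:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical Likert answers (1-5) for one threat scale: rows = respondents,
# columns = the three accident-threat items; "Don't know" answers assumed excluded.
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(83, 3))
score = items.sum(axis=1)                 # summary score, range 3-15

experienced = rng.random(83) < 0.52       # hypothetical >=10-years flag

for label, grp in (("<10 y", score[~experienced]), (">=10 y", score[experienced])):
    print(f"{label}: mean={grp.mean():.2f} SD={grp.std(ddof=1):.2f} "
          f"min={grp.min()} max={grp.max()}")

# Two-sided Mann-Whitney U test between the work-experience groups
u_stat, p_value = mannwhitneyu(score[experienced], score[~experienced],
                               alternative="two-sided")
print(f"U={u_stat:.1f}, p={p_value:.3f}")  # p < 0.05 taken as significant
```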
Ethical considerations
According to Finnish law and the ethical guidelines provided by The Finnish National Board on Research Integrity TENK (2019), ethical pre-evaluation is not needed for survey studies with a non-sensitive topic, such as self-reported perceived competence, when conducted among working adults whose participation is based on informed consent.
Results
Of the participants, 65.1% were advanced level paramedics and 75.9% had a bachelor's degree (Table 1). 51.9% had ten or more years of work experience and 48.1% had less than ten years of work experience. 89.2% of the participants did not have separately obtained special CBRNE training, experience, or a related responsibility area. 28.9% had received basic CBRNE training within the last three years, 28.9% within the last five to ten years, and the remaining 42.2% had not received basic CBRNE training at all. However, 90.4% felt that they needed more training for CBRNE incidents.
More paramedics with ≥10 years of work experience had received CBRNE training (72.1%) than those with <10 years of work experience (42.5%) (Table 2). Among the less experienced, 17.5% of paramedics who received CBRNE training stated that the training gave them confidence, and 22.5% felt their confidence was unaffected by the training. Among the more experienced paramedics who had received CBRNE training, the training built confidence in 39.5% of the paramedics, while 20.9% did not feel that it had a positive effect on their confidence.
In the case of a perceived threat of an unintentional accident, those involving chemicals or explosives emerged as the biggest perceived threat in both work experience groups (Table 3). Paramedics with ≥10 years of work experience perceived the threat of chemical incidents and explosive incidents as greater than the less experienced paramedics, but the difference was not statistically significant. Similarly, in both groups, the biggest perceived threats of a terroristic incident were those involving chemicals or explosives. However, the only statistically significant difference was found in the perceived threat of a nuclear terrorist incident (p = 0.029): the less experienced paramedics evaluated a nuclear terrorist incident as a greater threat (mean 6.85 (SD 2.424)) than the more experienced paramedics (mean 5.79 (SD 2.559)).
Notably, more experienced paramedics perceived themselves as having higher competence to respond to incidents involving explosives (p < 0.001) (Table 3). A difference between the groups was also found in the chemical incident section (p = 0.002). Paramedics with more work experience also perceived higher competence in biological, radiological, and nuclear incidents, but the results were not statistically significant.
The further analyses of the perceived competence questions showed that those with more experience felt more competent (p = 0.028) to respond to chemical-related incidents (Appendix 2). A similar result was found regarding explosives (p < 0.001). The results regarding perceived competence to respond to biological, radiological, and nuclear incidents were similar in both work experience groups. The lowest perceived competence was seen in nuclear incidents (mean 1.92 (SD 0.703) and mean 2.07 (SD 1.009), respectively).
When paramedics evaluated their own competence to respond to incidents related to terrorism, terrorist nuclear incidents received the lowest competence rating among the less experienced paramedics (mean 1.85 (SD 0.630)), but the corresponding value was clearly better among those who had more work experience (mean 2.97 (SD 1.121)) (Appendix 2). The difference was not statistically significant. The only statistically significant difference (p = 0.001) was found in explosives-related terrorist incidents, where more experienced paramedics perceived themselves as being more competent.
The paramedics with less experience felt less competent to protect themselves against chemicals than the more experienced paramedics (p < 0.001) (Appendix 2). A similar significant difference (p < 0.001) was also seen in protecting themselves against explosives. The lowest and most similar perceived competence was felt against nuclear-related effects in both groups (mean 1.92 (SD 0.664) and mean 2.28 (SD 1.182), respectively).
In the case of managing the fear and behavior of members of the public who may have been exposed to different CBRNE subjects, the perceived competence was similar in both work experience groups and lowest in terms of nuclear incidents (Appendix 2). Only in the case of explosives incidents did those who had more work experience feel more competent (p = 0.012).
Discussion
The aim of this study was to identify the perceived competence of paramedics to operate in different CBRNE incidents. The main results were: (1) paramedics reported low levels of training related to CBRNE incidents, and most of them felt that more training is needed; (2) paramedics felt that chemical and explosive-related incidents are more likely than other types; (3) paramedics with more work experience perceived their competence as higher only in chemical and explosive-related incidents; (4) overall, paramedics perceived their CBRNE competence as low. The competence to respond to CBRNE incidents is currently emphasized due to the growing threat of hostility, at least in Europe. Finland is participating in the European Union's rescEU project, which aims to create a strategic reserve against CBRN threats (European Commission, 2023). Thorough preparedness is crucial, as paramedics risk their own health and safety when a major incident or disaster occurs. In this study, a significant proportion of paramedics felt that they needed more training to operate in CBRNE incidents, supporting the findings of Novack et al. (2022). Training should be available for paramedic students and paramedics already working to maintain and strengthen their perceived competence (Smith et al., 2018). Previous studies show that paramedics need to receive sufficient training to be competent enough to identify and prevent threats (Melnikova et al., 2019) and manage these incidents (Rebmann et al., 2019; Farhat et al., 2022). Previous studies also note that paramedics cannot be expected to participate in a mission for which they do not have the necessary training, resources, and support (Smith et al., 2018). However, it should be noted that this study only identified that the perception of competence is quite low, which does not mean that competence itself would be insufficient in the case of a CBRNE incident.
The findings of this study showed that explosive and chemical incidents were considered the most likely threats compared to other threats. The best perceived competence was felt in these two areas. One possible explanation is that incidents related to biological threats are not as practiced and familiar in prehospital emergency care, and preparedness for radiation and nuclear accidents might be generally weaker (Rebmann et al., 2019). In line with this, previous studies have shown that paramedics lack confidence in their competence in radiation and nuclear incidents (Dallas et al., 2017; Rebmann et al., 2019; Blumenthal et al., 2014). The radiological and nuclear threats had the lowest perceived level of competence in this study.
Since 2020, the global impact of the COVID-19 pandemic has heightened awareness of personal protective equipment and underscored the significance of proper protection (Bourassa et al., 2022). Overall, the need to proficiently use personal protective equipment is a commonality across CBRNE incidents, and proper usage is pivotal in ensuring safety (Bourassa et al., 2022; Melnikova et al., 2019). Future research could emphasize the safe and effective use of such resources; for example, paramedics' ability to operate in protective gear should be examined regarding fine motor tasks when protective equipment hinders movement and obstructs visibility. The impact of individual characteristics on the functionality of protective gear, such as the effect of facial hair on the tightness of gas masks, would also be a good subject for more detailed studies.
In this study, paramedics with more work experience perceived themselves as having higher competence only in chemical and explosive-related incidents. However, as the competence was self-reported and not tested, it is important to consider the possibility of the Dunning-Kruger effect, meaning that incompetent individuals may overestimate their abilities (Mazor and Fleming, 2021). Still, it should be noted that those who had worked more than ten years had received more CBRNE training than those who had worked for less than ten years, which could contribute to a higher perception of competence. For example, a study conducted in Sweden has shown that more work experience was associated with higher competence (Jansson et al., 2020). The finding that the length of work experience was not so important in this study may also be explained by the organization-driven improvement of CBRNE training and the increase of CBRNE preparedness in the region during the last ten years.
EMS organizations and educational institutions, in addition to the paramedics themselves, have an obligation to ensure the sufficient competence of paramedics (Tavares et al., 2012). In order to evaluate competence, it would be beneficial to have a predetermined level of desired competence (Tavares and Boet, 2015). Paramedics' daily work necessitates extensive knowledge of patient care and the ability to make independent decisions. Beyond routine tasks, they must also handle major incidents (Nilsson et al., 2020). The diverse areas of expertise required make defining the desired competence challenging (Tavares and Boet, 2015; Wihlborg et al., 2014).
Thus, paramedics and their supervisors might be the most informed to define and develop these desired competencies (Wihlborg et al., 2014). In this context, it would also be beneficial to determine how well paramedics can theoretically identify different toxicological syndromes in various scenarios when initially only the patient's symptoms are known. Overall, these issues have not been studied sufficiently (Nilsson et al., 2020). According to Houser (2022), the willingness to respond is part of preparedness. Unwillingness to work during major incidents can affect the preservation of the EMS system (Rebmann et al., 2020; Barnett et al., 2010) and increase the burden on those paramedics who do report to work (Rutkow et al., 2014). For example, previous studies have revealed that some paramedics would not be willing to respond to biological incidents (Rebmann et al., 2020; Barnett et al., 2010). However, training can increase their willingness to respond (Le et al., 2018; Rebmann et al., 2020; Houser, 2022).
The actual training and suitable pedagogy would benefit from a closer examination, as there is currently an insufficient understanding of how, for example, CBRNE competence should be developed. In this study, a quarter of paramedics trained in CBRNE incidents felt the training did not give them the confidence to respond effectively. According to a recent systematic review, paramedics' training for managing major incidents employs both traditional methods and technology-based training. While neither approach has been proven superior, technology-based training has been effective in enhancing paramedics' competence (Baetzner et al., 2022). In addition, scenario-based training helps to identify gaps in preparedness (Rebmann et al., 2020).
Methodological considerations
The present study was designed with a practical research question and a well-thought-out target group. In order to better align with the research question, the questionnaire developed by Stevens et al. (2010) was adapted accordingly. The results of this study are consistent with previous research on the topic, which enhances the reliability of the study. However, it is important to acknowledge that this study used self-reported data, which may contain respondent-originated biases for various reasons. Moreover, the descriptive nature of the results should be considered when interpreting the findings. Additionally, the modified questionnaire used in the study was not validated. It may not have included all relevant questions, even though it was well-targeted from the perspective of the research question. In follow-up studies on the subject, it would be beneficial also to consider qualitative approaches when examining perceived competence.
The target population of this study consisted of paramedics working in a single Finnish wellbeing county. A response rate of 50% was achieved from a total population of 166 paramedics. The paramedics were reminded of the survey several times, and the two-week response time enabled reaching all the paramedics with operational duties during the study period. While 50% is a substantial proportion, several factors must be considered when interpreting the results. The potential for non-response bias exists, as it remains unknown whether non-respondents systematically differ from respondents. Due to the small number of participants, the participants were divided into two work experience groups, each with an equal number of participants, and a non-parametric statistical test was used. Moreover, the small size of the target population may limit the generalizability of the study results both within Finland and internationally. Nonetheless, the findings of this study provide valuable insights and encourage further research in this area and among other EMS systems.
Conclusions
Paramedics assessed their CBRNE competence as low. According to the results of this study, paramedics would benefit from additional training to strengthen their perceived competence to work in CBRNE incidents. In this study, chemical and explosive-related incidents were perceived as more likely to occur than others, and competence related to those two subjects was perceived as better than for the others. The connections between work experience and CBRNE competence need further research.
Enhancing CBRNE awareness is important. The current prevailing global circumstances emphasize the need to enhance the competence and preparedness of EMS. Paramedics' desired competence, actual competence and appropriate training to respond in CBRNE incidents require further research. | 2024-06-13T15:25:15.332Z | 2024-06-12T00:00:00.000 | {
"year": 2024,
"sha1": "81d2804a93d50e691f7dccddfd274231664bcbee",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/IJES-06-2023-0025/full/pdf?title=the-perceived-competence-of-paramedics-to-operate-in-different-cbrne-incidents",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e547c354e0010781eadfa03fbfd0f36bd5cf338d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
12932795 | pes2o/s2orc | v3-fos-license | Discrete Imaging Models for Three-Dimensional Optoacoustic Tomography Using Radially Symmetric Expansion Functions
Optoacoustic tomography (OAT), also known as photoacoustic tomography, is an emerging computed biomedical imaging modality that exploits optical contrast and ultrasonic detection principles. Iterative image reconstruction algorithms that are based on discrete imaging models are actively being developed for OAT due to their ability to improve image quality by incorporating accurate models of the imaging physics, instrument response, and measurement noise. In this work, we investigate the use of discrete imaging models based on Kaiser-Bessel window functions for iterative image reconstruction in OAT. A closed-form expression for the pressure produced by a Kaiser-Bessel function is calculated, which facilitates accurate computation of the system matrix. Computer-simulation and experimental studies are employed to demonstrate the potential advantages of Kaiser-Bessel function-based iterative image reconstruction in OAT.
I. INTRODUCTION
Optoacoustic tomography (OAT), also referred to as photoacoustic computed tomography, is an emerging hybrid imaging modality that combines the high spatial resolution and ability to image relatively deep structures of ultrasound imaging with the high optical contrast of optical imaging [1], [2]. OAT has great potential for use in a number of biomedical applications, including small animal imaging [3]- [6], breast imaging [7], [8], and molecular imaging [9]. In OAT, an object is illuminated with short laser pulses that result in the subsequent generation of internal acoustic wavefields via the thermoacoustic effect [1], [10]. The initial amplitudes of the induced acoustic wavefields are proportional to the spatially variant absorbed optical energy density within the object, which will be denoted by the object function A(r). The acoustic wavefields propagate out of the object and are detected by use of a collection of wide-band ultrasonic transducers that are located outside the object. From these acoustic data, an image reconstruction algorithm is employed to obtain an estimate of A(r).
As in other tomographic imaging modalities [11], [12], iterative image reconstruction algorithms can improve image quality in PACT [13]- [18]. Moreover, the development of advanced iterative image reconstruction algorithms can allow for the design of PACT systems that acquire smaller data sets, thus reducing the total data-acquistion time. In a previous study, it was demonstrated that iterative image reconstruction algorithms, in general, yield more accurate OAT images than those produced by a mathematically exact filtered backprojection algorithm [18].
Most OAT iterative reconstruction algorithms are based on discrete-to-discrete (D-D) imaging models [19]. D-D imaging models employ a discrete imaging operator, also known as a system matrix, to map a finite-dimensional approximation of A(r) to the measured data vector, which is inherently finite-dimensional in a digital imaging system. The finite-dimensional approximation of A(r) is often formed as a weighted sum of a finite number of expansion functions. The choice of expansion functions can be motivated by numerous practical and theoretical considerations that include a desire to minimize representation error, incoporation of a priori information regarding the object function, or ease of computation. Common choices of expansion functions in OAT include cubic and spherical voxels [14], [20]- [22], and linear interpolation functions [22]- [24].
It should be noted that none of these expansion functions are differentiable at their boundary, and therefore the pressure signal produced by each of them, when treated as optoacoustic May 12, 2014 DRAFT sources, will possess an infinite temporal bandwidth. As discussed later, this leads to numerical inaccuracies when computing the associated system matrices. In general, different choices for the expansion functions will result in system matrices that have distinct numerical properties [25] that will affect the performances of iterative image reconstruction algorithms. There remains an important need for the further development of accurate discrete imaging models for OAT and an investigation of their ability to mitigate different types of measurement errors found in real-world implementations.
In this work, we develop and investigate a D-D imaging model for OAT based on the use of radially symmetric expansion functions known as Kaiser-Bessel (KB) window functions, also widely known as 'blob' functions in the tomographic reconstruction literature [26]- [28].
Radially symmetric and smooth expansion functions such as these possess a convenient closed-form solution for the optoacoustic pressure signal produced by them, which facilitates accurate OAT system matrix construction. KB functions have been widely employed to establish discrete imaging models for other modalities such as X-ray computed tomography [27], [29] and optical tomography [28]. They have several desirable features that include having finite spatial support, being differentiable to arbitrary order at the boundaries, and being quasi-bandlimited. The statistical and numerical properties of images reconstructed by use of an iterative algorithm that employs the KB function-based system matrix are systematically compared to those corresponding to use of an interpolation-based system matrix. We also demonstrate the use of non-standard discretization schemes in which the KB functions are centered at the vertices of a body-centered cubic (BCC) grid rather than a standard 3D Cartesian grid, which reduces the number of expansion functions required to represent an estimate of A(r) by a factor of √2. It should be noted that the proposed D-D imaging model is general in the sense that the KB functions can be replaced by any other radially symmetric set of expansion functions that possess a closed-form solution for the optoacoustic pressure generated by them. See, for example, [28], for descriptions of alternative forms of radially symmetric expansion functions.
The remainder of the paper is organized as follows. A previously employed linear-interpolationbased OAT imaging model is reviewed in Section II and the new KB function-based imaging model is described in Section III. A description of the numerical and experimental studies are provided in Section IV. Section V contains the results of these studies and the paper concludes with a discussion in Section VI.
A. General formulation of discrete-to-discrete (D-D) imaging models
An OAT imaging system employing point-like ultrasonic transducers can be accurately described by a continuous-to-discrete (C-D) imaging model as [18], [19], [21]

[u]_{qK+k} = h_e(t) *_t (β/(4πC_p)) ∫_V dr A(r) (d/dt) [δ(t − |r_q^s − r|/c_0) / |r_q^s − r|] |_{t=kΔ_t},   (1)

where h_e(t) is the electrical impulse response (EIR) of the transducer [21], [30], *_t denotes the temporal convolution operation, δ(t) is the one-dimensional Dirac delta function, and β, c_0 and C_p denote the thermal coefficient of volume expansion, (constant) speed-of-sound, and the specific heat capacity of the medium at constant pressure, respectively. The vector u ∈ R^{QK} represents a lexicographically ordered collection of the sampled values of the electrical signals that are produced by the ultrasonic transducers employed, where Q and K denote the number of transducers employed in the imaging system and the number of temporal samples recorded by each transducer, respectively. The notation [u]_{qK+k} will be utilized to denote the (qK+k)-th element of u. Here, the integer-valued indices q and k indicate the transducer position r_q^s ∈ R^3 and the temporal sample acquired with a sampling interval Δ_t. The object function A(r) is assumed to be bounded and contained within the volume V. The imaging model can be readily generalized to account for the spatial impulse response of a transducer [21].
In practical applications of iterative image reconstruction, it is convenient to approximate the C-D imaging model in Eqn. (1), which maps the object function to a finite-dimensional vector, by a fully discrete model. This requires introduction of a finite-dimensional representation of A(r).
A linear N-dimensional approximation of A(r), denoted by A_a(r) [19], [25], can be expressed as

A_a(r) = Σ_{n=0}^{N−1} [α]_n ψ_n(r),   (2)

where α ∈ R^N is a coefficient vector whose n-th component [α]_n weights the expansion function ψ_n(r). Substitution of A_a(r) for A(r) in Eqn. (1) yields the D-D imaging model

u_a = H α,   (3)

where the QK × N matrix H is the D-D imaging operator, also known as the system matrix, whose elements are defined as

[H]_{qK+k,n} = h_e(t) *_t (β/(4πC_p)) ∫_V dr ψ_n(r) (d/dt) [δ(t − |r_q^s − r|/c_0) / |r_q^s − r|] |_{t=kΔ_t}.   (4)

The image reconstruction task is to estimate α by approximately inverting Eqn. (3), after which an estimate of A(r) is obtained by use of Eqn. (2). In principle, the expansion functions ψ_n(r) can be arbitrary. However, for a given N, they should be chosen so that A(r) ≈ A_a(r) and therefore u ≈ u_a.
B. Linear interpolation-based D-D imaging model
Linear interpolation-based D-D imaging models have been employed for OAT iterative image reconstruction [23], [24]. These imaging models typically employ spatially-localized expansion functions that are centered at the vertices of a Cartesian grid. As an example, when a trilinear interpolation method is employed, the expansion function can be expressed as [19], [31]

ψ_n^{int}(r) = Π_{w∈{x,y,z}} max(0, 1 − |w − w_n|/Δ_s),   (5)

where r_n ≡ (x_n, y_n, z_n) specifies the location of the n-th vertex of a Cartesian grid with spacing Δ_s. For this particular choice of expansion function, the expansion coefficient vector will be denoted as α_int and can be defined as [α_int]_n = A(r)|_{r=r_n}, for n = 0, 1, ···, N−1. The system matrix whose elements are defined by use of Eqn. (5) in Eqn. (4) will be denoted as H_int, and the associated D-D imaging model is given by u_a = H_int α_int. (6) Note that the numerical implementation of H_int requires an additional discretization of the volume integral in Eqn. (4). Details regarding the numerical implementation of H_int can be found in Ref. [22].
A. Kaiser-Bessel expansion functions in OAT
The KB function of order m is defined as [26], [28]

b(x) = (√(1 − (x/a)²))^m I_m(γ√(1 − (x/a)²)) / I_m(γ) for 0 ≤ x ≤ a, and b(x) = 0 otherwise,   (7)

where x ∈ R^+, I_m(x) is the modified Bessel function of the first kind of order m, and a ∈ R^+ and γ ∈ R^+ determine the support radius and the smoothness of b(x), respectively. Following previously employed terminology [29], we refer to the expansion function ψ_n^{KB}(r) ≡ b(x)|_{x=|r−r_n|} as a KB function centered at location r_n.
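As an illustration of Eqn. (7), the following sketch (not from the paper; it assumes the commonly used Lewitt normalization with b(0) = 1, and the parameter values are those adopted later in the numerical studies) evaluates the radial profile numerically:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_m

def kb_profile(x, m=2, a=0.28, gamma=10.4):
    """Radial profile b(x) of a KB (blob) function: zero outside the support
    radius a and smooth (m-times differentiable) at the boundary x = a."""
    x = np.asarray(x, dtype=float)
    inside = x <= a
    s = np.zeros_like(x)
    s[inside] = np.sqrt(1.0 - (x[inside] / a) ** 2)
    return np.where(inside, s**m * iv(m, gamma * s) / iv(m, gamma), 0.0)

r = np.linspace(0.0, 0.35, 8)   # radii in mm
print(kb_profile(r))            # decays smoothly from b(0) = 1 to zero at x = a
```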
The system matrix whose elements are defined by use of ψ_n^{KB}(r) in Eqn. (4) will be denoted by H_KB. Unlike with H_int, the elements of H_KB can be computed analytically, as described below.
This is highly desirable, as it circumvents the need to numerically approximate Eqn. (4) [32].
In contrast, the linear interpolation-based models usually require numerical approximations to compute the system matrix [22], which can introduce errors that ultimately degrade the accuracy of the reconstructed image. A similar phenomenon has been analyzed in differential X-ray phase-contrast tomography image reconstruction [27]. Several linear interpolation methods have been proposed to analytically calculate the imaging operator acting on each voxel, but numerical instabilities are present corresponding to certain tomographic view angles [33].
It will prove convenient to formulate the KB function-based imaging model in the temporal frequency domain [18]. Consider that the discrete Fourier transform (DFT) of the sampled temporal data recorded by each transducer is computed. Let ũ denote a temporally Fourier transformed data vector formed by lexicographically ordering these data. The imaging model in the temporal-frequency domain will be expressed as ũ_a = H̃_KB α_KB. (8) The elements of the modified system matrix H̃_KB are given in [18] (Eqn. (9)), where Δ_f denotes the temporal frequency sampling interval, h̃_e(f) is the one-dimensional Fourier transform of h_e(t), and h̃_q^s(r_n, f) is the spatial impulse response (SIR) in the temporal frequency domain [18], [21], [34]. When a point-like transducer assumption is justified, h̃_q^s(r_n, f) degenerates to the free-space Green function evaluated between r_q^s and r_n, where r_q^s and r_n are the locations of the q-th transducer and the center of the n-th KB function, respectively. The quantity p̃_0^{KB}(f) is the temporal Fourier transform of the acoustic pressure generated by a KB function located at the origin; its closed-form expression involves j_m(x), the m-th order spherical Bessel function of the first kind, and is derived in the Appendix [32], [35]. Equation (9) is valid for any radially symmetric expansion function. Note that a previously proposed OAT imaging model that employed uniform spherical voxels as the expansion functions [18], [21] is contained as a special case of the KB function-based imaging model corresponding to m = 0, γ = 0, and a = Δ_s/2.
Selection of parameters for the KB function in Eqn. (7) has been comprehensively described in the literature [36]. The parameter m, for example, determines the differentiability of b(x) at its boundary. The choice m = 2 has been commonly employed in X-ray computed tomography [26] and optical tomography [28] because it provides the smallest representation error when estimating a piecewise constant function [28, see, for example, Fig. 4]. The choice of optimal parameters is, however, application-dependent [37], [38].
B. Kaiser-Bessel functions on non-standard grids
The expansion functions {ψ_n(r)} are typically positioned on a 3D Cartesian grid when constructing D-D imaging models for OAT, including the linear-interpolation-based imaging models. The Cartesian grid, also referred to as a simple cubic (SC) grid, is a natural choice if the support volume of ψ_n(r) is cubic. When the support volume is a sphere, however, body-centered cubic (BCC) and face-centered cubic (FCC) grids, as sketched in Fig. 2, can have advantages and have been proposed for use in X-ray computed tomography [39]. Let Δ_s, Δ_s^b, and Δ_s^f denote the grid spacings of the SC, BCC and FCC grids, respectively. When the grid spacings satisfy Δ_s^b = √2 Δ_s and Δ_s^f = √3 Δ_s, the three types of grids will be referred to as "equivalent" [39] because the highest spatial frequency of the object function is equivalently limited by 1/(2Δ_s) if unaliased sampling is desired. Accordingly, the BCC and the FCC grids can potentially reduce the number of required expansion functions by factors of √2 and 3√3/4 respectively [39]. Unlike with an FCC grid, the implementation of an imaging model corresponding to a BCC grid is very similar to the implementation of one corresponding to a SC grid because the BCC grid can be interpreted as two interleaved SC grids, as the sketch below illustrates. In the numerical studies described below, we investigate the use of the KB function-based imaging model for 3D OAT assuming a BCC grid with spacing Δ_s^b = √2 Δ_s.
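The interleaving can be made concrete with a short sketch (illustrative, not from the paper) that assembles a BCC grid from two offset SC grids; with the cube of side 8.96 mm used below, the BCC grid contains roughly 1/√2 as many vertices as the equivalent SC grid:

```python
import numpy as np

def bcc_grid(extent, spacing):
    """Vertices of a BCC grid covering [-extent/2, extent/2]^3, built as two
    interleaved simple cubic grids offset by half the BCC spacing per axis."""
    half = extent / 2.0
    axis = np.arange(-half, half + 1e-9, spacing)
    sc1 = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
    sc2 = sc1 + spacing / 2.0                               # second interleaved SC grid
    sc2 = sc2[np.all(np.abs(sc2) <= half + 1e-9, axis=1)]   # keep points in the cube
    return np.vstack([sc1, sc2])

ds = 0.14                # SC spacing used in the paper, in mm
db = np.sqrt(2.0) * ds   # "equivalent" BCC spacing (~0.2 mm in the paper)
pts = bcc_grid(extent=8.96, spacing=db)
print(len(pts))
```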
IV. DESCRIPTIONS OF NUMERICAL STUDIES
Numerical studies were conducted to compare the numerical properties of the system matrices H_int and H̃_KB and to analyze differences in the numerical and statistical properties of images reconstructed by use of them.
A. Simulation of noise-free data and imaging geometry
In this work, the numerical phantoms representing A(r) consisted of a collection of spheres.
Each sphere possessed a different center location, radius and absorbed optical energy density, denoted by r_i, R_i and A_i for the i-th sphere. The noise-free data for the phantoms were simulated in two steps. First, samples of the acoustic pressure generated by each spherical structure were analytically calculated by use of the known closed-form solution for a uniform sphere [2], [19]. Second, the resulting p_i(r_q^s, t)|_{t=kΔ_t} were subsequently convolved with h_e(t) and summed to generate the noise-free data, where h_e(t) was experimentally measured [30] (3 MHz bandwidth with 3 MHz center frequency). We ignored the SIR in order to facilitate the implementation of the linear-interpolation-based imaging model. Also, the point-like transducer assumption is likely to be sufficiently accurate for our experimental system when the object is located near the center [5], [21], [40]. From the time domain data u, the temporal-frequency domain data ũ were computed by use of the fast Fourier transform (FFT) algorithm.
The simulated imaging system is described as follows. We employed a spherical measurement surface of radius R_s = 65 mm centered at the origin of a global coordinate system. The object was contained in a cube of size 8.96 mm in each dimension that was centered at the origin.
B. Image reconstruction algorithms
Image reconstruction was conducted by first solving the penalized least-squares problems

α̂_int = argmin_α ‖u − H_int α‖² + β_int R(α)   (14)

and

α̂_KB = argmin_α ‖ũ − H̃_KB α‖² + β_KB R(α)   (15)

to estimate the expansion coefficients for the linear-interpolation- and KB function-based imaging models, respectively. Here, R(α) is the regularization penalty and β_int and β_KB are regularization parameters. A conventional quadratic penalty was employed to promote local smoothness, i.e.,

R(α) = Σ_n Σ_{n'∈N(n)} ([α]_n − [α]_{n'})²,   (16)

where N(n) is an index set of the neighboring voxels of the n-th voxel. We implemented a linear conjugate gradient algorithm to solve Eqns. (14) and (15) iteratively based on the associated normal equations [41]. The iteration was terminated when the residual of the cost function was reduced to a prechosen level in its Euclidean norm [41]. From the resulting coefficient vectors α̂_int and α̂_KB, images were estimated by use of Eqn. (2), rewritten as

Â_int(r) = Σ_{n=0}^{N_int−1} [α̂_int]_n ψ_n^{int}(r)   (17)

and

Â_KB(r) = Σ_{n=0}^{N_KB−1} [α̂_KB]_n ψ_n^{KB}(r)   (18)

for the linear-interpolation- and KB function-based imaging models respectively, where N_int and N_KB are the total numbers of the corresponding expansion functions.
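A schematic of this reconstruction step is given below (a small dense random matrix stands in for the system matrix and a first-difference operator stands in for the neighborhood penalty; in practice the operators are applied matrix-free):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
M, N = 200, 80
H = rng.standard_normal((M, N))            # toy system matrix
alpha_true = rng.standard_normal(N)
u = H @ alpha_true                         # noiseless toy data

D = np.eye(N) - np.eye(N, k=1)             # simple first-difference smoothness matrix
beta = 1e-2                                # regularization parameter

# Normal equations of the penalized objective: (H^T H + beta D^T D) alpha = H^T u
A = LinearOperator((N, N), dtype=np.float64,
                   matvec=lambda x: H.T @ (H @ x) + beta * (D.T @ (D @ x)))
alpha_hat, info = cg(A, H.T @ u)
print(info, np.linalg.norm(alpha_hat - alpha_true))  # small residual bias from beta > 0
```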
C. Singular value analysis of D-D imaging models
A singular value analysis was conducted to gain insights into the intrinsic stability of image reconstruction by use of the system matrices H_int and H̃_KB. We reduced the number of rows of both H_int and H̃_KB to circumvent the great demand of memory in the calculation of singular values.
More specifically, if the reduced-dimensional system matrices H_int (or H̃_KB) act on α_int (or α_KB), the resulting vector will estimate the voltage signals (or the temporal-frequency spectra) received by a single transducer located at (R_s, 0, 0) mm. We expect the singular value spectra of the reduced-dimensional system matrices to be similar to those of the original system matrices because the imaging system is approximately rotationally symmetric. The relation between the singular values of the reduced system matrices and those of the original system matrices can be found in [42]. The QR and QZ algorithms [43] embedded in MATLAB were employed to calculate the eigenvalues of the reduced-dimensional H_int H_int^† and H̃_KB H̃_KB^†, respectively. By taking the square root of the eigenvalues, singular value spectra of the reduced-dimensional H_int and H̃_KB were obtained.
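The procedure can be mimicked in a few lines (a random complex matrix stands in for the reduced-dimensional system matrix; NumPy's Hermitian eigensolver replaces MATLAB's QR/QZ routines, and a direct SVD serves as a cross-check):

```python
import numpy as np

rng = np.random.default_rng(2)
H_red = rng.standard_normal((128, 64)) + 1j * rng.standard_normal((128, 64))

# Square roots of the eigenvalues of H H^dagger give the singular values of H
gram = H_red @ H_red.conj().T
eigvals = np.linalg.eigvalsh(gram)                    # ascending, Hermitian solver
sv_from_eig = np.sqrt(np.clip(eigvals[::-1], 0, None))[:64]

sv_direct = np.linalg.svd(H_red, compute_uv=False)    # descending singular values
print(np.allclose(sv_from_eig, sv_direct))
```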
D. Simulation of random object functions
In order to investigate the effect of representation errors on the reconstructed images, we employed a random process to generate an ensemble of object functions [37]. The random object function will be denoted by A(r); here and throughout this manuscript, the underline indicates that the corresponding quantity is random. Each realization of A(r) consisted of 9 smooth spheres (indexed by i for i = 0, 1, ···, 8) with random center locations, radii, and absorbed optical energy densities, denoted by (x_i, y_i, z_i), R_i, and A_i, respectively. A slice through the plane z = 0 of a single realization of A(r) is provided in Fig. 3-(b). The statistics of A(r) are listed in Table I, where the standard deviations (STD) are given in units of either mm or percentage of the corresponding mean values. The spheres indexed from 1 to 5 were blurred by use of Gaussian kernels G_i(r) whose full widths at half maximum (FWHM) are also given in Table I. The blurring of the spheres was implemented by modifying Eqn. (13) through a temporal convolution with a Gaussian kernel g_i(t), whose FWHM is that of G_i(r) scaled by a factor of 1/c_0 [44]. We generated 64 realizations of A(r), each of which will be denoted by A^(j)(r) for j = 0, 1, ···, 63.
E. Simulation of measurement noise
In order to analyze the noise properties of H_int and H̃_KB, an additive Gaussian white noise model was employed to simulate electronic noise: u_meas = u + n, where n is the Gaussian white noise process, u is the noiseless voltage data corresponding to A(r), and u_meas is the measured noisy data. The STD of n was set to be 10% of the maximum of u. We simulated 128 realizations of u_meas. The corresponding temporal-frequency domain data ũ were computed by use of the FFT algorithm.
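A minimal sketch of this noise model (a synthetic waveform stands in for the noiseless data; the 10% scaling and the 128 realizations follow the text):

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.sin(np.linspace(0.0, 20.0, 2048))    # stand-in for the noiseless voltage data

sigma = 0.10 * np.abs(u).max()              # STD set to 10% of the signal maximum
noisy = [u + rng.normal(0.0, sigma, u.shape) for _ in range(128)]
u_tilde = [np.fft.fft(r) for r in noisy]    # temporal-frequency data via the FFT
```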
F. Assessment of reconstructed images
The accuracy of a reconstructed image, in principle, can be assessed by an error functional that compares the reconstructed image with the true object function sampled on a common grid. Let Â_d(r_m) denote the estimate of the object function found by sampling Â(r) onto a fine SC grid, where r_m specifies the location of the m-th vertex on the fine SC grid with spacing Δ_d. The grid spacing Δ_d is required to be smaller than Δ_s to justify the approximation in Eqn. (22). The fine SC grid will be referred to as a "display grid" and is used throughout the manuscript to compare reconstructions using the linear interpolation- and KB function-based image reconstruction algorithms. Furthermore, in order to investigate the dependence of reconstruction accuracy on various object structural features, regional mean-square errors (MSE) are introduced as

MSE_r = (1/M_r) Σ_{m∈S_r} (Â_d(r_m) − A_d(r_m))²,   (23)

where S_r is the index set of display grid vertices contained within a certain ROI, and M_r is the dimension of S_r. We defined 5 ROIs centered in the plane z = 0, which are marked in Fig. 6-(a). Besides the 3D ROIs, we also calculated the regional MSE across the 2D plane z = 0 as an overall accuracy measure.
For both the 3D ROIs and the 2D plane z = 0, the MSE was calculated for each realization of the object function. Due to object variability, the MSE for each realization of the object function is random and will be denoted by MSE. From the ensemble of object functions, the ensemble mean-square error (EMSE) was calculated as EMSE = (1/J) Σ_j MSE^(j), where MSE^(j) denotes the j-th realization of MSE.
The accuracy of reconstructed noisy images was quantified by their first- and second-order statistics. From J noisy realizations, the mean and variance of the reconstructed images were estimated by

Mean_Â(r_m) = (1/J) Σ_{j=0}^{J−1} Â_d^{(j)}(r_m)   (25)

and

Var_Â(r_m) = (1/(J−1)) Σ_{j=0}^{J−1} (Â_d^{(j)}(r_m) − Mean_Â(r_m))²,   (26)

respectively. Because the statistics of the reconstructed images depend on the regularization parameter [18], [45], [46], we swept the regularization parameter over a wide range to generate a curve of Var_Â against Mean_Â for each system matrix. From these curves, we investigated the performance of H_int and H̃_KB in balancing the bias and variance of the reconstructed images.
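The regional MSE, the EMSE, and the empirical image statistics can be sketched as follows (all arrays are hypothetical stand-ins for volumes sampled on the display grid):

```python
import numpy as np

def regional_mse(recon, truth, roi):
    """Mean-square error over a region of interest on the display grid."""
    return np.mean((recon[roi] - truth[roi]) ** 2)

rng = np.random.default_rng(4)
truth = rng.random((32, 32, 32))                                      # stand-in object
recons = truth[None] + 0.05 * rng.standard_normal((16, 32, 32, 32))   # J = 16 realizations
roi = np.zeros((32, 32, 32), bool)
roi[12:20, 12:20, 12:20] = True

mse_j = np.array([regional_mse(r, truth, roi) for r in recons])
emse = mse_j.mean()                     # ensemble mean-square error

mean_img = recons.mean(axis=0)          # empirical mean image
var_img = recons.var(axis=0, ddof=1)    # empirical variance image
print(emse, var_img[roi].mean())
```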
G. Experimental validation
We investigated the performance of H_int and H̃_KB by use of experimentally measured data.
The experimental data were collected by use of a custom-built optoacoustic imaging module [5], [18] employing a pulsed laser with an output wavelength of 780 nm. Data acquisition was performed with analog amplifiers set to 75 dB with a sampling rate of 20 MHz. More details regarding the system can be found in [5], [47].
A phantom was built that contained transparent 10% gelatin shaped in a cylinder. For each measurement, temporal samples were acquired for two consecutive illuminations and then averaged together, improving the signal-to-noise ratio. Accordingly, the dimension of the measured data set was 1024 × 150 × 63. Note that the data acquired by the first element of the 64-element transducer array were employed for time alignment instead of for image reconstruction. We repeated the data acquisition procedure described above 64 times, creating an ensemble of noisy measurements.
Images were reconstructed by first solving the penalized least-squares objectives defined by Eqns. (14), (15) and (16). Image quality was assessed based on a parameter-estimation task. The parameter to be estimated was the average value within an ROI of size 1 × 1 mm² in a single plane of the object, denoted by θ_true. We set θ_true to be the value estimated from a reference image as the mean of Â_d^ref(r_m) over the ROI, where Â_d^ref(r_m) denotes the reference image, evaluated at r_m, that was iteratively reconstructed by use of H_int with Δ_s = 0.14 mm and β_int = 1 × 10^−2 from the data averaged over the 64 noisy measurements. Estimates of θ_true from noisy measurements, denoted by θ̂, were calculated in the same manner, where Â_d(r_m) is the random image, evaluated at r_m. We employed the bias and variance of θ̂ as the figures of merit to evaluate the quality of images reconstructed by use of H_int and H̃_KB.
The bias of θ̂ was estimated by Bias_θ̂ = (1/J) Σ_j θ̂^(j) − θ_true, where J is the number of realizations of θ̂. Note that this choice of reference in Eqn. (27) actually favors the performance of H_int. Also, the variance of θ̂ was estimated by Var_θ̂ = (1/(J−1)) Σ_j (θ̂^(j) − (1/J) Σ_{j'} θ̂^(j'))². We swept the regularization parameter over a wide range to investigate the performance of H_int and H̃_KB in balancing the tradeoff between Bias_θ̂ and Var_θ̂ [18], [45], [46].
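Schematically, the estimator statistics reduce to a few lines (hypothetical numbers; the value 0.64 is borrowed from ROI-A quoted later in the text):

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = 0.64                                      # ROI mean from the reference image
theta_j = theta_true + 0.03 * rng.standard_normal(64)  # estimates from 64 noisy images

bias = theta_j.mean() - theta_true
var = theta_j.var(ddof=1)
print(bias, var)  # sweeping the regularization parameter traces a bias-variance curve
```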
A. Singular value analysis of the D-D imaging models
Singular value spectra of H_int and H̃_KB were calculated with equivalent SC and BCC grids. The singular value spectrum of H̃_KB is, in general, spread over a wider range compared to that of H_int, as shown in Fig. 5. Note that only the first ∼160 singular values of H_int fall above our truncation threshold of 10^−4; since both H_int H_int^† and H̃_KB H̃_KB^† are ill-conditioned, iterative algorithms are expected to be stopped before the final convergence is achieved. If measurement noise can be approximated as white, the singular value spectra also suggest that iterative image reconstruction based on H̃_KB is more robust to measurement noise because the singular values of H̃_KB are in general larger than those of H_int [25]. Note that when using the reduced grid spacing, the singular values have larger magnitudes than with the coarser spacing in the range of the 70-th to 130-th singular value, suggesting more components of the object function can be stably reconstructed. This gain, however, is traded against a cubic increase in computational time.
B. Images reconstructed from an ensemble of noiseless data
Images were reconstructed from noiseless simulated measurement data by use of a least-squares (LS) objective, i.e., β_int = β_KB = 0 in Eqns. (14) and (15). We set Δ_s = 0.14 mm, Δ_s^b = 0.2 mm, a = 0.28 mm, γ = 10.4, and m = 2. Accordingly, α̂_int and α̂_KB were of dimensions 64³ and 45³ × 2, respectively. In addition, a display grid of spacing Δ_d = 0.0175 mm was selected for image quality assessment as described in Sect. IV-F. Images reconstructed by use of H̃_KB, shown in Fig. 6-(c), are more accurate than those reconstructed by use of H_int, as shown in Fig. 6-(b). The MSE of the 2D slice in the plane z = 0 of the image reconstructed by use of H̃_KB (MSE = 3.50 × 10^−3) is only 13.3% of that by use of H_int (MSE = 26.32 × 10^−3). Here, iterations were terminated when the Euclidean norm of the residual of the cost functions was reduced to 0.01% [41]. We enforced this stringent stopping criterion in order to approach the Moore-Penrose pseudoinverse solutions [25]. Note that the LS objectives, i.e., ‖u − H_int α_int‖² and ‖ũ − H̃_KB α_KB‖², were monotonically decreasing during the iteration. Even though the images were reconstructed from noiseless data, one observes that artifacts are present (see Fig. 6). These artifacts are due to the errors in the system matrices as well as the responses of the system matrices to the errors. These results suggest that H̃_KB more accurately approximates the true underlying C-D imaging model, i.e., Eqn. (1), than does H_int.
The residual of the cost functions decays faster in general when H̃_KB is employed, as shown in Fig. 7. It took 2675 and 1782 iterations to achieve the stopping criterion by use of H_int and H̃_KB, respectively, suggesting a faster convergence rate by use of H̃_KB, as predicted by the SVD analysis in Sect. V-A.
As shown in Fig. 8-(a), the minimal MSE appeared at the 37-th and the 68-th iteration by use of H_int and H̃_KB respectively, far before the final convergence. Images corresponding to the minimal MSEs are displayed in Fig. 9. The image corresponding to H_int (Fig. 9-(a)) contains more ripple artifacts than does the image corresponding to H̃_KB (Fig. 9-(b)). This observation is especially evident in the slowly-varying region as shown in Fig. 9-(c).
It is also interesting to note that H̃_KB results in a larger overshoot in the region containing a small sharp structure (Fig. 9-(d)), which is consistent with observations made in previous studies of KB function-based image reconstruction [49]. However, the circular shape of the small structure is better preserved by use of H̃_KB (see the reference in box-0 in Fig. 6-(a)). In summary, H̃_KB resulted in more accurate reconstruction than did H_int.
It is notable that the minimal MSE defined in the plane z = 0 implies little about the accuracy of the other regional MSEs, as shown in Fig. 8. As expected, all regional MSEs increase after initially declining because the errors in approximating the true C-D model (i.e., Eqn. (1)) with the system matrices are amplified during iterations and present as artifacts in the reconstructed images. However, the regional MSEs corresponding to H_int increase more rapidly than do those corresponding to H̃_KB, suggesting H̃_KB is numerically more stable. Also, the minimal values of the various regional MSEs corresponding to H̃_KB are in general smaller than those corresponding to H_int. This observation is especially evident in the uniform and slowly-varying ROIs (see Fig. 8). The EMSEs given in Table II further confirm that images reconstructed by use of H̃_KB are more accurate than those by use of H_int.
C. Images reconstructed from an ensemble of noisy data
An ensemble of noisy images was reconstructed by solving Eqns. (14) and (15); the resulting curves of variance against mean are summarized in Fig. 12-(a). Figure 12-(a) suggests that, for any choice of β_int, there exists a β_KB such that images reconstructed by use of H̃_KB are more accurate as well as less varying among realizations. Since they were calculated between the phantom and mean images, the MSEs describe image bias averaged over ROIs. Within the various ROIs, images reconstructed by use of H̃_KB are always less biased than those reconstructed by use of H_int when both are at the same variance level, except for the region containing the small sharp structures (see Fig. 12). In addition, when β_int and β_KB took large values, the difference between the performance of H̃_KB and H_int is less obvious. These observations are also consistent with those observed in other imaging modalities [45], [50], [51].
D. Experimental Results
The optimal performance of H_int and H̃_KB is displayed in Fig. 14. The images reconstructed by use of H̃_KB appear smoother (Fig. 14-(b)). This is expected, since the choice of {ψ_n^{KB}(r)} constrains Â_KB(r) to be differentiable in space. Further, profiles of the reconstructed images (see Fig. 15) indicate a notable quantitative error in the images reconstructed by use of H_int. This observation is consistent with our computer-simulation results that suggest that slowly varying regions can be more accurately reconstructed by use of H̃_KB (see Fig. 8-(c) and Table II). In addition, one observes spatially dependent variances among images reconstructed from the 64 measurements, as shown in Fig. 14-(c) and -(d). Specifically, the variance maps contain structural patterns, suggesting object-dependent noise statistics [52]. At the optimal performance, the average variance corresponding to H̃_KB (∼6.14 × 10^−4) is about 78% of that corresponding to H_int (∼7.83 × 10^−4). This observation is predicted by the singular value analysis in Sect. V-A.
We estimated the optical energy densities within two ROIs marked in Fig. 13-(a), where the true energy densities estimated from the reference image were 0.64 and 0.45 in arbitrary units, respectively, for ROI-A and ROI-B. Both ROIs are of dimension 1 × 1 mm². We swept the values of β_int and β_KB within the ranges [0, 0.15] and [0, 1.0], respectively. Within these ranges, the plots corresponding to H̃_KB are always below the plots corresponding to H_int, as shown in Fig. 16.
The results suggest that optical energy densities can be more accurately and stably estimated by use of H̃_KB than by use of H_int.
VI. DISCUSSION
The KB function-based imaging model investigated in this work generalizes the uniform-spherical-voxel-based imaging model we proposed earlier [18], [21]. This generalization maintains the convenience in modeling the finite aperture size effect of ultrasonic transducers (see Eqn. (9)) while reducing computation by a factor of √2 with the use of an equivalent BCC grid.
Computer-simulation and experimental results have demonstrated that the KB function-based imaging model is, in general, not only quantitatively more accurate but also numerically more stable than a conventional linear-interpolation-based imaging model. By use of iterative image reconstruction algorithms based on KB functions, absorbed optical energy densities can be more accurately estimated with smaller variances.
The KB function-based imaging model possesses at least two limitations. First, if the object contains fine sharp structures possessing a dimension that is smaller than the KB function radius, the KB function-based imaging model may lead to an overshoot in the reconstructed images, as shown in Fig. 9-(d). Second, the computational complexity of KB function-based iterative image reconstruction is, in general, higher than that of interpolation-based iterative image reconstruction. For the application presented in this study, the computational time required to complete one iteration was approximately 50% longer for the KB function-based model, which nevertheless yielded smaller regional MSEs (see Fig. 8-(b) and -(c) and Table II). Moreover, the KB function-based imaging model appears to be more robust to random noise, as predicted by the singular value spectra (see Fig. 5). These advantages are due to the fact that the KB function-based representation constrains reconstructed images to be spatially differentiable, as well as the fact that the KB function-based system matrix is analytically calculated with no numerical approximations of the time derivative term [27], [33].
Therefore, we believe that the superior performance of the KB function-based imaging model will persist even if different optimization algorithms or different linear-interpolation-based imaging models [13], [20], [23], [24] are employed.
To our knowledge, this is the first study in which iterative image reconstruction algorithms were evaluated by use of a parameter-estimation task in OAT [53]. Task-based image quality assessment is seldom employed in OAT studies [53]. An important reason is that the necessary statistical studies are demanding [25]. Our task-based image quality assessment study is far from comprehensive, but it is interesting to observe the dependence of the noise pattern on the image reconstruction algorithms (see Fig. 14-(c) and -(d)). How the noise pattern affects tasks such as tumor detection remains an interesting and open topic for future studies [25], [53].
APPENDIX A
PRESSURE GENERATED BY RADIALLY SYMMETRIC EXPANSION FUNCTIONS
In a homogeneous medium in three dimensions, the pressure p̃(r, f) induced via the photoacoustic effect satisfies a Helmholtz-type equation driven by the object function, where f is the frequency and k = 2πf/c_0. Suppose the source is described by a spherically symmetric function, namely, A(r) = a(r), where r ∈ R^+. The pressure can then be reduced to a one-dimensional expression by evaluating the integral in spherical coordinates over the azimuthal and polar coordinates. Introducing an auxiliary function ā(r), the expression in Eq. (33) can be simplified to a form involving Ā(k), the one-dimensional Fourier transform of ā(r), by use of the derivative identity for Fourier transforms. Equation (35) can be used to calculate the pressure induced by any integrable and radially symmetric expansion function. In the specific case that a(r) represents a KB function, the Fourier transform of the KB function of order m can be found in p spatial dimensions via Sonine's second integral formula [54, see Sec. 12.13], as described in Lewitt [26], yielding Ā_m^(p)(k), the spatial Fourier transform of a KB function of order m in p dimensions. Substituting the form for the Fourier transform of the KB function into Eq. (35) for p = 1 dimensions, the pressure generated by a KB function centered at the origin is given by the temporal frequency domain expression of Eq. (37), where, again, k = 2πf/c_0 and j_m is the spherical Bessel function of order m.
Note that taking the inverse Fourier transform of the expression for the pressure in Eq. (35) gives an exact expression for the time-domain pressure generated by a spherically symmetric source, which agrees with previous results [35]. | 2014-05-09T00:30:27.000Z | 2013-10-03T00:00:00.000 | {
"year": 2013,
"sha1": "328e127eb9eaf2a9f7af39437a722bdcd4995a0d",
"oa_license": null,
"oa_url": "https://europepmc.org/articles/pmc4374808?pdf=render",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "328e127eb9eaf2a9f7af39437a722bdcd4995a0d",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Medicine"
]
} |
209909213 | pes2o/s2orc | v3-fos-license | Measurement of $^{58}$Ni($p$, $p$)$^{58}$Ni elastic scattering at low momentum transfer by using the HIRFL-CSR heavy-ion storage ring
The very first in-ring reaction experiment at the HIRFL-CSR heavy-ion storage ring, namely proton elastic scattering on stable $^{58}$Ni nuclei, is presented. The circulating $^{58}$Ni$^{19+}$ ions with an energy of 95 MeV/u were interacting repeatedly with an internal hydrogen gas target in the CSRe experimental ring. Low-energy proton recoils from the elastic collisions were measured with an ultra-high vacuum compatible silicon-strip detector. Deduced differential cross sections were normalized by measuring K-shell X-rays from $^{58}$Ni$^{19+}$ projectiles due to the $^{58}$Ni$^{19+}$-H$_2$ ionization collisions. The experimental cross sections agree well in the measured region with theoretical predictions obtained by using global phenomenological optical model potentials. Our results enable new research opportunities for optical model potential studies on exotic nuclides by using the in-ring reaction setup at the HIRFL-CSR facility.
I. INTRODUCTION
The investigation of direct reactions induced by light ions, e.g., protons and alpha particles, provides important information on nuclear structure and astrophysics [1]. Elastic scattering of light ions, since the Geiger-Marsden experiment in 1908 [2], has been used widely, not only to study fundamental properties of nuclei, such as nuclear matter distributions [3], but also to extract optical model potentials (OMPs) [4], which are essential for the description of direct reactions on exotic nuclei within the distorted-wave Born approximation (DWBA) [1].
Proton elastic scattering on stable nuclides has been investigated both theoretically and experimentally [5]. Phenomenological and microscopic OMPs were developed to understand and predict reaction cross sections. With scattering data from stable nuclides, the global phenomenological OMP parameters for proton elastic scattering on heavy ions in the energy region up to 200 MeV have been extracted theoretically [6,7]. However, since it became clear that the radial shape and strength of the OMPs depend strongly on the proton-neutron asymmetry [8,9], the OMPs extracted from stable nuclei cannot directly be used to predict the scattering cross sections on unstable nuclei.
Direct reactions induced by light ions were mostly performed in direct kinematics, where the light-ion beams interact with a target made of the nuclei of interest [10]. * tuxiaolin@impcas.ac.cn Obviously, such kind of experiments are limited to stable or very long lived nuclides. To explore direct reactions on exotic nuclei, experimental methods based on inverse kinematics, such as active gas targets [3] and 6 Li scattering [11], have been developed. In recent years, a new experimental method, namely, studies with stored beams in storage rings interacting with internal gas-jet targets has attracted much interest. The EXL (EXotic nuclei studied in Light-ion induced reactions at storage rings) project [9,12] has been developed to study nuclear matter distributions [13][14][15], giant resonances [16] and astrophysical reaction rates [17,18], at the GSI and later FAIR facilities [9]. It has been demonstrated that direct reactions induced by light ions, especially for scattering processes at very low momentum transfer, which play an important role in studies of isoscalar giant monopole resonances and nuclear matter distributions, can be successfully approached with the novel in-ring reaction experimental methods [13,16,19,20]. For proton elastic scattering at low momentum transfer, the cross sections are rather high. Most important is that the differential cross sections in the low momentum transfer region are very sensitive to deduce nuclear matter distributions, thus the size and radial shape of nuclei can be determined precisely [21][22][23].
As one of the existing facilities, the Cooler Storage Ring at the Heavy Ion Research Facility in Lanzhou (HIRFL-CSR) [24] provides an opportunity for performing in-ring reaction experiments by using internal gas-jet targets to study, e.g., proton elastic scattering on nuclei at low momentum transfer. Proton scattering on stable 58 Ni nuclei has been measured widely at different energies, see Ref. [6] and references cited therein, however, data on cross sections at low momentum transfer are still scarce. Furthermore, for the 100 MeV proton elastic scattering on 58 Ni nuclei in Ref. [25], the authors found that the differential cross sections would not be described by the optical model with a good χ 2 /N , and a search for physical reasons was proposed [25]. The inconsistency was also observed for the global phenomenological OMPs in Ref. [6]. In this work, a first in-ring reaction experiment on proton elastic scattering of 95 MeV/u 58 Ni 19+ ions at low momentum transfer was conducted at the HIRFL-CSR heavy-ion storage ring. The measured differential cross sections can be used to check the inconsistency reported in Refs. [6,25]. The present successful experiment enables the capability for OMPs studies of exotic nuclei at the HIRFL-CSR heavy-ion storage ring.
II. EXPERIMENT
The experiment was carried out in inverse kinematics at the experimental storage ring CSRe of HIRFL-CSR [24]. The 58Ni19+ beam was accelerated to an energy of 95 MeV/u by the heavy-ion synchrotron CSRm, then extracted and transported to the CSRe via the second Radioactive Ion Beam Line in Lanzhou (RIBLL2). In general, radioactive ion beams can be produced by projectile fragmentation reactions at the RIBLL2, as discussed in, e.g., Refs. [26,27]. The 95 MeV/u 58Ni19+ ions were stored in the CSRe with an intensity of about 10^7 particles in each measurement cycle. Electron cooling was applied to reduce the emittance and velocity spread of the beam [28]. The cooled 58Ni19+ beam interacted with an internal hydrogen gas-jet target oriented perpendicular to the beam direction. The internal gas-jet target has previously been used for atomic physics studies at the CSRe [29]. The typical diameter of the gas-jet target is about 4 mm at the interaction point, and a target density of about 10^12 atoms/cm^2 has been achieved [30].
In order to measure very low-energy recoil protons, a single-sided silicon strip detector (SSSD) with a thickness of 300 µm was installed in the ultra-high vacuum (UHV) chamber connected to the CSRe. The SSSD is fully compatible with the UHV environment; for more details on the SSSD see Ref. [31]. Figure 1 shows a schematic drawing of the experimental setup. The SSSD, with an active area of 48×48 mm^2 and mounted at a distance of 503 mm from the collision point, covers the laboratory angular range from 85° to 90° for proton recoils. The signals from the SSSD were fed to a Mesytec MPR-16 preamplifier, and an MSCF-16 shaping amplifier was used to process the preamplifier signals. Afterwards, all signals were recorded by the data acquisition system (DAQ), which was triggered by a logic OR of the SSSD signals. A typical energy spectrum of the measured proton recoils is shown in Fig. 2; it was calibrated using 207Bi, 239Pu and 241Am radioactive sources. According to the kinematic calculation, only elastically scattered protons can be detected in the covered laboratory angular range, since protons from inelastic scattering fall outside this acceptance.
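The quoted detector geometry can be cross-checked with a short calculation. The sketch below is illustrative only; it treats the detector as flat and the target as a point source, and all numbers are taken from the text above.

```python
import math

side_mm = 48.0    # SSSD active area: 48 x 48 mm^2
dist_mm = 503.0   # distance from the collision point (mm)

# Angular span subtended by the detector in the scattering plane:
span_deg = math.degrees(math.atan(side_mm / dist_mm))
print(f"angular span ~ {span_deg:.1f} deg")    # ~5.5 deg, matching the 85-90 deg coverage

# Solid angle of the active area (small-angle approximation):
omega_msr = 1e3 * side_mm**2 / dist_mm**2
print(f"solid angle  ~ {omega_msr:.1f} msr")   # ~9.1 msr
```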
Knowledge of the reaction luminosity is essential for determining absolute cross sections. The luminosity is not easy to determine for in-ring reaction measurements, not only because the beam intensity and gas-target density change in time, but also because of the uncertainty in the overlap between beam and gas target. To determine the reaction luminosity accurately, the K-shell X-rays from inner-shell ionization of 58Ni19+ ions, produced in the collisions with the H2 target, were measured simultaneously with a Si(Li) detector. As shown in Fig. 1, the Si(Li) detector was placed at 35°, at a distance of 488 mm from the collision point. The detector was separated from the UHV environment of the CSRe by a 100 µm beryllium window and collimated by a hole of 4×8 mm^2. The Si(Li) detector was calibrated with 55Fe, 109Cd, 133Ba, and 241Am radioactive sources. A typical K-shell X-ray energy spectrum obtained in the experiment is shown in Fig. 3. The cross sections for the X-ray emission have been extensively studied both theoretically and experimentally in atomic physics, and can be calculated with high precision [32]. Combined with the detection efficiency of the Si(Li) detector, the absolute luminosity for in-ring reaction measurements can be obtained. A similar method has been used to determine the luminosity for in-ring reaction experiments on bare nuclei in Refs. [17,18].
III. DATA ANALYSIS AND RESULTS
According to the two-body kinematics in inverse kinematics [33], the SSSD with a thickness of 300 µm is thick enough to stop the elastically scattered protons completely. The maximum measured energy of the proton recoils is about 2.8 MeV, see Fig. 2. The proton scattering angle in the laboratory (LAB) frame (θ_p^LAB) can be determined from the measured proton kinetic energy (K_p^LAB) by using the relation [33]

cos θ_p^LAB = (γ_CM m_p / p_CM) √[K_p^LAB / (K_p^LAB + 2 m_p)],  (1)

in which m_p, p_CM, and γ_CM are the rest mass of the proton, the momentum in the center-of-mass (CM) frame, and the Lorentz factor of the CM frame relative to the LAB frame, respectively. In this experiment, p_CM and γ_CM were 0.426 GeV/c and 1.098, respectively. It is convenient to make use of the Mandelstam variable −t to extract the differential cross sections for proton scattering [33]:

dσ/dt = ΔN_t / (L Δt),  (2)

−t = 2 m_p K_p^LAB = 2 p_CM² (1 − cos θ_CM),  (3)
where −t is the square of the four-momentum transfer, expressed here in terms of the proton kinetic energy (K_p^LAB) after the collision with the heavy ion, L is the integrated luminosity, θ_CM is the scattering angle of the proton in the CM frame, and ΔN_t is the number of protons in the bin of size Δt. According to Eqs. (2) and (3), for elastic scattering, θ_CM and −t can be determined by measuring the kinetic energy of the proton. Since the SSSD has an energy resolution better than 1% [31], this can be done with high accuracy, and the cross sections can thus be deduced as a function of −t from the acquired SSSD data. Based on the position (angle) relation between the Si strips and the hydrogen gas target, the elastic scattering energy peak from the gas-jet target can be identified against the background events in each strip, especially for measured proton peaks with energies > 370 keV, see the inset in Fig. 2. The background consists mainly of scattering events off diffused hydrogen gas. In order to reduce background effects, only protons in the elastic scattering peaks with K_p^LAB > 370 keV were used in the present work. Background events under the tails of the elastic scattering peaks may still be included, but their contribution is only around 1%.
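As an illustration of this event-wise reconstruction, the following sketch converts a measured proton kinetic energy into the laboratory angle, the CM angle, and −t, assuming the two-body relations of Eqs. (1)-(3) and using the p_CM and γ_CM values quoted for this experiment.

```python
import math

M_P = 938.272      # proton rest mass (MeV)
P_CM = 426.0       # proton momentum in the CM frame (MeV/c), from the text
GAMMA_CM = 1.098   # Lorentz factor of the CM frame relative to the LAB frame

def reconstruct(k_lab):
    """Return (theta_lab, theta_cm, -t) from the measured proton kinetic
    energy k_lab in MeV, for elastic scattering only (Eqs. (1)-(3))."""
    # Eq. (1): laboratory recoil angle from the kinetic energy.
    cos_lab = (GAMMA_CM * M_P / P_CM) * math.sqrt(k_lab / (k_lab + 2.0 * M_P))
    # Eq. (3): -t = 2 m_p K = 2 p_CM^2 (1 - cos theta_CM).
    minus_t = 2.0 * M_P * k_lab                     # (MeV/c)^2
    cos_cm = 1.0 - minus_t / (2.0 * P_CM**2)
    return (math.degrees(math.acos(cos_lab)),       # valid within the allowed energy range
            math.degrees(math.acos(cos_cm)),
            minus_t)

# The maximum measured recoil energy of ~2.8 MeV maps to ~85 deg in the LAB frame:
print(reconstruct(2.8))   # ~(84.6 deg, 9.8 deg, 5.3e3 (MeV/c)^2)
```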
The differential cross sections can also be expressed as a function of the CM angle (θ_CM) by using [33]

dσ/dΩ_CM = (p_CM² / π) dσ/dt.  (4)

The differential cross sections for proton elastic scattering on heavy nuclei have been investigated for over 100 years, and OMPs have been widely used to describe them at low and intermediate energies. The OMP parameters for proton elastic scattering on stable nuclei in the energy region up to 200 MeV have been extensively studied [6,7]. In the following we employ the OMP parameters from A. J. Koning and J. P. Delaroche (KD03) [6] and from X. Li and C. Cai (LC08) [7]. The differential cross sections were calculated with the coupled-reaction-channels program FRESCO [34]. On average, agreement between the global OMP predictions and previous experimental results can be achieved within 10% [6]. To obtain absolute differential cross sections of elastic scattering in this analysis, the luminosity was deduced from the measured K-shell X-rays as

L = 4π N_X γ² (1 − β cos θ_lab)² / (σ_K ω_K ε Ω),  (5)
where N_X is the number of detected K-shell X-rays, and Ω and θ_lab are the solid angle and the observation angle of the Si(Li) detector, respectively. A detection efficiency (ε) of 100% is achieved for X-rays with an energy of 10 keV with the Si(Li) detector [35]. The solid angle of the effective area of the Si(Li) detector is 0.134(2) msr, obtained from a Geant4 simulation. γ = 1/√(1 − β²) is the relativistic Lorentz factor of the projectile. ω_K is the K-shell X-ray fluorescence yield, which depends on the charge state of the ion [36]. However, compared to the neutral atom, the K-shell fluorescence yield increases only by a few percent for ions with the electronic configuration (1s)^1(2s)^2(2p)^5 [37]. The K-shell fluorescence yield for neutral Ni is 0.4 [38], and this value has been used in the calculations reported here. A K-shell ionization cross section (σ_K) of 1050 barn for Ni19+ ions was adopted to deduce the absolute luminosity in the present work; it was determined with the Relativistic Ionization CODE (RICODE). The RICODE is based on the relativistic Born approximation [39] and is a further development of the LOSS and LOSS-R codes [40]. It has been widely applied to predict single-electron-loss cross sections for collisions of heavy many-electron ions with neutral atoms in the relativistic energy region. In this work, a luminosity of 328(6) mb^-1 was deduced, where the error is the statistical uncertainty of the measured X-rays. Compared to the highly accurate global OMP predictions, an inconsistency with the experimental results had been observed for the 100 MeV proton elastic scattering on 58Ni nuclei in Refs. [6,25]. The absolute differential cross sections obtained in the present work as a function of the scattering angle (θ_CM) are shown in Fig. 4, compared to the global OMP results. Good agreement is achieved, which confirms the reliability of the KD03 calculations [6] in the measured angular region and clarifies the inconsistency of the cross sections reported in Refs. [6,25].
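A minimal numerical sketch of this normalization is given below. It assumes isotropic K-shell X-ray emission in the projectile frame, so that the laboratory solid angle carries the Doppler factor γ²(1 − β cos θ_lab)² of Eq. (5). The X-ray count n_x is a hypothetical placeholder; the remaining inputs are the values quoted above.

```python
import math

sigma_k = 1050.0   # K-shell ionization cross section for Ni19+ (barn)
omega_k = 0.4      # K-shell fluorescence yield for neutral Ni
eff     = 1.0      # Si(Li) detection efficiency at ~10 keV
omega   = 0.134e-3                 # Si(Li) solid angle (sr)
theta   = math.radians(35.0)       # Si(Li) observation angle

gamma = 1.0 + 95.0 / 931.494       # projectile Lorentz factor at 95 MeV/u
beta  = math.sqrt(1.0 - 1.0 / gamma**2)

def luminosity_mb(n_x):
    """Integrated luminosity in mb^-1 from n_x detected K-shell X-rays, Eq. (5)."""
    doppler = (gamma * (1.0 - beta * math.cos(theta)))**2   # solid-angle boost
    lum_per_b = 4.0 * math.pi * n_x * doppler / (sigma_k * omega_k * eff * omega)
    return lum_per_b / 1000.0                               # convert b^-1 to mb^-1

print(round(luminosity_mb(2.8e3)))  # ~3.3e2 mb^-1 for a hypothetical count of 2800
```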
The real part of the OMP is related to the nuclear matter distribution [41,42]. In the present work, a simple method suggested by Greenlees et al. [42] has been used to estimate the root-mean-square (rms) matter radius of the 58Ni nucleus,

⟨r_m²⟩ = (3/5) R_v² + (7/5) π² a_v² − ⟨r_2b²⟩,  (6)
where ⟨r_m²⟩ is the mean square radius of the nuclear matter distribution and ⟨r_2b²⟩ is the mean square radius corresponding to the spin- and isospin-independent part of the two-body potential; an updated value of 4.27 fm², obtained by assuming a Gaussian two-body force [43], is adopted in the present analysis. All OMP parameters from KD03 [6] are fixed; only the nuclear radius (R_v = r_v A^(1/3)) and the diffuseness (a_v) of the real volume potential remain as adjustable parameters in the fit of the experimental differential cross sections with the SFRESCO code [34]. The values of r_v and a_v are determined to be 1.161(15) fm and 0.667(50) fm, respectively. The resulting rms matter radius of 3.74(13) fm, obtained via Eq. (6), is consistent with the literature results [13,44,45], see Fig. 5. Since exotic nuclei can technically be used in the same way as described here, our work illustrates a new possibility for performing such studies also on rare systems.

Figure 5: Comparison with the results of Ref. [44] and Lombard et al. [45]; the error bar of the present work reflects only the statistical uncertainty.
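The quoted matter radius can be reproduced from the fitted geometry, assuming the standard mean square radius of a Woods-Saxon form factor, ⟨r²⟩ = (3/5)R_v² + (7/5)π²a_v²:

```python
import math

A, r_v, a_v = 58, 1.161, 0.667   # fitted Woods-Saxon geometry (fm units)
msr_2b = 4.27                    # <r^2> of the two-body force (fm^2)

R_v = r_v * A ** (1.0 / 3.0)
msr_pot = 0.6 * R_v**2 + 1.4 * math.pi**2 * a_v**2   # <r^2> of the real potential
print(f"rms matter radius: {math.sqrt(msr_pot - msr_2b):.2f} fm")   # 3.74 fm
```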
IV. SUMMARY
The first nuclear reaction experiment with 95 MeV/u 58Ni19+ ions impinging on a hydrogen gas-jet target in a storage ring was successfully performed at the HIRFL-CSR heavy-ion storage ring, employing a recently developed in-ring experimental method. The low-energy protons from the 58Ni(p,p)58Ni reaction were measured to determine the differential cross sections. The reaction luminosity was obtained from the K-shell X-rays emitted after ionization of the 58Ni19+ projectiles by the H2 target, and thus the absolute differential cross sections for proton scattering were obtained. Our experimental results are in good agreement with the KD03 predictions [6], which demonstrates the reliability of the KD03 calculations in the measured angular region and clarifies the inconsistency of the cross sections reported in the literature [6,25]. The first successful in-ring reaction experiment demonstrates the applicability of the HIRFL-CSR facility for internal-target nuclear reaction studies at the CSRe, and shows great potential for extracting reliable OMPs of unstable nuclei. A new storage ring complex, the High Intensity heavy-ion Accelerator Facility (HIAF) [46], will be constructed in China. The first in-ring reaction experiment at the CSRe is an important step towards the completion of a large-angular-coverage detection setup intended for a dedicated storage ring of the future HIAF. | 2019-11-22T00:51:07.703Z | 2019-11-14T00:00:00.000 | {
"year": 2019,
"sha1": "c0726b21670683895fdcd5daaf6ed6138afd69d9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.13971",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "62b816ee6bf3cd6495cae100e1f63bf9fede40f1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
36009363 | pes2o/s2orc | v3-fos-license | Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm
Introduction
It is important to evaluate the primary impact of traffic noise for several different road surfaces in order to predict the interaction noise between road surfaces and vehicle tires. A number of noise prediction models [1][2][3][4][5][6] have been developed for environmental estimation of traffic noise levels in terms of vehicle and pavement types. For example, the vehicle types in the ASJ model [1,2] are large vehicles, medium vehicles, light trucks, and cars, and the pavement types are dense-graded asphalt (DGA) and permeable asphalt (PA). Thus, the ASJ traffic noise prediction model is restricted to four vehicle types, varying vehicle speeds, and two pavement types.
In order to expand the applicability of the ASJ model, the present paper introduces a parameter estimation procedure based on a harmony search (HS) algorithm as follows: (a) the traffic noise for a targeted road surface is measured, (b) the parameters of the noise prediction model are estimated via the HS algorithm, and (c) the resulting coefficients are evaluated using another set of measurements (consisting of vehicle speeds and vehicle types) for a different traffic volume on the targeted road surface. To validate the proposed traffic noise prediction technique, traffic noise measurement sets from three different surface types were used in this study: stone mastic asphalt surfaces (SMA), 30 mm transversely tined Portland cement concrete surfaces (30 mm trans.), and 18 mm longitudinally tined Portland cement concrete surfaces (18 mm long.), as shown in Figure 1. The measurement site is a 7.7 km long, two-lane section on the southbound side of the Jungbu inland highway in South Korea. This measurement section includes both asphalt and Portland cement concrete pavements.
This paper is organized as follows. Section 2 describes the ASJ model and vehicle characterization. Section 3 explains the application of the HS algorithm to estimate the parameters of ASJ-based noise prediction models, and Section 4 presents the conclusions of this research.
ASJ Model
The Acoustical Society of Japan (ASJ) published the ASJ model [1,2] for calculating road traffic noise. The procedure involves calculating the noise level generated by traffic as well as the attenuation during noise propagation. The octave-band power spectrum for nominal mid-band frequencies (63 Hz to 8 kHz) can be generated according to the ISO 9613-2 standard [7]. The ASJ model classifies vehicles into the four types listed in Table 1.
The A-weighted overall sound power levels (L_WA) of the noise emitted by interactions between vehicles and pavement are listed in Table 2 for a dense-graded asphalt (DGA) surface. In terms of nominal mid-band frequencies, the individual A-weighted sound power level for each octave band (L_WA,i) is calculated as

L_WA,i = L_WA + ΔL_i + ΔL_adj,  (1)

where L_WA is the overall sound power level (dB) and ΔL_i is the relative level (dB) at the i-th nominal mid-band frequency f_i, given in the ASJ model as a function of f_i. Finally, ΔL_adj is a correction factor, also defined in the ASJ model, that adjusts the octave-band levels so that their energy sum reproduces the overall level.
Equivalent Sound Power Level for a Road

The A-weighted sound power level emitted by a specific type of vehicle moving along a road over a specified time period, L_WAT, can be calculated via the following equation:

L_WAT = L_WA + 10 log10[(Δl · N) / (1000 · V)],  (4)

where L_WA is the basic sound power level (dB) emitted by the vehicle (listed in Table 2), Δl is the length of the road segment (in meters), V is the mean speed for the vehicle type (in km/h), and N is the hourly traffic flow for the vehicle type (in vehicles/h). The equivalent sound power level, L_eq(WA), emitted by all vehicles moving along the road can then be calculated as

L_eq(WA) = 10 log10[ Σ_j 10^(0.1 L_WAT,j) ],  (5)

where L_WAT,j is the A-weighted sound power level for each vehicle type. Here, L_WAT,1, L_WAT,2, L_WAT,3, and L_WAT,4 are the A-weighted sound power levels for a large vehicle, a medium vehicle, a light truck, and a car, respectively, as listed in Table 2. Furthermore, the attenuation (e.g., geometrical divergence, atmospheric absorption, ground effect, and screening structures) was calculated based on ISO 9613-2 [7].
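A minimal sketch of Eqs. (4) and (5) is given below. The traffic inputs and basic power levels are hypothetical placeholders, not values from Tables 2-4; the sketch only illustrates how the per-type levels are energy-summed into L_eq(WA).

```python
import math

def l_wat(l_wa, dl, v, n):
    """Eq. (4): time-averaged A-weighted power level of one vehicle type."""
    return l_wa + 10.0 * math.log10(dl * n / (1000.0 * v))

def l_eq(levels):
    """Eq. (5): energy sum over the per-type levels L_WAT,j."""
    return 10.0 * math.log10(sum(10.0 ** (0.1 * l) for l in levels))

# Hypothetical example: four vehicle types on a 50 m road segment.
basic  = [105.0, 102.0, 98.0, 95.0]    # placeholder L_WA values (dB)
speeds = [80.0, 80.0, 90.0, 100.0]     # mean speeds (km/h)
flows  = [120.0, 200.0, 350.0, 900.0]  # hourly flows (vehicles/h)

per_type = [l_wat(b, 50.0, v, n) for b, v, n in zip(basic, speeds, flows)]
print(f"L_eq(WA) = {l_eq(per_type):.1f} dB")
```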
Application of the Harmony Search Algorithm
3.1. Harmony Search Algorithm. This section describes the procedure for estimating the parameters of noise prediction models using a harmony search (HS) algorithm, a heuristic algorithm based on an analogy with natural phenomena [8][9][10][11][12]. The detailed procedure for applying a harmony search consists of the following four steps.
(1) The algorithm parameters are specified. These include the harmony memory size (HMS), initialized as the number of solution vectors in the harmony memory (HM), the harmony memory consideration rate (HMCR, between 0 and 1), the pitch adjustment rate (PAR, between 0 and 1), and the maximum number of improvisations (or stopping criterion), which terminates the HS program. The optimization problem is specified as

Minimize f(X) subject to x_i ∈ X_i, i = 1, 2, ..., N,

where f(X) is the objective function, X is the set of decision variables x_i, N is the number of decision variables, and X_i is the possible range of values for the i-th decision variable, that is, L_i ≤ x_i ≤ U_i, where L_i and U_i are the respective lower and upper bounds for the i-th decision variable. To estimate the parameters of a noise prediction model, the following minimization function can be used:

Minimize f(X) = |L_eq(WA)^pred − L_eq(WA)^meas| subject to x_i ∈ X_i, i = 1, 2, ..., N,  (8)

where L_eq(WA)^pred is the predicted equivalent sound power level of (5), which can be calculated as

L_eq(WA)^pred = 10 log10[10^(0.1 L_WAT,1) + 10^(0.1 L_WAT,2) + 10^(0.1 L_WAT,3) + 10^(0.1 L_WAT,4)],

where L_WAT,1, L_WAT,2, L_WAT,3, and L_WAT,4 are the A-weighted sound power levels for a large vehicle, a medium vehicle, a light truck, and a car, respectively. L_eq(WA)^meas is the measured equivalent sound power level obtained from previous research [13,14]. In this optimization problem, the A-weighted sound power levels are defined as given in Table 3; thus, the coefficients a_1, a_2, a_3, and a_4 must be determined via the HS algorithm. The slope is fixed in the ASJ models for both surface types (DGA and PA); therefore, the slope given in Table 3 is fixed at 30.
(2) The HM matrix is initially filled with randomly generated solution vectors up to the HMS, together with the corresponding objective function values:

HM = [X^1, X^2, ..., X^HMS]^T, with f(X^1), f(X^2), ..., f(X^HMS).
(3) A new harmony vector, X' = (x'_1, x'_2, ..., x'_N), is improvised using three mechanisms: (a) random selection, (b) memory consideration, and (c) pitch adjustment. In random selection, the value of each decision variable x'_i is chosen randomly within the range X_i with a probability of (1 − HMCR). HMCR (between 0 and 1) is the rate at which a value is chosen from the historical values stored in the HM. Each decision variable selected by memory consideration is then examined for pitch adjustment. This operation uses the PAR parameter (the rate of pitch adjustment according to neighboring values) together with an arbitrary distance bandwidth bw:

x'_i ← x'_i + rand() × bw, with a probability of HMCR × PAR × 0.5,
x'_i ← x'_i − rand() × bw, with a probability of HMCR × PAR × 0.5,
x'_i ← x'_i, with a probability of HMCR × (1 − PAR),

where rand() is a uniform random number between 0 and 1.
If the newly generated vector is better than the worst harmony in the HM, based on evaluation of the objective function, the newly generated vector is included in the HM and the existing worst harmony is excluded from it. (4) If the stopping criterion (or maximum number of improvisations) is satisfied, the computation is terminated. Otherwise, Steps 3 and 4 are repeated.
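A compact sketch of Steps (1)-(4) applied to the present estimation problem is shown below. All numeric values (measured level, speeds, flows, bounds, HS parameters) are hypothetical placeholders; in the actual study the objective of (8) is evaluated against the measured data of Tables 3 and 4, typically over several measurement conditions rather than the single condition used here.

```python
import math
import random

MEASURED_LEQ = 95.0                    # placeholder measured L_eq(WA) (dB)
SPEEDS = [80.0, 80.0, 90.0, 100.0]     # placeholder mean speeds (km/h)
FLOWS = [120.0, 200.0, 350.0, 900.0]   # placeholder hourly flows (veh/h)
DL = 50.0                              # road segment length (m)
LOWER, UPPER = 25.0, 55.0              # bounds for each coefficient a_j
HMS, HMCR, PAR, BW, MAX_IMP = 20, 0.9, 0.3, 0.5, 5000

def objective(a):
    """Absolute error |predicted - measured| equivalent level, cf. Eq. (8)."""
    # Per-type basic levels with the slope fixed at 30: L_WA,j = a_j + 30 log10 V_j.
    lwat = [a[j] + 30.0 * math.log10(SPEEDS[j])
            + 10.0 * math.log10(DL * FLOWS[j] / (1000.0 * SPEEDS[j]))  # Eq. (4)
            for j in range(4)]
    pred = 10.0 * math.log10(sum(10.0 ** (0.1 * l) for l in lwat))     # Eq. (5)
    return abs(pred - MEASURED_LEQ)

# Step (2): fill the harmony memory with random solution vectors.
hm = [[random.uniform(LOWER, UPPER) for _ in range(4)] for _ in range(HMS)]
fit = [objective(x) for x in hm]

for _ in range(MAX_IMP):
    # Step (3): improvise a new harmony.
    new = []
    for i in range(4):
        if random.random() < HMCR:            # memory consideration
            x = random.choice(hm)[i]
            if random.random() < PAR:         # pitch adjustment
                x += random.uniform(-BW, BW)
        else:                                 # random selection
            x = random.uniform(LOWER, UPPER)
        new.append(min(max(x, LOWER), UPPER))
    worst = max(range(HMS), key=fit.__getitem__)
    f_new = objective(new)
    if f_new < fit[worst]:                    # replace the worst harmony
        hm[worst], fit[worst] = new, f_new

# Step (4): report the best harmony once the stopping criterion is met.
best = min(range(HMS), key=fit.__getitem__)
print([round(a, 2) for a in hm[best]], round(fit[best], 4))
```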
3.2. Application to Parameter Estimation for Noise Prediction Models

In order to estimate the parameters of noise prediction models based on the objective function of (8), noise measurements for three different road surfaces were obtained from previous research conducted on a test track [13,14]. The vehicle velocities and hourly traffic flows are listed in Table 4.
To apply the HS algorithm to parameter estimation for the noise prediction models, the four coefficients were determined for each road surface type (stone mastic asphalt (SMA) surface, 30 mm transversely tined Portland cement concrete surface (30 mm trans.), and 18 mm longitudinally tined Portland cement concrete surface (18 mm long.)), as shown in Figure 2 and Table 5, based on the training data from Table 4. In this way, the coefficients of the noise prediction models, which depend on the road surface type, can be updated via the HS algorithm. As a result, noise prediction models can be provided for various surface types by using the HS algorithm to update the ASJ model equations.
Another set of testing data (given in Table 6) was used to evaluate whether the updated noise prediction models (with the four coefficients estimated by the HS algorithm) provided results consistent with measured noise levels. The predictions and measurements are compared in Table 7; good agreement was noted for all three surface types. In contrast, the original ASJ model predicted the same noise level regardless of surface type and agreed poorly with the measured noise levels.
Finally, the A-weighted sound power levels for the individual octave bands were estimated for the SMA, 30 mm trans., and 18 mm long. surface types, utilizing the parameters estimated via the HS algorithm and Eq. (1). For example, Figure 3 shows the A-weighted sound power levels of the octave bands for the three different surface types in the case of a large vehicle traveling at 80 km/h.
Conclusions
In this study, it was shown that the optimization problem related to updating the noise prediction models for several surface types could be solved using the HS algorithm.
The process involves (a) obtaining measurements for different road surfaces, (b) estimating the coefficients of the noise prediction models using this measurement set as training data, and (c) evaluating the estimated coefficients using another measurement set as testing data. When this procedure was utilized, the evaluation of the parameters of the traffic noise prediction model yielded good agreement between predicted and measured sound power levels.
Figure 2: Minimizing the error function of (8) through the HS algorithm.
Table 1: Vehicle types used in this study.
Table 2: A-weighted sound power levels (L_WA) in dB for a dense-graded asphalt (DGA) surface. *V is a velocity.
Table 3: A-weighted sound power levels (L_WA) in dB for the different surface types. *V is a velocity.
Table 4: Vehicle velocities and hourly traffic flows.
Table 5: Determination of the model coefficients via the HS algorithm.
Table 6: Vehicle velocities and hourly traffic flows used as evaluation data.
Table 7: Comparison of predicted and measured traffic noise levels. | 2018-04-03T02:59:44.515Z | 2013-11-06T00:00:00.000 | {
"year": 2013,
"sha1": "f716a60348e762a54db0297a6d72108419ccd0e9",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jam/2013/953641.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f716a60348e762a54db0297a6d72108419ccd0e9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
44077379 | pes2o/s2orc | v3-fos-license | The Malingering Intussusception
While intussusception is rarely seen in adults, it is typically obstructive in nature when it does occur. Even less commonly seen is transient intussusception, which occurs without a radiological lead point or any evidence of bowel obstruction. Such findings consist of a “target pattern” seen on computed tomography (CT) but are incidental and do not require any surgical intervention. We report the case of a 31-year-old female who presented to the emergency department with abdominal pain, vomiting, and diarrhea. CT imaging revealed transient intussusception, a benign finding that is not well established in emergency medicine literature.
INTRODUCTION
In emergency medicine (EM), abdominal pain is a common complaint. The etiology of abdominal pain is vast and we are constantly faced with multiple diagnostic challenges. This case highlights a presentation of abdominal pain with a diagnostic imaging dilemma: a relatively benign exam with an unusual finding on computed tomography (CT). Intussusception, often considered a surgical emergency, is a common pediatric diagnosis but rarely seen in adults. We discuss the case of an incidental finding of adult intussusception on CT imaging. With the increasing use of CT in patients with abdominal pain, we are likely to see more of this transient and benign finding in the emergency department (ED). Although it has been discussed in surgical and radiological literature, 1,6-9 it is still a comparatively unfamiliar entity in EM literature, thereby motivating this case discussion.
CASE REPORT
A 31-year-old Caucasian female with a history significant for chronic recurrent pancreatitis, endometriosis, anxiety, depression, and previous cholecystectomy presented to the ED with abdominal pain for two days. She described the pain as constant, stabbing, and localized to both lower quadrants without radiation. She also complained of non-bloody, bilious emesis "too numerous to count" with non-bloody diarrhea. She denied any fever, dysuria, or vaginal bleeding or discharge. On presentation, her vital signs were normal but she appeared anxious and in moderate distress. Her abdominal examination revealed a soft, non-distended abdomen with normoactive bowel sounds. She was diffusely tender to palpation without rebound or guarding. There was no palpable mass, evidence of McBurney's point tenderness, or Rovsing's sign. The remainder of her physical examination was unremarkable.
Review of the patient's laboratory tests, including complete blood count, basic metabolic panel, liver function tests, lipase, and urinalysis, revealed no significant abnormalities. A urine human chorionic gonadotropin test was negative. The patient was given intravenous normal saline, ketorolac, ondansetron, and lorazepam for symptomatic control but later noted only mild pain relief. A contrast-enhanced CT of the abdomen and pelvis was obtained and showed normal kidneys, pancreas, and appendix. There was no free air, free fluid, biliary dilatation, or pericolic inflammatory change. Stool was present in the right colon with fluid in the small bowel representing mild constipation. An incidental finding of a jejunal short-segment intussusception in the left upper quadrant was seen without any evidence of bowel obstruction (see Image). Following discussion with the radiologist, it was determined to be a benign finding, completely asymptomatic in the absence of a small bowel obstruction. The patient was subsequently witnessed by nursing staff to induce vomiting while specifically requesting hydromorphone. The patient appeared comfortable and in no acute distress on multiple occasions while not being directly observed; however, when approached she promptly complained of unrelenting pain of 10 out of 10 severity. The patient was medicated with intravenous hydromorphone and shortly thereafter reported improvement in her symptoms.
DISCUSSION
Intussusception, the telescoping of one portion of the intestine into a contiguous segment, is a clinical entity that has been well described in children. It is a common cause of abdominal pain in the pediatric population and is usually idiopathic. However, intestinal intussusception is rare in adults, accounting for just 5% of all intussusceptions. 2 With considerable variability, the symptoms of adult intussusception are broad; the classic triad of abdominal pain, a tender palpable mass, and bloody stools is rarely seen. Instead, vomiting, GI bleeding, constipation, or abdominal distention are observed. 2 The most common presentation in adults is intermittent abdominal pain, typically described in cases of intussusception caused by an organic lead point, such as a mass or lesion, that produced the intussusception and subsequently a mechanical small bowel obstruction. 1,3 In one series, two cases of idiopathic adult jejunal intussusception were diagnosed on CT after both patients presented with nonspecific abdominal pain and nausea; neither required surgical intervention, and no underlying abnormality or lead point was found. 4
In this case, in the absence of an inciting factor such as an organic lesion, a transient non-obstructing intussusception without a lead point was identified. Although most often idiopathic, this type of intussusception has been seen in some patients with celiac or Crohn's disease. It does not require surgical intervention and will resolve on its own. 2 On the other hand, classic intussusception with a lead point typically involves an obstruction and has been attributed to conditions such as inflammatory bowel disease, adhesions, malignancy, and trauma. 1,2 In this case, the discrepancy between the locations of her pain and the intussusception, a benign physical examination, normal laboratory results, no CT evidence of obstruction, and the patient's possible malingering behavior all support that the intussusception was nothing more than an incidental finding.
Despite being operator dependent, ultrasound is currently considered the imaging diagnostic modality of choice in children. 5 However, the most sensitive test in adults is the CT, with sensitivities between 58-100%. 1,2 Transient non-obstructing intussusception in adults has been discussed in the radiological literature 6-9 but is not commonly recognized in EM, thereby prompting this case discussion. We further investigated what CT findings would more likely represent a transient intussusception as opposed to an intussusception requiring either medical or surgical intervention. The features seen on CT that help distinguish transient intussusception from obstructing intussusception include a "short…soft tissue density structure extending into the bowel lumen," "triangular or crescent-shaped fat density due to the eccentrically placed mesentery," and "normal calib[er] of the involved loop….
[and] loops proximal to the intussusception." 6 CT evidence of the classically described "target pattern," as seen in this case, corresponds to an "initial intussusception" without any signs of ischemia. 1,6 The progressive grades of obstruction seen on CT imaging correspond to what is described as a "reniform pattern" and then as a "sausage-shaped pattern" representing the last stage of the disease. 6 In this case, the classic "target pattern" was clearly visualized on the coronal sections of the CT images of the abdomen and pelvis, and the patient was monitored in the ED with complete resolution of her symptoms. On reevaluation, abdominal examination revealed normoactive bowel sounds and a nondistended, soft abdomen without any tenderness to palpation. There were no palpable masses or evidence of rigidity, rebound, or guarding. Based on these findings, the patient's clinical presentation, and our discussion with the radiologist, the patient was discharged home and instructed to follow up with her PCP and established gastroenterologist. A follow-up telephone call was attempted five days after discharge; however, the contact phone number provided by the patient was found to be invalid.
Review of her chart later revealed five additional ED visits also for abdominal pain. The first of these five other visits occurred only two months after her initial presentation. Two subsequent CTs performed on her did not demonstrate any target pattern, bowel obstruction, or other acute abnormality. She was discharged home in improved condition on all visits. These ensuing visits and CT images further support the transient nature of the intussusception seen initially.
CONCLUSION
In conclusion, this case demonstrates a unique finding not well documented in the EM literature. Unlike obstructive intussusception with a lead point, transient non-obstructing intussusception can present as an incidental finding that should not prompt emergent surgical evaluation in the ED. | 2018-06-05T06:40:46.680Z | 2017-10-03T00:00:00.000 | {
"year": 2017,
"sha1": "580c5ac5d07f521616cb2325509251e04ad79425",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt4cj228rt/qt4cj228rt.pdf?t=p3to49",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "580c5ac5d07f521616cb2325509251e04ad79425",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271254164 | pes2o/s2orc | v3-fos-license | Analysis of the contribution of critical thinking and psychological well-being to academic performance
This study examines the influence of critical thinking and psychological well-being on the academic performance of first-year college students. It emphasizes the importance of a model of psychological well-being focused on self-acceptance, environmental mastery and purpose in life, along with a critical thinking approach oriented to problem solving and decision making. A total of 128 first-year psychology students from a Spanish public university participated, assessed by means of Ryff’s psychological well-being scale (PWBS) and the PENCRISAL critical thinking test, complemented with grades obtained in a critical thinking course. The results show positive correlations between psychological well-being, critical thinking and academic performance, with a stronger relationship between critical thinking and academic performance. However, psychological well-being also plays a significant role in academic performance. The findings highlight the need for holistic pedagogical approaches that combine cognitive skills and personal development to enhance first-year students’ learning.
Introduction
In the context of the increasing demands of contemporary societies, in this study we address how critical thinking (CT) and psychological well-being (PWB) influence academic performance within the university setting. Upon entering university, first-year students are faced with the challenge of adapting to new academic dynamics and demands, which they must balance with the pursuit of personal satisfaction (Acee et al., 2012; Casanova et al., 2018). The adaptation process, which involves the achievement of academic goals and the projection of long-term life objectives, is fundamental to academic performance, considered a key indicator of successful adaptation and a reflection of the competencies required in the professional environment (Alonso-Borrego and Romero-Medina, 2016; Frick and Maihaus, 2016).
The goal of this research is to show the link between CT, which is characterized by analyzing and evaluating information, making evidence-based inferences, and reflecting on one's own thought process for decision making and problem solving (Bailin et al., 1999; Ennis, 2015; Jahn and Kenner, 2018; Saiz, 2020; Halpern and Dunn, 2023), and PWB, which focuses on personal development (Ryff, 1989, 2013; Ryff and Keyes, 1995), and to analyze how both contribute to academic performance. Despite the complexity of the factors that can influence academic performance, in this study we combine cognitive and socio-affective variables to better understand these dynamics. Based on the Ryff Psychological Well-Being Scale (PWBS), we examine how well-being, especially through self-acceptance, environmental mastery, and purpose in life, impacts academic performance. As a starting point we recognize that CT may have an even greater effect on academic performance. This holistic approach seeks to contribute to the debate on the competencies needed for the 21st century through the relevance of CT and PWB in university education and their role in the formation of individuals capable of coping with contemporary demands.
Contextualization and characterization of academic performance
In the university context, academic performance is influenced by a series of factors ranging from pedagogical practices and student satisfaction with them to more personal and intrinsic elements. These include the student's motivation and emotional state, academic background, IQ, personality traits and level of psychological maturity. This multi-layered approach highlights the complexity underlying academic performance and emphasizes the interaction between the educational environment and the individual qualities of each student.
A study by Oliván Blázquez et al. (2019) highlights the flipped classroom (FC) method in comparison to traditional lecture-based learning (LB) and shows that FC not only improves students' grades, but also maintains their satisfaction with learning without increasing their perceived workload. Although FC was initially perceived as more difficult, this did not have a negative impact on satisfaction or long-term learning, underscoring the importance of student perceptions and involvement in the learning process. These results support the introduction of FC in higher education and point to the need for continuous adjustments based on student feedback to maximize academic performance and develop critical and practical skills.
Beyond educational practices, Gilar-Corbi et al. (2020) investigated how motivational and emotional factors and prior academic performance influence college students' success. The study used the Motivated Strategies Learning Questionnaire (MSLQ) and the Trait Meta-Mood Scale (TMMS) to measure motivational learning strategies and emotional intelligence. The findings show that scores obtained in the diagnostic tests have a strong influence on academic performance, while emotional attention has a minor influence. The study points out that prior performance, together with self-efficacy and appropriate emotional regulation, plays a crucial role in predicting academic success. Thus, the authors suggest that interventions focused on improving self-efficacy and emotional intelligence may be key to optimizing students' academic outcomes.
In the same context, this time with more variables, Morales-Vives et al. (2020) investigate the influence of intelligence, psychological maturity and personality traits on the academic performance of adolescents, and find that these factors combined explain about 30% of its variability. Intelligence, especially in reasoning and numerical aptitude, emerges as the most significant predictor, while psychological maturity, reflected in work orientation, and traits such as conscientiousness and openness to experience have an indirect influence. These findings show that, although intelligence plays a decisive role, maturity and personality contribute to a lesser degree.
These conclusions and the recommendations derived from them resemble recent advances in academic research. One example is the work of Mammadov (2022), which draws attention to cognitive ability as the main predictor of academic performance, but also points to the relevance of conscientiousness, a personality trait associated with self-discipline and organization, which explains a significant part of the variability in academic performance. Mammadov also suggests that the influence of personality on performance varies by educational level, showing the dynamics between a student's personality and his or her educational context. These findings demonstrate the need for educational strategies that promote both cognitive development and the reinforcement of positive personality traits.
Recent research on academic performance shows two consensuses. First, there is a growing understanding of the influence of the interaction between intrinsic and extrinsic factors, including pedagogical methods and motivational, emotional and cognitive elements, in improving the performance and satisfaction of students in higher education. The studies reviewed highlight the relevance of cognitive ability and personality traits such as conscientiousness, and promote a holistic educational approach that integrates the development of cognitive and personality dimensions. Second, academic achievement is recognized as a multidimensional construct, objectively assessed through quantitative indicators such as grade point average (GPA) and standardized assessment scores. These reflect the attainment of educational objectives and the accumulation of knowledge and skills over time.
Contextualization and characterization of critical thinking
Halpern (1998) argues that intrinsic effort and a willingness to analyze and solve complex problems are key competencies for learning and adapting to a constantly changing environment. According to Halpern (1998), CT transcends the mere acquisition of analytical skills and requires the development of an active predisposition to question assumptions, consider diverse perspectives, and persist in cognitive effort. This disposition is by no means innate, but can be cultivated through a pedagogy that explicitly integrates the teaching of critical skills such as logical analysis, argument evaluation, and information synthesis, and that emphasizes problem structuring to facilitate skill transfer and metacognitive self-regulation. Halpern proposes an educational framework that promotes the acquisition of these skills and encourages reflection on the thinking process so that students are able to apply CT effectively in diverse contexts and continuously improve. This methodical and structured approach characterizes CT as a set of advanced cognitive skills and an exercise of conscious judgment that is essential for informed, evidence-based decision making, which integrates non-cognitive elements (Halpern and Dunn, 2023).
Throughout the development of the discourse on CT, various theories and their empirical foundations have evolved into meaningful educational practices, recognized in diverse academic settings. Meta-analyses, particularly those by Abrami et al. (2008, 2015), have contributed significantly to the understanding of effective teaching of CT and have emphasized the need for specific and tailored teaching strategies that incorporate clear CT objectives into educational programs. These studies demonstrate that CT, defined as a process of intentional, self-regulated judgment that includes interpretation, analysis, evaluation, and inference, is increasingly recognized as essential in the knowledge era. Abrami et al. (2008) note that critical skills and dispositions are developed through explicit pedagogical interventions, as opposed to spontaneous acquisition, which challenges traditional pedagogical paradigms and fosters a shift towards intentional educational practices, placing students at the center of learning.
In addition, a more detailed analysis by Abrami et al. (2015) identifies that strategies that encourage interactive dialogue, confrontation with real problems, and individual tutorials are particularly effective. This suggests that active and meaningful learning outperforms traditional methods in the development of critical skills. This approach not only enhances students' analytical and synthesis skills, but also facilitates the transfer of knowledge to new contexts, a key skill for the 21st century. The research reinforces the view that CT is a cross-cutting competency, crucial for navigating the complexity of contemporary challenges, and argues for an education that integrates these skills into all areas of learning.
Despite in-depth analyses of the need for CT, the growing discrepancy between rapid progress, the availability of information and the ability to critically analyze it poses a major challenge. Dwyer et al. (2014) point out that the exponential increase in global information has outpaced the ability of traditional education systems to teach effective CT skills, creating a gap that may leave students inadequately prepared for the challenges of today's world. The authors argue that the ability to critically evaluate, synthesize, and apply knowledge is crucial for academic success and survival in the 21st century. This approach highlights how CT, by fostering analytical and reflective skills, transcends academia to positively impact individual and collective well-being, and argues for educational strategies that bridge the gap between information acquisition and critical analytical skills.
Recent research on this topic points to the indisputable relevance of CT as an essential component of academic performance and to its role as a key predictor of success in educational processes. Rivas et al. (2023) show that CT transcends conventional cognitive skills. This is because CT is characterized as a rigorous practice that fosters in-depth analysis, critical evaluation and synthesis of information oriented to decision making and problem solving, fundamental skills for understanding and applying knowledge in complex contexts. Research shows that CT skills not only maintain a positive correlation with academic performance, but can be significantly improved through targeted educational programs. For this reason, the authors advocate their integration into curricula and educational assessment systems to prepare students for the challenges of the 21st century, especially as phenomena such as artificial intelligence acquire greater prominence in social and professional dynamics (Saiz and Rivas, 2023).
The literature on CT identifies two fundamental consensuses: first, it defines CT as an intentional and deep process, oriented to problem solving and decision making, based on meticulous analysis that goes beyond logical reasoning to include a critical evaluation of the basis for judgments. In addition, it involves detailed scrutiny and integration of new information in changing contexts, as well as metacognition, i.e., conscious self-regulation of thinking that facilitates adaptation and continuous improvement of cognitive strategies in accordance with the major demands and obstacles of the first half of this century (Dwyer, 2023). In its practical application, CT enables daily challenges to be met through informed judgments and a willingness to question and adjust perspectives in response to new information. Characterized by curiosity and adaptability, CT is essential for making responsible decisions and achieving successful outcomes, underscoring its practical value in both personal and professional settings.
Second, CT, beyond its theoretical value, can be conceived as a key theory of action for academic performance and PWB (Saiz, 2020; Saiz and Rivas, 2023), by enhancing in individuals the ability to face and solve problems in an effective and grounded manner. CT involves crucial skills such as analysis, evaluation and synthesis, indispensable for acquiring and retaining knowledge, and also for applying it in new contexts, which improves academic performance and has, in principle, positive effects on quality of life. Thus, CT emerges as an academic competence and an essential tool for everyday life (Dumitru and Halpern, 2023; Guamanga et al., 2023). Therefore, to synthesize theoretical paths with a practical function, we understand that "to think critically is to arrive at the best explanation of a fact, phenomenon or problem in order to know how to solve it effectively" (Saiz, 2024, p. 19).
Contextualization and characterization of psychological well-being
The task of relating concepts that are difficult to operationalize, such as well-being, is a major challenge; but it is necessary to approach it, more within a framework of CT understood as a means to achieve broad objectives than as an end in itself. Thinking critically transcends the mere application of skills or the accumulation of goal-oriented knowledge. In fact, it requires a detailed examination of the effect that such management has on the environment and how the satisfaction derived from reaching certain achievements is related to subjective aspects.
CT by its very deliberative and goal-oriented nature goes beyond the search for how to reach effective solutions and addresses a wider range of human and social consequences resulting from these actions (Facione, 1990;Elder, 1997;Jahn, 2019).The idea is to involve non-cognitive aspects that occupy a central place in academia, and that are crucial in the interaction between specific knowledge and skills, elements widely explored in the discourse of CT.In this sense, PWB has been selected as the focus of study, recognizing it as a desirable attribute in educational processes.The challenges this poses are not lost sight of, especially when it comes to quantifying transient, subjective and normatively mediated judgments about what states or conditions are considered good, healthy or desirable in the complexity of human experience, as detailed by Flanagan et al. (2023).Ryff (1989Ryff ( , 2013)), Ryff and Keyes (1995) contribution to the conceptual understanding and dissemination of PWB is notorious and highly valued in different fields of knowledge (Van Dierendonck and Lam, 2023).The imprint of his research has been marked by criticism of a reductionist conception of PWB that simplifies well-being to the presence of positive affective states (Ryff, 1989).Consequently, Ryff defends a much more complex multidimensional concept that seeks to attune the attainment of goals with the development of potentialities.Ryff 's thesis is that PWB is a multidimensional construct that transcends happiness or mere life satisfaction (Ryff and Keyes, 1995).Carol Ryff 's theory of PWB, based on humanistic, clinical and developmental psychology, as well as Aristotelian eudaimonia, focuses on self-actualization, the search for meaning and purpose in life as the core of well-being.As detailed in the text Happiness is everything, or is it?Explorations on the meaning of psychological well-being (Ryff, 1989) the model consists of six dimensions that converge in personal development: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance.
The first dimension, self-acceptance, implies a positive attitude towards oneself and an acceptance of all aspects of one's identity, including both positive and negative qualities. As for positive relationships with others, Ryff states that these are interpersonal relationships characterized by warmth, trust and genuine concern for the well-being of others; this dimension emphasizes the value of empathy for human well-being. Autonomy is defined by an individual's capacity to maintain independence and resist social pressures in order to regulate their behavior according to internal personal norms. This dimension emphasizes self-determination as a compass for the pursuit of well-being. On the other hand, environmental mastery emphasizes the ability to effectively manage and control the external environment, which implies a feeling of competence and control over personal and professional life. Finally, purpose in life and personal growth refer to the possession of goals, direction and a sense of development and fulfillment of one's potential. These dimensions reflect the search for meaning and continuous personal evolution as fundamental components of PWB.
Ryff's PWBS has established itself as a key instrument in positive psychology. Research after 1989 (Ryff and Keyes, 1995; Ryff, 2013) has explored the variability of these dimensions with age and across genders. These studies showed the influence of sociodemographic factors on well-being, so the model has been extended to consider the development of PWB across the lifespan as determined by more contextual factors such as health. This approach enriches the understanding of PWB and denotes the practical relevance of the construct in fields such as mental health and social policy. Ryff's work has inspired other researchers to discuss and extend its principles (Van Dierendonck and Lam, 2023). For example, Huppert (2009) complements Ryff's dimensions by emphasizing the management of negative emotions and resilience as key components of sustainable well-being; Huppert aligns this view with the World Health Organization (WHO) definition of health and adds a dynamic dimension on overcoming adversity. This theoretical and practical deepening demonstrates the robustness and adaptability of Ryff's model. The synthesis of these contributions confirms the value and applicability of Ryff's PWBS; they reveal how the eudaemonic model not only reinforces an academic discourse, but also guides practices that promote well-being in different contexts, consolidating itself as a vital field in human development.
However, owing to the complexity and breadth of the PWB construct, Ryff's PWBS has attracted observations that question its theoretical and statistical foundations. On the first aspect, the work of Disabato et al. (2016), by examining the distinction between hedonic and eudaimonic well-being, problematizes the theoretical basis of this dichotomy. Through an analysis incorporating data from 7,617 individuals from 109 countries, the authors find that there is no clear distinction between hedonic well-being experiences, focused on pleasure, and eudaimonic ones, related to personal fulfillment. The results indicate a high correlation between the two types of well-being (r = 0.96). This suggests that people do not significantly differentiate between pleasure seeking and self-fulfillment in their perception of well-being. This implies that the hedonic-eudaimonic dichotomy may not hold empirically and that, therefore, a unified model of well-being that reflects current behavioral dynamics should be sought.
From a statistical perspective, Ryff and Keyes's (1995) analyses show that the PWBS, composed of 18 items, meets psychometric criteria and shows strong internal consistency and moderate correlations among the different scales. Correlations between dimensions range from low to modest (0.13 to 0.46), suggesting that each dimension addresses unique aspects of well-being. From the theoretical model, this diversity underscores that, although interrelated, the dimensions represent unique aspects of psychological well-being. In terms of specific results, studies indicate that with age the dimensions of environmental mastery and autonomy increase, while purpose in life and personal growth tend to decrease, with no significant changes in self-acceptance and positive relationships with others. Women outperform men on positive relationships with others and personal growth, suggesting that changes in these dimensions reflect evolving priorities and perceptions of personal development across the life span (Ryff and Keyes, 1995).
On the number of dimensions of the PWBS, Blasco-Belled and Alsinet (2022) note that the six-dimensional theoretical model has generated debate even among experts in the field. Some suggest that a four-dimensional model (environmental mastery, personal growth, purpose in life, and self-acceptance) might represent a second-order PWB factor, indicating a possible conceptual overlap between Ryff's original dimensions; others exclude positive relationships with others and autonomy from the model. The network analysis of Ryff's PWBS conducted by Blasco-Belled and Alsinet (2022) shows four distinct dimensions; in one of them, the most important cluster of the network, self-acceptance, purpose in life and environmental mastery are grouped together, with special emphasis on self-acceptance because of its centrality in the network at the item level.
In the Spanish-speaking context, Nogueira et al. (2023) identified three main factors: autonomy, positive relationships with others, and competence. This suggests that the PWBS may vary according to cultural and contextual factors. Furthermore, although it is not a study analyzing the dimensions of Ryff's PWBS, the study by Páez-Gallego et al. (2020) applied the PWBS to Spanish adolescent students and found a strong positive correlation with the use of adaptive decision-making strategies. Specifically, the findings show that the adaptive approach is significantly associated with improvements in self-acceptance, environmental mastery, and purpose in life. In contrast, maladaptive strategies characterized by impulsivity and avoidance are associated with lower PWB. From this we infer that fostering effective decision-making skills is important for well-being and, in particular, we identify from empirical studies the dimensions of the PWBS that correlate with decision-making skills.
Taken together, these findings suggest that Ryff's PWBS, although pioneering and widely used, could benefit from revision to more accurately reflect the structure of PWB and its application in diverse cultural and educational contexts. The convergence of evidence from factorial and network analysis perspectives points to the need for a more integrated and adaptive model capable of capturing the complexity and dynamics of the underlying constructs. This underscores the continuing interest in PWB in research and practice. It is also an indication of the ongoing scholarly debate about its conceptualization and measurement. The recurrence of dimensions such as self-acceptance, environmental mastery, and purpose in life across analyses suggests a common core of PWB. This raises the question of whether these dimensions can be conceptually aligned with academic achievement and CT. In addition, questioning the boundaries between hedonic and eudaimonic well-being raises the issue of whether a broader construct is needed to analyze well-being in educational settings. In this context, we start from the premise that self-acceptance, environmental mastery, and purpose in life are sufficient to explore college students' PWB. These dimensions reflect students' ability to recognize their strengths and weaknesses, set goals, and navigate effectively in their educational environment, aspects that could be considered part of the dispositional component necessary for the development of higher-level competencies such as those of CT.
The research brings to empirical analysis the complex interplay between CT, PWB, and academic performance in the university context. We seek to answer how CT skills and PWB influence college students' academic performance, and how CT practices can be aligned with PWB to improve academic performance. We propose that the study variables converge in both a theoretical and an empirical model. The argumentative strategy consists of analyzing the direct impact of CT on academic performance, assessing whether PWB correlates with better academic outcomes, examining in detail the predictive value of the relationship between CT and PWB for academic performance, and finally, according to the data obtained, proposing some dialogic bridges between the cognitive and non-cognitive aspects of CT.
Participants
The study involved 128 first-year psychology students from a Spanish public university. The vast majority were women (83.1%), with only 16.9% men, which is usual in social sciences and humanities degrees. Age ranged from 18 to 33 years, with a mean of 19.28 (SD = 1.73). The sample was essentially composed of students who had recently completed secondary education (75.3% of the students were 19 years old). There were no statistically significant age differences between females (M = 19.09, SD = 0.814) and males (M = 20.20, SD = 3.78), although the males' ages were both higher on average and more dispersed.
Instruments
The instruments applied were Ryff's PWBS in its Spanish adaptation (Díaz et al., 2006) and the PENCRISAL critical thinking test (Saiz and Rivas, 2008; Rivas and Saiz, 2012). For academic performance, the academic records of the students participating in the critical thinking course in the first year of the psychology degree were collected. Grades range on an ascending scale from 1 to 10.
As discussed above, Ryff's PWBS has been specified in different models. This instrument aims to measure psychological well-being, focusing on students' own evaluations of their situations and perceived success in various aspects of life and personal development. It explores well-being through six main dimensions: self-acceptance (α: 0.83), positive relationships with others (α: 0.81), environmental mastery (α: 0.71), autonomy (α: 0.73), purpose in life (α: 0.83) and personal growth (α: 0.68). The questionnaire consists of 39 items, presented in a Likert scale format ranging from 1 (strongly disagree) to 6 (strongly agree) (Díaz et al., 2006).
Consistent with the complexity of the scale and with findings shared with other studies, we chose to consider only self-acceptance, environmental mastery and purpose in life. In support of this methodological decision, we performed an exploratory factor analysis (principal components method) with our sample to test whether these three dimensions converge on the same factor. The data confirm this convergence and show that this single factor has an eigenvalue of 2.43 and explains a very high proportion of the variance (81.1%).
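To illustrate the kind of convergence check described above, the following minimal sketch runs a principal components extraction on three simulated subscale scores. The data, sample construction and variable names are hypothetical stand-ins, not the study's actual data set.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(128, 1))             # simulated shared well-being factor
noise = rng.normal(scale=0.5, size=(128, 3))
subscales = latent + noise                     # stand-ins for self-acceptance,
                                               # environmental mastery, purpose in life

pca = PCA(n_components=3)
pca.fit(subscales)
print("eigenvalues:", pca.explained_variance_)
print(f"first component explains {pca.explained_variance_ratio_[0]:.1%} of variance")
# A single dominant eigenvalue with a high explained-variance share supports a
# one-factor reading of the three dimensions; the analogous values reported in
# the study were 2.43 and 81.1%.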
In the case of the PENCRISAL, the full version was applied, and a score was obtained for each of the five dimensions together with the total score. The PENCRISAL was applied to measure CT skills. This test consists of 35 problem situations that participants answer in an open-response format. The test is organized into five key areas: deductive reasoning, inductive reasoning, practical reasoning, decision making and problem solving.
The deductive and inductive components test different forms of reasoning, such as propositional, categorical, causal, analogical and hypothetical. Decision making measures the ability to make probabilistic judgments and to use heuristics effectively to identify potential biases. The problem-solving section presents participants with general and specific problems that require appropriate solution strategies. These sections are intended to encourage the application of strategies necessary for effective problem planning. The open-ended question format encourages participants to justify their answers, which are evaluated using a scoring system that rates the quality of their responses on a scale of 0 to 2. Responses are converted into numerical scores using item-specific criteria. These are used to describe and identify the thinking mechanisms underlying each response. A score of 0 indicates that the answer is incorrect, 1 indicates that the answer is correct but no or inadequate justification is provided, and 2 indicates that the answer is correct and adequate justification is provided. The PENCRISAL yields an overall CT score ranging from 0 to 70, and from 0 to 14 for each dimension. Reliability assessments show satisfactory accuracy, with a minimum Cronbach's alpha of 0.632 and a test-retest reliability of 0.786 (Rivas and Saiz, 2012). The test is administered online through the SelectSurvey.NET V5 platform.
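The score ranges above imply seven items per dimension (7 x 2 = 14 per dimension; 5 x 7 = 35 items; maximum total 70). The sketch below aggregates hypothetical item ratings accordingly; it illustrates the arithmetic only and is not the actual PENCRISAL scoring software.

DIMENSIONS = ["deductive", "inductive", "practical",
              "decision_making", "problem_solving"]
ITEMS_PER_DIMENSION = 7   # implied by the 0-14 per-dimension range

def score_pencrisal(item_ratings):
    """item_ratings: dict mapping each dimension to seven ratings in {0, 1, 2}."""
    dim_scores = {}
    for dim in DIMENSIONS:
        ratings = item_ratings[dim]
        assert len(ratings) == ITEMS_PER_DIMENSION
        assert all(r in (0, 1, 2) for r in ratings)
        dim_scores[dim] = sum(ratings)            # each dimension scored 0-14
    return dim_scores, sum(dim_scores.values())   # total scored 0-70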
Procedures
Students gave their free and informed consent to participate in the study. The PWBS was administered at the beginning of the semester of the CT course. The PENCRISAL test was taken at the beginning and at the end of the academic period. Only the results of students who completed both instruments are considered. Academic performance is represented by the grade obtained by students at the end of the course. Statistical analyses were performed with IBM SPSS version 29.0. After computing the descriptive statistics, we proceeded to a correlation analysis and, finally, we evaluated the impact of the PWBS and CT on the variance of academic performance by performing a regression analysis.
Results
Table 1 presents the descriptive data of the students' scores on the two instruments applied and the measure of academic performance. In addition to the minimum and maximum values, the mean, standard deviation and indicators of skewness and kurtosis of the distribution of the results are presented.
Observing the results, we can see a distribution with a slight tendency towards values above the mean (m = 79.80) for the PWBS, which is reflected in a negative skewness (−0.437). With respect to the five dimensions of CT, it can be stated globally that the mean values of DR, IR and PS lie away from the maximum observed value and towards the minimum, which represents positive skewness. The opposite situation occurs with the PR dimension. Regarding the TCT, the data show a tendency towards scores around the mean (m = 37.21), as can be deduced from the residual values of skewness and kurtosis. Regarding AP, the data suggest a balanced distribution of academic scores around an intermediate value between the lower and upper extremes of 3.66 and 9.01 (m = 6.10), with very low skewness and kurtosis.
In general, the results show good variability or dispersion, since the mean of each variable is located in the center of the data interval, which is desirable in research to adequately represent the population studied. Skewness and kurtosis indices close to zero for academic achievement are especially indicative of a normal or Gaussian distribution of values. The slightly higher kurtosis in the IR dimension of CT (2.248) is still acceptable.
Table 2 shows the correlations between the variables in this study. Since these were interval metric variables, Pearson's product-moment method was used to calculate the correlations. For statistical significance, the two-tailed test was used and p < 0.05 was set as the threshold of significance.
According to the data, the highest correlation is found between TCT and AP, with the lowest being between the CT and PWBS measures (no correlation). At an intermediate level is the correlation between PWBS and AP. Likewise, all the dimensions of CT correlate with AP, with values between 0.183 (PS) and 0.337 (PR). As can be seen, there are variations in the correlations among the five dimensions of CT, but all have high correlations with the total score (between 0.502 and 0.668). Accordingly, only the TCT score is used in the regression analysis.
In summary, the data suggest that there is a significant and positive relationship between PWBS and AP, as well as an even stronger and more significant relationship between TCT and AP. There is no evidence of a significant relationship between PWBS and TCT. To further explore the relationships between cognitive and non-cognitive variables in AP, we turned to a regression analysis. We opted for a linear regression with PWBS and TCT as predictors and AP as the criterion or dependent variable. Table 3 presents the regression values obtained.
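For readers who wish to reproduce this kind of model, the sketch below fits the same two-predictor linear regression with statsmodels; the data frame and its column names ("PWBS", "TCT", "AP") are hypothetical placeholders for the study's variables.

import pandas as pd
import statsmodels.api as sm

def fit_ap_model(df: pd.DataFrame):
    X = sm.add_constant(df[["PWBS", "TCT"]])   # predictors plus intercept
    model = sm.OLS(df["AP"], X).fit()          # academic performance as criterion
    # model.fvalue, model.f_pvalue, model.rsquared_adj and model.tvalues
    # correspond to the F-test, adjusted R-squared and t-values in Table 3.
    return model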
The regression model was found to be statistically significant, with F(2, 88) = 18.571, p < 0.001. This indicates that, collectively, PWBS and TCT provide significant prediction of AP. The adjusted coefficient of determination is 0.285, which means that approximately 28.5% of the variability in AP can be explained by the independent variables in the model. As can be seen from the t-values and significance levels, both variables have a significant impact on AP, although TCT has the greater impact.
In a complementary manner, with the objective of enriching the analysis of the influence of CT on AP, we included measures additional to the grade obtained by the students in the course (NCT): the selectivity grade with which they entered the university (NEBAU), the average grade of the transcript (NMEXP), that is, the grades of the other courses that the students must take, and the pretest results obtained with the PENCRISAL (PCT). The data obtained are recorded in Table 4.
Table 4 shows that the relationship between PWBS and NMEXP has a Pearson correlation of 0.075, with a p-value of 0.372. This low correlation indicates that the connection is minimal. In contrast, the relationship between TCT and NMEXP shows a stronger correlation of 0.464**, suggesting a moderate positive association. The significance of this correlation, less than 0.001, indicates a statistically significant relationship, implying that this result is unlikely to be due to chance. A similar case occurs with the relationship between NEBAU and NMEXP.
Given this context, if we perform a multiple linear regression analysis with NMEXP as the dependent variable and PWBS and TCT as independent variables, we would expect TCT to have a more significant impact on NMEXP. This projection is based on the statistically significant correlation of these variables. On the other hand, NEBAU has a slightly lower correlation with NMEXP compared to TCT (0.455 vs. 0.464), but the difference is very small, indicating that both have similar predictive capacity for NMEXP in terms of linear correlation.
Confirmation of these hypotheses by appropriate regression analysis would provide a more detailed and accurate understanding of how PWBS and TCT individually contribute to the prediction of NMEXP, considering the influence of interrelated variables. However, in performing this procedure, a reduction in sample size to only 64 cases was observed. This increases the risk of failing to detect significant differences or could lead to unstable effect estimates.
Discussion and conclusions
CT seeks to understand and effectively solve problems through a correct approach to the problem, the generation of solution alternatives filtered by the mechanism of explanation, and the selection of a solution, all with the aim of achieving a desired change. The PENCRISAL test is based on this defining framework of CT (Saiz and Rivas, 2008; Rivas and Saiz, 2012). Therefore, if we start from this concept and look at the data, we can conclude that CT is a good predictor of academic performance.
Table 2 shows a positive and moderate correlation (0.514) between CT and academic performance, suggesting that an increase in CT is associated with an improvement in academic performance. Meanwhile, Table 3 shows, with a B coefficient of 0.074 and a Beta of 0.473, that CT has a stronger relationship with academic performance than PWBS does. This means that for every unit increase in CT, academic performance increases on average by 0.074 units, and this effect is considerably significant in the model. The robust correlation and its impact on the dependent variable highlight that CT is a determinant competence for academic performance and suggest it as a relevant diagnostic and formative tool in the educational field. Although it is not the only factor that influences academic performance, CT is a significant predictor and one that can be worked on or trained in the classroom.
Declaratively, the current study coincides with other results obtained and recorded in Rivas et al. (2023). On that occasion, the authors found that CT is a predictor of academic performance and that the benefits of instruction can be sustained over time. The study showed a correlation between CT and academic performance of 0.32. The main difference between these two studies concerns the objectives. The previous study did not attend to the explicit discussion of how CT could influence well-being, or vice versa. The current work recovers this line and incorporates non-cognitive variables into the analysis framework to account for well-being, under the assumption that this construct should have a significant impact on academic performance.
More generally, if we consider that, although the construct of intelligence is not the same as CT, they have several points of convergence (Butler et al., 2017), then we can establish a dialogue with other studies on the factors that influence academic performance. Intelligence represents the intrinsic capacity to learn, understand, reason, and meet challenges through problem solving in order to adapt to the environment (Sternberg, 1985). This cognitive potentiality manifests itself in various ways, with CT being one of its most relevant expressions, particularly in situations that demand deep analysis, evaluation, and decisions based on logical reasoning (Saiz, 2024). CT, therefore, acts as an essential tool that intelligence employs to navigate effectively through complex and challenging real-world situations (Halpern and Butler, 2018). In this conceptual line, the current results partially coincide with studies that have shown that the best predictors of academic performance are cognitive components, such as measures of general intelligence, analogical reasoning, fluid intelligence, and logical, verbal and quantitative reasoning (Morales-Vives et al., 2020; Mammadov, 2022), as well as scores on diagnostic and university entrance tests (Gilar-Corbi et al., 2020).
In our study the other factor of analysis was PWB. Although, due to its non-cognitive nature, it would be at a disadvantage per se compared to cognitive factors, the data also show that its inclusion in educational research, especially to account for academic performance, is significant. In Table 2, the analysis of the correlation between PWBS and academic performance reveals a positive relationship with a correlation coefficient of 0.336. Although the correlation is moderate and not as strong as that observed between CT and academic performance, it is still significant and should not be ignored in the pursuit of improving students' academic performance. Table 3 shows that PWBS has a positive and significant influence on the dependent variable. The standardized coefficient (Beta) of 0.271 indicates a positive relationship between PWBS and academic achievement. The unstandardized coefficient (B) shows that, holding all other variables constant, for each unit increase in PWBS, academic performance increases on average by 0.022 units. This relationship, supported by a low standard error of 0.007, points to a moderate but significant contribution of PWBS compared to other variables.
These findings show that the integration of some aspects of PWBS could be an effective strategy to improve academic performance, evidencing a beneficial and significant relationship between both aspects. PWB can influence academic performance through non-cognitive conditions or factors involved in learning, such as motivation, academic satisfaction, effective coping with stress or anxiety, and the acceptance and management of limitations related to the process of appropriation of and adaptation to one's own identity.
However, it is important to emphasize that PWB is a construct that requires careful theoretical and empirical review in the educational context, as the Ryff scale is subject to open debates, and the lack of unified criteria on the number of dimensions influences these results. To cite just one case, we used three dimensions out of six, with statistical and literature support, but the data might differ under a different selection approach. This finding highlights the importance of students' PWB as part of a comprehensive educational strategy, but also shows that the direct impact of PWB on academic performance may be less pronounced than the impact of cognitive skills, and that, owing to its multidimensional and complex nature, it is not easy to incorporate into an instructional design. Despite this, higher education institutions can take care of the institutional and relational climate so that students feel good and take advantage of the formative and educational opportunities of the academic environment. In the case of CT, there are concrete and validated training strategies that make it possible to improve skills such as argumentation, explanation, problem solving and decision making (Guamanga et al., 2023; Saiz, 2024). On the PWB side, the same cannot be said, owing to the lack of empirical support; however, some studies have proposed a path that incorporates socio-emotional competences into the training of CT, a proposal characterized by the cognitive-emotional methodology, with interesting results that still need to be explored and debated (Hanna, 2013).
Table 2 shows low and non-significant correlations between PWBS and the different forms of reasoning (deductive, inductive and practical), as well as with decision making and problem solving. For example, the correlation between PWBS and deductive reasoning is −0.082, which is not only low but also lacks statistical significance. Additionally, the correlation between PWBS and decision making is −0.132, which is also low and not significant. Although there is a positive correlation between PWBS and problem solving (0.040), it is very low and not statistically significant, so there is not enough evidence to claim a positive relationship between these variables. This reinforces the idea that there is no direct and significant relationship between how a student feels psychologically and CT skills or, more precisely, that such a relationship is not supported by the data from this sample. It is possible that there are unexamined mediating factors that influence these relationships, or that the relationship exists in a different context or with different measures.
The results of the present study do not coincide with other research that has shown positive relationships between decision making and PWBS, especially with self-acceptance, environmental mastery, and purpose in life. The study by Páez-Gallego et al. (2020) addresses this issue by exploring how the PWBS of adolescents in Madrid, Spain, is linked to their decision-making methods. The research concludes that there is a positive correlation between the use of adaptive decision-making strategies and PWBS. Adolescents who opt for a rational and systematic evaluation of available options report higher levels of well-being. Specifically, adaptive decision-making style correlates significantly with overall well-being (0.544) and with aspects such as self-acceptance (0.485), positive relationships with others (0.242), environmental mastery (0.472), autonomy (0.359), purpose in life (0.473), and personal growth (0.346). In contrast, those who resort to maladaptive strategies, marked by impulsivity or avoidance, show reduced PWBS (−0.458).
The discrepancy in results with this study could be due to the difference between the instruments used to assess decision making. While Páez-Gallego et al. (2020) used the Flinders Adolescent Decision Making Questionnaire (FADMQ), which focuses on personal perceptions and experiences of decision making, our study uses the PENCRISAL, which, although not limited to decision making, does include this ability as an essential component of CT. The latter measures the ability to identify, analyze and solve everyday problems through items that simulate real situations, assessing the ability to choose the best solution or action strategy. Because the PENCRISAL responses are open-ended, it allows for a detailed assessment of how participants describe or explain their decisions. Ultimately, the fundamental difference between these two measures is that one is a self-report of perceptions and experiences, while the other is a set of problems to be solved correctly; in other words, one collects impressions of decision making and the other collects realized decision making. Therefore, although both studies applied Ryff's PWBS, the differences between instruments and approaches to decision making explain the variations in the results. This divergence evidences the relevance of considering the context and the specific instrument when interpreting the relationship between the PWBS and decision making. Despite these findings, the need to further explore these interactions persists, especially given that the three selected dimensions (self-acceptance, environmental mastery, and purpose in life) theoretically align with CT approaches focused on explanation and the development of post-decisional skills, such as decision making and problem solving (Guamanga et al., 2023). A CT approach that emphasizes the development of these skills must consider effects that transcend immediate or tangible outcomes. Therefore, it is crucial to understand how the concept of PWB, as examined above, relates to CT. Specifically, it must be determined whether some of these dimensions align directly to foster effective CT, or whether they instead lean more towards a conception of well-being in a more general sense, which could include hedonic aspects.
The emphasis on CT oriented to decision making and problem solving through the analysis of explanations and causalities should be evaluated for its pragmatic effects on PWB. At first glance this idea seems to confront parallel concepts paradoxically united by the same diachronic nature. In the case of CT, this nature explains the high demands placed on it. For example, it is not enough to say that it contributes to tangible improvements in academic performance; its usefulness is also expected to transcend academia and materialize in skills of interest to organizations in all sectors of the economy (Casner-Lotto and Barrington, 2006; Atanasiu, 2021). However, its practical impact still presents serious challenges, especially when students, as active subjects of learning, face limitations in anticipating the usefulness and applicability of these critical skills for the future. This is partly explained by the fact that the educational system prioritizes academic performance over the comprehensive development required later in the professional sphere (Saiz, 2020). This means that CT can be interpreted as an unfulfilled or partial promise. It is certainly a reading that omits the particular contexts, interests, motivations and concerns of students while they take part in these instructional programs, and then the same factors as analyzed by a student who knows that he or she must make the transition to the professional field.
A similar situation occurs with PWB as a diachronic phenomenon. A single instant in time is not enough to understand and analyze students' PWB. It is necessary to focus on how it changes and evolves through different stages, including through feelings of achievement or frustration in the academic process. Thus, it is recognized that PWB is not static and, therefore, evolves through lived experiences, among them those comprising the applicability of a series of learned skills. This implies that, as diachronic phenomena, CT and PWB can evolve and influence each other over time. This approach requires longitudinal studies to follow the evolution of the impact of curricular interventions aimed at strengthening cognitive skills such as those of CT, in order to understand how these may influence PWB in the long term.
The limitations of this study, beyond a small sample that prevents generalization of the results, the examination of only certain dimensions of the PWBS, and the theoretical impossibility of performing regression analyses with other performance measures, lie in the diachronic nature of the constructs studied. This characteristic makes it difficult, as has been argued, to give a definitive answer on the relationship.
Within the framework of the PWBS triad model we are analyzing, it is possible to group several key concepts theoretically. The development of CT involves a process of self-acceptance, which is crucial given our inherent tendency toward error. This process allows us, through a reflective evaluation of our past and present, to recognize and accept beliefs that we have discarded as erroneous. This self-acceptance facilitates deeper introspection, allowing us to see these errors as essential learning opportunities in our lives. On the other hand, any model that emphasizes post-decisional skills must also consider the non-linear complexity of our reality and provide solid criteria for problem solving and decision making in order to master our environment more effectively. This is what allows us to adapt better, both biologically and socially. Finally, this approach to CT inevitably values purpose in life by seeking to ensure that it is determined in part by integrating the best tools of science, philosophy and education for a more effective life orientation, grounded in the principles of rationality. The importance of setting clear goals, recognizing that their achievement requires effort, discipline and determination, is essential to being an effective critical thinker.
Therefore, although each dimension proposed by Ryff's PWBS possesses a conceptual richness that requires empirical validation, the dimensions selected for this study are aligned with a model of CT focused on problem solving and real-world decision making. Although we aspired to discover stronger links between PWB and CT, and to deepen their interrelationship, the theoretical parallelism analyzed is also reflected in the empirical results. Moreover, PWB as an operational concept, owing to its complexity and multidimensionality, is subject to continuous revision or possible unification into a broader notion of well-being.
In future research on this topic, it is essential to include a broader set of variables predictive of academic performance. This includes, but is not limited to, students' selectivity record and cumulative grades in other subjects. In addition, a more solid and theoretically robust concept of well-being must be adopted, one that fits contemporary educational and professional demands. This concept must transcend the simple distinction between eudaimonic and hedonic well-being and address its diachronic nature. It is important to explore how these dimensions of well-being are interrelated, whether as cause or effect, and to examine whether CT fosters a virtuous circle with well-being.
TABLE 2
Correlations between study variables.
TABLE 1
Descriptive statistics for the measures used (n = 128).
TABLE 4
Correlations between study variables and complementary measures.
TABLE 3
Impact of psychological well-being and critical thinking on academic performance. | 2024-07-18T15:04:38.392Z | 2024-07-16T00:00:00.000 | {
"year": 2024,
"sha1": "9b63873018df6c0a3e0bfbd5436c2ba858c496c0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/feduc.2024.1423441",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b2f322e76700f25b04960f858c3f85d23ec4116d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
9027648 | pes2o/s2orc | v3-fos-license | Shoulder Arthroplasty Imaging: What’s New
Background: Shoulder arthroplasty, in its different forms (hemiarthroplasty, total shoulder arthroplasty and reverse total shoulder arthroplasty), has transformed the clinical outcomes of shoulder disorders. Improvement of the general clinical outcome is the result of better adequacy of the treatment to the diagnosis, enhanced surgical techniques, specific implanted materials, and more accurate follow-up. Imaging is an important tool in each step of these processes. Method: This article is a review of recent imaging processes for shoulder arthroplasty. Results: Shoulder imaging is important for shoulder arthroplasty pre-operative planning but also for post-operative monitoring of the prosthesis, and this article focuses on the validity of plain radiographs for detecting radiolucent lines and on new computed tomography scan methods established to eliminate the prosthesis metallic artefacts that obscure the visualisation of component fixation. Conclusion: The number of shoulder arthroplasties implanted has grown rapidly over the past decade, leading to an increase in the number of complications. In parallel, new imaging systems have been established to monitor these complications, especially component loosening.
INTRODUCTION
The ultimate therapy for primary shoulder osteoarthritis refractory to medical treatment is total shoulder arthroplasty (TSA). However, the constant increase in indications (rotator cuff injuries, proximal humerus fracture, bone loss, cuff tear pathologies, revision of already implanted material [1]) and in incidence [2] leads to numerous planned interventions and the follow-up of many patients.
From the diagnosis of the pathology to the long-term follow-up of the implant, imaging is used in many steps of the prosthesis life history.
The goal of this review article is to establish a non-exhaustive list of radiological methods that assist surgeons in the diagnosis of shoulder diseases eligible for shoulder arthroplasty, refine the prediction of clinical outcomes, adapt materials and techniques to design patient-specific procedures, and assist in the long-term follow-up of patients. We separated the processes into two groups: preoperative imaging and postoperative imaging.
PREOPERATIVE IMAGING
In this review, we will focus on the imaging used in emergency (fractures) and chronic (osteoarthritis, …) shoulder conditions requiring shoulder arthroplasty.
Fracture
Imaging has become a crucial step for the diagnosis and treatment of proximal humerus fractures. However, plain radiographs are not always sufficient, as numerous crucial radio-transparent structures (ligaments, capsule, tendons, muscles) surround the shoulder joint, and bone fractures require thorough imaging examinations if the radiographs are not conclusive. Thus, Computed Tomography (CT) scan and, more rarely, Magnetic Resonance Imaging (MRI), when an assessment of soft tissues is required, are used for a better understanding of the fracture.
Three-dimensional reconstruction of bone structures is frequently available, and numerous recent studies tend to take advantage of this tool to assist surgeons in planning the surgery.
Proximal humerus fractures require relatively urgent surgical management: the surgeon can rely on the patient's data, clinical examination, plain radiographs and CT scan images to assess the damage, adopt the most adequate therapeutic strategy and predict the functional outcome after surgery. However, long-term prognosis of the functional result is challenging, as complications are frequent and jeopardize the results. Boileau et al. [3] reviewed proximal humeral fractures treated by hemiarthroplasty with retrospective plain radiographs and CT scans in order to evaluate risk factors for tuberosity complication and poor functional outcome. They identified radiological criteria that can be used to predict good functional outcome: anatomical positioning of the greater tuberosity, healing of the greater tuberosity around the prosthesis and restoration of the scapulo-humeral arch.
Appropriate imaging is therefore mandatory before surgery to establish the appropriate choice of therapy and estimate the surgical difficulties. Gregory et al. [4] advocate the systematic use of pre-operative computed tomography in 3- and 4-part proximal humerus fractures, to analyse fragment displacement and comminution, classify the fracture, assess humeral head vitality, evaluate the mechanical properties of the underlying bone and plan the height of the prosthesis.
Elective Surgery
Total shoulder arthroplasty is the main treatment for advanced shoulder osteoarthritis. Depending on the integrity of the rotator cuff, the surgeon can choose between anatomical and reverse shoulder arthroplasty. The surgical procedure is challenging and its success is linked with thorough preoperative planning. Beyond the analysis of the rotator cuff status (tendinopathy, trophicity and fatty degeneration) from CT arthrogram, ultrasound scan or MRI, CT scan three-dimensional reconstructions are crucial to determine the 3D deformation of the glenoid due to erosion, the occurrence and location of osteophytes and subsequently the centre of the native glenoid, and to evaluate the residual bone stock of the glenoid [5,6]. When associated with osteoabsorptiometry, the 3D subchondral bone density distribution of the arthritic glenoid vault can be addressed [7].
The above-mentioned data are critical to plan the operative management of bone loss [8].
In reverse arthroplasty, they determine the viable options in the positioning of the glenosphere [9]. In anatomical shoulder arthroplasty, they ensure correct positioning (version, inclination, rotation, offset) of the glenoid implant, knowing that non-aligned implants lead to increased radiographic loosening rates [10,11]. However, Gregory et al. [12] compared preoperative and postoperative CT scans of patients undergoing TSA and showed that the glenoid component positioning strongly depends on the preoperative glenoid erosion.
Authors have recently evaluated innovative surgical methods based on pre-operative CT 3D reconstruction: patient-specific instrumentation. Levy et al. [13] and Walch et al. [14] developed this novel surgical method for placing the glenoid component with the use of patient-specific templates created by preoperative surgical planning and 3D models: the principle is to virtually place the glenoid implant on preoperative CT exams with the use of dedicated software. Then, a patient-specific guide is created by 3D printing technology to direct the guide pin into the desired orientation and position in the glenoid during the surgical procedure.
Immediate Postoperative Imaging
Standard plain X-rays (true anterior-posterior view and lateral Lamy view) are routinely taken after the total shoulder arthroplasty procedure. They allow verification of the correct positioning of the implants (matching, orientation) and provide reference images on which the follow-up of the patient depends: anything new appearing in the follow-up images can be compared with the first ones.
Recently, these plain radiographs have been used in numerous studies to find new immediate postoperative criteria leading to a longer lifespan of the implant and improving long-term general acceptance, generally using scores (Constant, Oxford shoulder). For instance, Lädermann et al. [15] showed that reverse shoulder arthroplasty performed using a deltopectoral approach reduced the length of the arm by 0.5 cm more than the transdeltoid approach. When it comes to active anterior elevation, however, the transdeltoid approach minimally restricted the angular amplitude, by 10°. These results are based on the comparison of preoperative plain radiographs with immediate postoperative images. Without entering into the details of all the results, early plain radiographs have allowed researchers to study numerous other postoperative parameters: anatomic restoration of the humeral head using Copeland shoulder resurfacing arthroplasty versus the standard approach [16] or a subscapularis-sparing approach versus the standard approach [17], displacement of the centre of rotation induced by the operation using stemmed or resurfacing methods [18], mean neck-shaft angle for 3- or 4-part proximal humeral fractures [19] or using a novel reverse shoulder arthroplasty [20], and the involvement of scapular neck length in scapular notching after reverse shoulder arthroplasty [21]. Other imaging techniques have also been used to visualize anatomical elements not seen on plain X-rays. For instance, Felix et al. [22] used magnetic resonance imaging and ultrasound systems to assess the integrity of the subscapularis tendon after a novel technique assumed to spare it.
Long-Term Follow-Up
During the years following surgery, the patient is monitored by evaluation at regular intervals. The monitoring includes the clinical level of functionality, mostly assessed by scores, angles of shoulder amplitude and radiographic images. The most common complications are infection, stiffness, remaining pain, shoulder instability, secondary rotator cuff tear and aseptic loosening of the implant. Glenoid loosening continues to be the primary reason for the failure of total shoulder arthroplasty (TSA) [23]. In a meta-analysis involving 33 studies and 2540 TSA from 1996 to 2005 [2], the rate of aseptic loosening was reported to be 39%, with 83% of those involving the glenoid component.
Loosening Mechanisms
In TSA, aseptic loosening mostly involves the glenoid implant. The different mechanisms involved are the rocking-horse effect and impingement between the edge of the glenoid rim and the humeral metaphysis, especially in the uncovered area [24]. These two mechanisms generate polyethylene (PE) particles, leading to the development of a polyethylene granuloma that causes the aseptic loosening. Numerous aggravating factors encourage these mechanisms [25]. The most important are the quality of the primary fixation of the glenoid implant and its positioning, the positioning of the humeral head, the mismatch between these two implants, the quality of the underlying subchondral bone, the roughness of the implants and the cementing technique. Specifically, positioning of the glenoid implant is a critical step for clinical outcomes and the long-term lifetime of anatomic total shoulder arthroplasties [28]. Therefore, preoperative assistance with implant positioning seems necessary when placing an anatomic total shoulder arthroplasty. One can select an instrument set positioned on the non-damaged areas of the scapula, an individual instrument set based on preoperative images (CT scan or MRI), rapid-prototype instrumentation or navigation systems.
Radiolucent Lines Observed on Plain Radiographs are Not Reliable Evidence of Loosening
The mean rate of radiolucent lines in series with more than 10 years of follow-up is reported to be 80% [26]. However, the reported occurrence of radiolucent lines varies greatly between the published series (from 0 to 100%) and has proven to be inconsistent.
It is widely admitted that only progressive radiolucent lines are associated with loosening of the glenoid implant. This criterion of "progression" is questionable, since the value of a single observation of a radiolucent line is itself questionable: it is observer dependent, and even a slight change in the incidence of the radiograph can interfere with the RL analysis [26,27]. Beyond that, X-rays underestimate radiolucent lines. Yian et al. [26,27] studied a series of 47 TSA: 40% of the radiolucent lines visualized with CT scan could not be seen on the plain X-rays. More recently, Gregory et al. [26] showed that the inter-observer reliability of analysis based on CT scan images is three times higher than that based on plain X-rays, and that 74% of the osteolysis seen on CT scan images could not be seen on the plain X-rays. Thus, the results of studies based on radiolucent lines from plain X-rays are questionable.
Radiolucent Lines and Osteolysis Seen From CT Scan Images are Linked to the Loosening of the Glenoid Implant
RL analysis based on CT scan images is more reproducible than analysis based on X-rays [27,28] and allows periprosthetic osteolysis to be analysed. This osteolysis can be defined as an area free of bone framework wider than 2 mm. Gregory et al. proposed a five-stage score to classify osteolysis around the glenoid implant [28]: absence of osteolysis, osteolysis located at one or two aspects of the fixation, massive osteolysis surrounding the whole fixation while sparing the cortical bone, massive osteolysis with one or more cortical permeations, and massive osteolysis associated with lysis of the cortical bone. In a sample of 68 TSA followed up and assessed with CT scan within a 6- to 88-month period (mean 35, SD 26), Gregory et al. [28] showed an increase in radiolucent lines, assessed with the Molé score, and in osteolysis. Clinical results are consistent with deterioration of the fixation of the glenoid implant with time.
Besides that, the connection between radiolucent lines assessed from CT scan images and aseptic loosening of the implant has been confirmed in in-vitro studies [29]. In this study, the loads applied by the humeral head to the glenoid component were reproduced on 6 prosthetic glenoids implanted in cadaveric bone. The evolution of loosening was evaluated by iterative CT scans. Later, the implants were cut, and CT scan images were compared with the analysis of the fragments under an optical microscope. This comparison showed that the radiolucent lines matched a loosening of the implant undergoing eccentric mechanical stress (distraction and compression loads), showing that the loosening progressed from the periphery of the implant towards the centre of the fixation. The involved interface develops first between the implant and the cement, and only later between the cement and the bone. This last interface leads to the complete loosening of the implant [30].
If the assessment of a radiolucent line on plain radiographs is not strongly conclusive [26], and its evolution is hardly predictable, the detection of radiolucent lines on CT scan images is associated with loosening of the implant: partial loosening if the line is restricted to a limited area of the implant, and complete loosening when the radiolucent line surrounds the implant [29].
Radiolucent Line, Osteolysis and Clinical Relevance
Even if these radiolucent lines and radiological osteolysis match the loosening of the implant, they do not always lead to functional and clinical loss of shoulder function. According to Torchia et al. [31], the finding of osteolysis or of a complete radiolucent line surrounding the implant and wider than 1.5 mm leads to clinical pain felt by the patient. In many other studies, the clinical restriction of movement was limited compared with the strong radiological loosening signs [32,33]. This mismatch between the results of the different studies is due to the fact that the study of loosening was based on the analysis of plain radiographs rather than CT scans. As previously noted, the radiolucent lines observed on plain radiographs are not reliable for the assessment of loosening [26,27]. In 2006, Zilber et al. [34] introduced the concept of the "floating glenoid" after studying the long-term results (15 to 21 years) of a TSA sample; it designates a glenoid surrounded by osteolysis without any functional limitation.
According to Gregory et al. [28], the functional limitation (excluding shoulder rotator cuff injuries and/or trauma-induced loosening) might be due to the expansion of the osteolysis to the cortical bone with its lysis, inducing destabilisation of the implant and thus provoking pain.
Polyethylene Deterioration, Pace of Deterioration, Polyethylene Granuloma
A CT scan study of 68 TSA [28] showed osteolysis in nearly all the subjects with a follow-up of over 40 months (24 subjects out of 27). There is, to date, no consensus on the significance of these images. Wirth et al. [25] performed a histological analysis of the membrane surrounding three TSA retrieved because of aseptic loosening (with major osteolysis on the follow-up X-rays). They found in each subject the same polyethylene granuloma responsible for the aseptic loosening of total hip arthroplasties. The difference lay in the shape of the particles (less spherical, more fibrillar). Other authors performed PET CT to assess the biological activity of these images of osteolysis. They found an intense reaction around the implant, where the osteolysis could be seen on the CT scan. This may correspond to the inflammatory reaction of the polyethylene granuloma [35].
In another study, the pace of deterioration of the polyethylene was studied using an in vivo CT scan method [36]. Neer 2 (Smith and Nephew) implants were assessed. The rate of deterioration of the polyethylene was estimated to be 0.38 mm per year (for a 4 mm thick implant). Even though the shoulder is not a weight-bearing joint, this rate of deterioration is close to those found for total hip arthroplasties (0.1 to 0.4 mm per year [37,38]). Given the limited PE thickness of glenoid implants (4 to 5 mm), these results might explain why the lifetime of these implants rarely exceeds 10 years. The mechanisms responsible for polyethylene deterioration are therefore especially relevant.
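As a rough check of this reading, the following back-of-the-envelope calculation divides the quoted implant thickness by the quoted wear rate; the numbers come from the text above and the calculation is illustrative only.

# Time for wear to consume the full polyethylene thickness at the quoted rate.
thickness_mm = 4.0                 # typical glenoid PE thickness (4-5 mm)
wear_rate_mm_per_year = 0.38       # in vivo estimate quoted above

years = thickness_mm / wear_rate_mm_per_year
print(f"~{years:.1f} years")       # ~10.5 years, consistent with implant
                                   # lifetimes rarely exceeding 10 years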
CONCLUSION
The results of this review suggest that, even though imaging is already strongly established in preoperative and postoperative use, many applications are yet to be developed and disseminated. Research efforts should be directed at its promising use for highly patient-specific materials and techniques based on preoperative CT scans. Moreover, we recommend employing CT scans for the long-term follow-up, specifically to monitor aseptic loosening, as this has proven to be more reliable than plain radiographs alone.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
This article does not contain any studies with human participants or animals performed by any of the authors.
HUMAN AND ANIMAL RIGHTS
No animals/humans were used for studies that are the basis of this research.
CONSENT FOR PUBLICATION
Not applicable. | 2017-10-20T21:08:44.585Z | 2017-09-30T00:00:00.000 | {
"year": 2017,
"sha1": "1efbd30f760e04ab65c372b32e9d312b4eb52278",
"oa_license": "CCBY",
"oa_url": "https://openorthopaedicsjournal.com/VOLUME/11/PAGE/1126/PDF/",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1efbd30f760e04ab65c372b32e9d312b4eb52278",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54677985 | pes2o/s2orc | v3-fos-license | Large-scale coordinated observations of Pc 5 pulsation events
HF (high-frequency) radars belonging to SuperDARN (Super Dual Auroral Radar Network) receive backscatter over substantial fields of view which, when combined, allow for simultaneous returns over extensive regions of the polar caps and midlatitudes. This makes them ideal instruments for the observation of pulsations in the Pc5 (1–5 mHz) frequency band. Relatively few pulsation events observed by multiple radars have been reported in the literature. Here we describe observations of three such events which extend over more than 120° of magnetic longitude in the Northern Hemisphere, one of which is also detected in the Southern Hemisphere. All three events show characteristics of field line resonances. In one case the pulsation has also been observed by magnetometers under or near the radar fields of view. The extensive longitudinal coverage allows accurate determination of azimuthal wave numbers. These are at the upper end of the lower values associated with external sources such as those in the solar wind. Such sources imply antisunward flow. However, the azimuthal wave number is negative, implying westward propagation at magnetic local times on both sides of noon, as would be expected from drift-bounce resonance with positive particles. Quiet conditions and a very low ring current during the events argue against this. The identification of the source of pulsations from a number of different mechanisms remains a problem of interest.
Introduction
Ultralow-frequency (ULF) pulsations arising from field line resonances, in particular those in the Pc5 (1–5 mHz) band, are global magnetohydrodynamic (MHD) events in the magnetosphere, which may extend over several hours of local time and be observed at magnetically conjugate locations. HF (high-frequency) radars are useful observing instruments because of their extensive fields of view and spatial resolution. They are, however, limited to periods when radar backscatter exists. Magnetometers, on the other hand, have poorer spatial resolution because the pulsation signal arises from the effects of the ionospheric currents associated with the pulsation, and this is integrated over a transverse ionospheric region of dimensions comparable with the height of the ionosphere. Ideally, to get as complete a picture as possible, data from many instruments must be combined over as large a global range as available. The first pulsation study using an extensive magnetometer array was an investigation of a giant pulsation by Glassmeier (1980). Only a few pulsation events using multiple radars and coordinated magnetometers simultaneously observing a pulsation over an extensive range of longitude have been presented and analysed in the literature (e.g. Samson et al., 1991; Fenrich et al., 1995; Ziesolleck et al., 1998; James et al., 2013; Bland et al., 2014). Spacecraft observations over considerable spatial ranges, such as those by Agatipov et al. (2009), Balasis et al. (2012) and Balasis et al. (2015), have been rare.
Recently, Mtumela et al. (2015; hereafter Paper 1) reported a detailed case study of a magnetospheric field line resonance with a frequency in the Pc5 range using radars from SuperDARN (Super Dual Auroral Radar Network; Greenwald et al., 1995; Chisham et al., 2007) and the CARISMA (Canadian Array for Realtime Investigations of Magnetic Activity; Mann et al., 2008), Greenland (DTU Space, 2015), and IMAGE (International Monitor for Auroral Geomagnetic Effects; Tanskanen, 2009) magnetometer arrays. The global coverage of this event was substantially greater than in previous such studies (Fenrich et al., 1995; Ziesolleck et al., 1998). It extended over several hours of local time in the Northern Hemisphere. In this paper we extend the work of Paper 1 to include additional events with similar global coverage. In one case the study includes conjugate behaviour. We also consider the nature of the sources of such events and suggest a possible source for future investigation.
Pc5 field line resonances occur as the result of a compressional MHD oscillation exciting the magnetic shell with the same frequency into toroidal oscillation. When observed in the meridian plane the oscillation appears to be standing. There is generally phase advance in the azimuthal direction, which is characterized by the azimuthal wave number m. Various sources have been identified, such as Kelvin-Helmholtz oscillations at the magnetopause (Southwood, 1968), excitation of cavity or waveguide modes (Kivelson and Southwood, 1985; Samson et al., 1992), direct control by oscillations in the solar wind (Stephenson and Walker, 2002), and more recently fluctuating field-aligned currents in the auroral zone (Pilipenko et al., 2016). Pulsations can also be generated by drift-bounce resonance with ring current particles (Dungey, 1965). These have shorter azimuthal wavelengths.
We review a number of mechanisms that have been proposed as the source of Pc5 pulsations. Our data allow us to eliminate many of these. The magnetic conditions effectively rule out drift-bounce particle resonance. Only studies such as this one, with observations over a number of hours of local time, can determine the azimuthal wave number m with reasonable confidence. The results imply that the source is located in the magnetotail and generates a fast wave that can excite the magnetospheric waveguide and hence trigger the field line resonances.
SuperDARN is well suited to the determination of the spatial characteristics of ULF waves. In this paper three events, which were all unambiguously resolved by SuperDARN data as toroidal resonances, are presented. Each event analysis utilizes up to three radars in the network, which allows the pulsation to be monitored over a significant azimuthal extent of more than 40% of the Earth's circumference where it occurred. This in itself is a unique observation.
Identifying the energy source of Pc5 pulsations is an important and open question in space physics. The nature of the magnetosphere makes it a complex problem. However, SuperDARN has the ability to determine spectral information, azimuthal wave number, phase and group velocity, and polarization properties of the resonance. These parameters are essential clues to the generation mechanism, and their determination in these events presented an unusual scenario of toroidal resonances with sunward phase velocities during extremely magnetically quiet conditions. This effectively rules out a number of popular candidates for the generation mechanism, as discussed in the section "Source mechanisms". A different energy source is proposed here.
Instrumentation
Data from four SuperDARN HF (high-frequency) radars, located at Goose Bay (GBR), Saskatoon (SAS), Þykkvibaer (PYK) and Sanae (SAN), were used in the analysis of three events. Table 1 gives the times, frequencies and locations of the three events analysed. In addition, data from the magnetometer arrays Greenland, CARISMA and IMAGE were examined to verify the existence of pulsation data. Figure 1 shows maps in geographical coordinates of the northern and southern polar regions (inset), outlining the fields of view of the radars and the locations of the magnetometer stations used for events 2 and 3. The corresponding map for event 1 is shown in Paper 1. The solid lines indicate the AACGM (Altitude Adjusted Corrected Geomagnetic Coordinates) magnetic latitudes (Baker and Wing, 1989; Shepherd, 2014).
There were good data for the Northern Hemisphere radars. In the south, Sanae observed no scatter for events 1 and 2. Sanae data for event 3 were analysable but sparse. We also examined other Southern Hemisphere radars in the same magnetic longitude sector. For events 1 and 3 Halley showed some activity in the region of interest, but the time series were not sufficiently continuous to analyse. For event 2 Halley showed no scatter in the region of interest. The only other relevant station, Syowa South, was not active at these times.
The radars of the system use an electronically phased antenna array to sweep the beam through successive beam positions with an azimuthal separation of 3.24°. For each beam position, returns are obtained from up to 75 ranges separated by 45 km. In full scan mode a radar runs through a 16-beam scan with a dwell time of between 3 and 7 s (depending on the radar), which gives a full scan covering 52° in azimuth once every 1 or 2 min.
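The scan parameters above fully determine the coverage and cadence; the short sketch below simply recomputes them and is illustrative rather than part of the radar software.

N_BEAMS = 16
BEAM_SEPARATION_DEG = 3.24
N_RANGE_GATES = 75
GATE_LENGTH_KM = 45

azimuth_coverage_deg = N_BEAMS * BEAM_SEPARATION_DEG   # 51.84 deg, i.e. ~52 deg
range_coverage_km = N_RANGE_GATES * GATE_LENGTH_KM     # 3375 km of range coverage
for dwell_s in (3, 7):
    scan_time_s = N_BEAMS * dwell_s                    # 48 s to 112 s per scan,
    print(dwell_s, "s dwell ->", scan_time_s, "s per full scan")  # roughly 1-2 min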
The radars are sensitive to backscatter from field-aligned irregularities in the ionosphere. The nature of the backscattered signal and how it is measured are described by Greenwald et al. (1995). For such targets, the radars measure backscattered power, line-of-sight Doppler velocity and spectral width in the ionosphere in the cells mapped out by the scan. The Doppler velocity in each cell represents the component, parallel to the beam, of the velocity of the irregularities, which experience an E × B drift. Such line-of-sight Doppler velocities can be used to measure ULF oscillations in F region plasma flow associated with Pc5 field line resonance (see references in Paper 1).
The radars are also sensitive to ground scatter. Ponomarenko et al. (2003) have shown that ground scatter can also be used very effectively to observe ULF pulsations. Ground scatter can be identified by zero or very small line-of-sight Doppler velocities and very small spectral width. If the ray path of the radar signal is stationary, these quantities are zero. If, however, the ionosphere moves so that the path length changes (Poole et al., 1988), small (< 10 m s⁻¹) Doppler velocities may be observed in the ground scatter. There are several mechanisms giving rise to this shift. The two most significant are (i) a change in refractive index through compression of the plasma, leading to changes in delay time and hence range, and (ii) the vertical motion of the plasma as it is forced by the pulsation field to move up and down the magnetic field lines. When pulsations are observed in ground scatter, the measurements are not so directly related to the velocity perturbations associated with the plasma. The changes in radar range are quite small. As a result, the magnitude of the oscillation is typically only a few metres per second, compared with a few hundred metres per second when the convection velocity is observed in the ionosphere using backscatter.
The ground-based magnetometer arrays cover polar cap, cusp and auroral regions, as shown in Fig. 1. The large latitudinal coverage allows features such as the phase change across resonance and the amplitude peak of the wave to be observed. The longitudinal coverage provides information about the azimuthal wave number m. Each station can be identified using the accompanying key. The Greenland, CARISMA and IMAGE magnetometer data are sampled every 20, 1 and 10 s respectively in geographical (X, Y, Z) coordinates and were rotated into geomagnetic (H, D, Z) coordinates before analysis.
The magnetic latitude ranges within which resonance occurs for the events are ∼71-74° (event 1), ∼74-78° (event 2) and ∼78-82° (event 3). The red curves in the figure indicate these regions for events 2 and 3. Radars and magnetometer stations considered in the study are labelled in Fig. 1 and listed in Table 1. Magnetic local times for observations during event 1 were all on the afternoon side. For event 2 all Saskatoon observations were on the morning side, Goose Bay was near noon and Þykkvibaer near dusk. For event 3 Saskatoon was near noon and Goose Bay on the dusk side. At the time of these events magnetic conditions were quiet. For event 1 Kp was 3, for event 2 it was zero and for event 3 it was 3. For all events the interplanetary magnetic field was northward. Ring current activity was small. During event 1 Dst varied from −3 to −4, during event 2 from −3 to −2 and during event 3 from −4 to zero.

Selection and analysis of data
Selection of events
The method of selecting and analysing data is described in detail in Paper 1. Briefly, events were selected using a pulsation-finder software package developed by Magnus (2010). This scanned data from a particular radar to identify pulsation events where spectral peaks were 3 standard deviations above the ULF noise and where data were available over a large enough time interval and spatial region to be suitable for analysis. In practice the events used in this paper were first identified in the Goose Bay radar data by the pulsation finder. The Goose Bay radar was chosen to identify events because it had the best data returns during the periods scanned. A major objective of this work was to identify events with a large global extent, so the data from adjacent and conjugate radars were searched for useful data, and magnetometer data sets from the neighbourhood of the radars were also examined. While pulsations are frequently observed by individual radars, events for which conditions are optimal at several radars are relatively rare. The three events used in this analysis were identified using this process.
The data for all radar cells and magnetometers used in an event were analysed using the same time interval, as shown in Table 1. The location of the spectral peak had been determined by the pulsation finder, and the bandwidth is recorded in Table 1. Table 2 shows the instruments contributing to each event.
Determination of resonance nature of events
The next stage of the process was to confirm that each event showed characteristics consistent with field line resonance behaviour, namely a peak in the magnitude of the oscillation at a particular magnetic latitude and a decreasing phase as magnetic latitude increased. This was done for beams that were approximately aligned with the magnetic meridian. The theory of field line resonance (Tamao, 1965; Southwood, 1974; Chen and Hasegawa, 1974) shows that if an ideal pulsation is observed at points along a line of magnetic longitude, the toroidal component has a strong peak at the resonance and the phase decreases by 180° across the resonance. The poloidal component has little or no peak and a small phase change. At resonance the toroidal and poloidal components are in phase, so that the polarization is linear. Calculations showing examples of this behaviour are shown by Walker (1980). There is, of course, also a phase dependence on longitude. Early radar observations, using the STARE VHF (Scandinavian Twin Auroral Radar Experiment very high-frequency) radar, obtained full vector information about the convection velocity so that the resonance behaviour could be unambiguously found. SuperDARN pulsation observations usually have the limitation that, because it is rare to have simultaneous observations of a pulsation using crossing beams from two radars, only the line-of-sight velocity component is available. The consequence is that, for observations using ionospheric backscatter, if the beam is exactly aligned with magnetic longitude, the poloidal component of the velocity dominates, so that the peak in the amplitude and the magnitude of the phase change may be missed. If, on the other hand, the beam is aligned with magnetic latitude to see the toroidal component, we do not see the change of amplitude and phase across the resonance. Instead the phase change is associated with the azimuthal propagation.
If ground scatter is used for the observations, then, depending on conditions, transverse oscillations may perturb the ray path length. Poole et al. (1988), Sutcliffe and Poole (1989) and Sutcliffe and Poole (1990) considered various mechanisms for an ionosonde at vertical incidence (the SP model) and showed that the most important mechanism was the compression of the plasma as a result of vertical motion. This modified the refractive index so that the virtual path length changed. Other mechanisms also contributed. Waters et al. (2007) have considered oblique propagation and have shown that the dominant mechanism for pulsations is the vertical E × B drift due to the pulsation field, which affects the path through the SP mechanism. A limitation of such observations is that the perturbation occurs in the ionosphere where the pulsation is located, while the radar measures the range of the ground. The pulsation range is thus uncertain. The assumption that the pulsation is located halfway between the antenna and the ground target is not necessarily true.
For both types of backscatter the best compromise is to choose beams approximately aligned with longitude, so that the phase change is obvious even though the peak may be small. The best beam alignment to observe classic resonance features makes a relatively small angle with the meridian, so that the much larger toroidal component makes a significant contribution while the azimuthal phase change is still small. It is important to note, however, that near the resonance maximum the toroidal and poloidal components are in phase, so that phase measurements are unambiguous.
The resonance behaviour of event 1 was verified in Paper 1. That for events 2 and 3 is shown in Figs. 2 and 3 respectively. In each case the behaviour is consistent with a field line resonance. Figure 2 shows quite a strong resonant peak, implying that a significant component of the toroidal oscillation is included, and the phase change is consistent with resonance. The peak is weaker in Fig. 3, but the features are still consistent with resonance.
Analysis procedure
Data from radar cells for the complete available range of longitudes, in a latitude band on either side of the resonance, were analysed by complex demodulation as described, for example, by Walker et al. (1992). Each time series was Fourier analysed, the negative frequency components were removed and the spectral peak band-pass filtered using the pass band in Table 1. The result was a complex (analytic) time series whose real part is the filtered time series, whose modulus represents the instantaneous amplitude of the signal and whose argument represents the instantaneous phase. The data set in this form is what is used in extracting the results that follow.
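A minimal sketch of this complex-demodulation step is given below (Python), assuming a uniformly sampled time series; the sampling interval, pass band and test signal are illustrative rather than values from any specific beam.

```python
import numpy as np

def complex_demodulate(x, dt, f_lo, f_hi):
    """Band-limited analytic signal of a real, uniformly sampled series x.

    Fourier transform, discard negative frequencies, band-pass the spectral
    peak and invert: the modulus of the result is the instantaneous amplitude
    and its argument the instantaneous phase; the real part is the filtered
    time series.
    """
    n = len(x)
    spectrum = np.fft.fft(x - np.mean(x))
    freqs = np.fft.fftfreq(n, d=dt)
    spectrum[~((freqs >= f_lo) & (freqs <= f_hi))] = 0.0  # keep positive band only
    return 2.0 * np.fft.ifft(spectrum)                    # factor 2 restores amplitude

# Example: a 1.55 mHz oscillation sampled every 60 s for 3 hours
t = np.arange(0.0, 3 * 3600.0, 60.0)
x = 5.0 * np.cos(2 * np.pi * 1.55e-3 * t + 0.3) + 0.5 * np.random.randn(t.size)
z = complex_demodulate(x, 60.0, 1.4e-3, 1.7e-3)
amplitude, phase = np.abs(z), np.angle(z)
```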
Results
Figure 4 shows examples of the analysis for event 2. This is for Goose Bay, beam 9, which lies approximately in the magnetic meridian. The top panel represents the filtered real time series, that is, the real part of the analytic signal calculated as described above. Because the observations are in ground scatter, the range measured is that of the ground. The actual pulsation range is in the ionosphere, roughly halfway between the target and the radar. Similar plots are shown in Paper 1 for event 1 at Goose Bay and in Figs. 5 and 6 for event 3 at Goose Bay and Saskatoon respectively.
The data presentation for Sanae (see Fig. 7) in the conjugate hemisphere is somewhat different. The backscatter in this case is from irregularities in the ionosphere. Although the velocities are small, there is no possibility of ground scatter, because a ray tracing test shows that the range is well within the skip distance. There are three peaks in the spectrum, of which the middle one, centred on 1.55 mHz, is at the frequency of interest. The bottom panel shows stacked plots of the unfiltered data. These events are observed over a very wide range of longitudes (about 150° for events 1 and 2 and about 120° for event 3). They are truly global, extending over more than a third of the Earth's circumference on the afternoon side.
Examples of the power spectra of the H component of the magnetometer data are shown in Figs. 8 and 9. In these figures the shaded regions are the frequency bands in which the Pc5 events are observed by the radars. In Fig. 8 the spectra during event 1 are shown for CONT in the field of view of Saskatoon, SKT and GHB in the field of view of Goose Bay, and HOR in the field of view of Þykkvibaer. In Fig. 9, similar plots are shown for event 2. RANK and UMQ are in the fields of view of Saskatoon and Goose Bay respectively. In some cases there is a well-defined spectral peak in the frequency band; in others it is weak. Because the radar observations are made in the ground scatter, the exact location of the resonance maximum is not precisely known. The stations showing the weakest peaks are probably some distance from the resonance maximum. There is, however, confirmation that the oscillation is also seen in the magnetometer data.
An important feature of the behaviour that contributes to the possibility of determining the source of the event is the sense of the phase propagation in the azimuthal direction. If φ is the magnetic longitude, increasing eastward, and the signal varies as exp{i(ωt − mφ)}, then, if the azimuthal wave number m is positive, the direction of phase advance is eastward, and vice versa.
For each radar used in the study, range gates in each beam that are closest to the estimated range of the resonance maximum have been selected. The phase of the analytic signal at the time of maximum pulsation activity has been plotted against AACGM longitude, and a least-squares straight-line fit to the data has been carried out. Examples of such plots for event 1 are shown in Paper 1. For each of the radars Saskatoon, Goose Bay and Þykkvibaer, the slope of the plot is positive, implying a negative value of m and westward propagation. Furthermore, the three values of m obtained were similar, namely 12 ± 2.1 (Saskatoon), 13 ± 2.9 (Goose Bay) and a somewhat larger value for Þykkvibaer. In Fig. 10 they are plotted on the same axes on the assumption that, if the Saskatoon and Goose Bay fits are extrapolated backwards and forwards, the straight lines will approximately coincide. This allows us to estimate the phase ambiguity by adding a suitable multiple of 2π to one of the stations. As noted in Paper 1, the value of m for Þykkvibaer is larger, which might suggest a change in phase velocity with latitude. Nevertheless, this value has a relatively large error estimate and, in fact, the error estimates of the data for the three stations overlap. We have thus also plotted the Þykkvibaer data on the same plot. We have not in this case derived a mean value for m, as the phase ambiguity for Þykkvibaer is subject to error. The same procedure has been adopted for the other two events. For event 2 we obtain m values of 7 ± 3.2 for Saskatoon, 9 ± 2.3 for Goose Bay and 9 ± 4.3 for Þykkvibaer. The results are shown in Fig. 11a. In this case we are confident enough to fit a straight line to the data from all three radars, getting m = 8.0 ± 0.3 for the combined data set. Similarly, for event 3 the m values for the individual radars were 9 ± 2.4 for Saskatoon and 9 ± 4.3 for Goose Bay. The consolidated plot shown in Fig. 11b gives a value of m = 8.0 ± 0.5. Note that this value from the combined plot is lower than either individual value. The errors on the individual plots are fairly large, but not so large that the relative phase ambiguity could be 2π different.
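The straight-line fit itself is simple; a sketch is given below (Python), assuming demodulated phases in radians at the AACGM longitudes of the selected gates. The longitudes and phases are invented for illustration, and the 2π ambiguity is handled here by unwrapping closely spaced points rather than by the cross-radar matching described above.

```python
import numpy as np

def azimuthal_wavenumber(lon_deg, phase_rad):
    """Least-squares slope of phase versus eastward magnetic longitude.

    For a signal varying as exp{i(omega*t - m*phi)}, the phase at fixed time
    is -m*phi + const, so m is minus the fitted slope. A negative m means
    westward propagation; the text quotes |m| with the direction stated.
    """
    phi_rad = np.radians(lon_deg)
    slope, _ = np.polyfit(phi_rad, np.unwrap(phase_rad), 1)
    return -slope

# Illustrative gates near the resonance maximum (not real data)
lon = np.array([-95.0, -90.0, -85.0, -80.0, -75.0])    # AACGM longitude, deg
phase = np.radians([10.0, 60.0, 110.0, 160.0, 210.0])  # demodulated phase, rad
m = azimuthal_wavenumber(lon, phase)                   # -10 here: westward
```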
The data from Sanae show a spectral peak at the correct frequency for event 3 in the opposite hemisphere. Unfortunately, as can be seen from Fig. 7, there is a fairly large gap in the data, and phase information at the time of the maximum activity is not available. The data are introduced here in order to consider the conjugate behaviour of the pulsation. The nominal latitude in the Southern Hemisphere is significantly different
from that in the north. However, as we have noted above, the Sanae data are from half-hop backscatter, while the Northern Hemisphere data are from one-hop ground scatter. The northern pulsation location is in the ionosphere on the ray path somewhere between the radar and the ground target.
Ray tracing shows that the paths of the ground scatter rays in the Northern Hemisphere pass through the ionosphere at the points conjugate to the Sanae backscatter region. This suggests that we are indeed seeing conjugate pulsations for event 3.
For event 2 in Fig. 11a we have also plotted the phase of the UMQ and RANK magnetometers. Magnetometer pulsations are not well resolved in latitude because of the integrating effect of the ionosphere. These stations are well removed from the resonance, which means that the phase is likely to come largely from the poloidal component, which is close to the resonance phase.
Source mechanisms
ULF pulsations in the Pc5 frequency range have a variety of origins, and the determination of the source mechanism is not trivial. Because the pulsations are global phenomena, a full set of observations covering the whole extent of the phenomenon requires good fortune in finding coincident data sets from a variety of instruments, both in the ionosphere and on the ground, and in spacecraft near the magnetic equatorial plane.
Pc5 pulsations observed on the ground or in the ionosphere are the footprints of the characteristic modes of oscillation of the magnetosphere. Two types of standing wave were first identified by Dungey (1954). In the case of small β, the fast wave becomes an isotropic Alfvén wave. In general, the MHD wave equation, in the complicated geometry of the magnetosphere, is not separable into isotropic and transverse Alfvén modes; these modes are strongly coupled by the geometry. In some circumstances separation is possible. The two cases are (i) m = 0, for which a single L shell can oscillate in the toroidal direction as a transverse Alfvén mode, and (ii) m → ∞, in which a single field line can oscillate incompressibly in the poloidal direction, also as a transverse Alfvén mode. The second of these can be excited by drift-bounce resonance with energetic ring-current particles. The first, in its ideal form, is not easily excited. If, however, m is small, there is weak coupling between the poloidal isotropic Alfvén mode and the toroidal mode if their frequencies are the same. The isotropic wave can propagate energy across field lines, transporting it from a remote source to an L shell whose natural frequency of oscillation matches that of the source, where coupling takes place. This results in the so-called field line resonance.
Consider Fig. 12. It shows the value of the DST (disturbance storm time) index for intervals straddling the three events studied. In each case the magnitude of DST is very small, implying that the ring current is also very small. Indeed, for such values of DST the ring current is essentially absent. For this reason we reject the possibility of a drift-bounce resonance with energetic ring current particles as a source of the observed pulsations. In addition, the pulsations show features consistent with field line resonances. This leads us to conclude that they originate from a remote source and are propagated by a fast wave to the location of the field line resonance. A number of mechanisms have been suggested for exciting such waves. Consider Fig. 13, which is adapted from Fig. 9 of Walker (2002). It shows the equatorial plane of the magnetosphere in a simple model in which the field lines have been straightened. We discuss various mechanisms in the context of this model. Outside the plasmapause, the Alfvén speed decreases with radius. The isotropic Alfvén wave is reflected at the turning point where V_A = ω/√(k_y² + k_z²), with k_y = m/r (Walker, 2002). This radius is shown for various frequencies in the figure for the case where m lies between 1 and 3. The value of k_z corresponds to a field line length of 20 R_E, so that it is small compared to k_y. For larger values of m, the reflection level is at a smaller value of V_A and thus at a slightly larger radius, but the dependence of r on m is quite weak. The resonance level for each frequency is at the radius where V_A = ω/k_z, which lies inside the turning point.
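To make the reflection condition concrete, the sketch below (Python) solves V_A(r) = ω/√(k_y² + k_z²) numerically. The Alfvén-speed profile, its normalization and the search grid are assumed, illustrative choices, not values taken from the paper; k_z is crudely estimated from the quoted field line length.

```python
import numpy as np

R_E = 6.371e6                                   # Earth radius, m

def v_alfven(r):
    # Assumed toy profile decreasing with radius outside the plasmapause:
    # ~1000 km/s at r = 5 R_E, falling off as 1/r.
    return 1.0e6 * (5.0 * R_E / r)

def turning_radius(f_mhz, m, field_line_length=20.0 * R_E):
    omega = 2.0 * np.pi * f_mhz * 1.0e-3
    k_z = np.pi / field_line_length             # crude fundamental-mode estimate
    r = np.linspace(4.0 * R_E, 18.0 * R_E, 4000)
    k_y = m / r
    mismatch = np.abs(v_alfven(r) - omega / np.sqrt(k_y**2 + k_z**2))
    return r[np.argmin(mismatch)] / R_E         # turning-point radius, in R_E

for f in (1.3, 1.9, 2.6, 3.3):                  # the frequencies of Fig. 13
    print(f, "mHz ->", round(turning_radius(f, m=2), 1), "R_E")
```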
For each frequency the isotropic Alfvén wave can propagate outside the turning point and is evanescent inside it. We see that, for these conditions, the reflection level for the lowest frequency (1.3 mHz) intersects the magnetopause in the noon sector, and the region of evanescence extends to the magnetopause in this case. For the higher frequencies the region just inside the magnetopause allows propagation of the isotropic Alfvén wave.
We now consider a number of mechanisms that have been proposed as the source of the compressional wave that transfers energy to the field line resonance.
The earliest of these was the Kelvin-Helmholtz instability (Southwood, 1968; Walker, 1981). This is an instability of the fast wave as a result of the shear across the magnetopause. A small perturbation of the boundary grows if the shear is large enough, provided that the boundary conditions support a surface wave. This requires the wave to decrease exponentially on either side of the boundary, confining the energy. The energy can leak to the resonance by evanescent barrier penetration. The rest frame of the wave is that for which the momentum densities of the plasma on either side of the boundary are equal and opposite. In the rest frame of the magnetosphere, the magnetosheath speed is much larger than the convection speed, and thus, when the wave is observed from this frame, its phase motion is antisunward. As we see from Fig. 13, the instability tends to occur only for the lowest frequencies, near the front of the magnetosphere. Since our observations show sunward phase motion on the flanks of the magnetosphere, we reject the Kelvin-Helmholtz instability on the magnetopause as a source.
Several other mechanisms depend on the model proposed by Kivelson and Southwood (1985). They noted that the magnetopause and the turning point level formed the boundaries of a cavity with natural frequencies of oscillation. These modes could be excited by a suitable disturbance. The picture was modified by Samson et al. (1992), who pointed out that the cavity behaved more like a waveguide. These authors envisaged the cavity being excited by a broadband impulse originating from the solar wind. Other ways of exciting the cavity include the mechanism of Mann et al. (1999), who showed how the overreflection mechanism could extract energy from the shear motion at the boundary. Observations of some events show correlation with coherent waves in the solar wind (Kepko et al., 2002; Stephenson and Walker, 2002). These can excite the cavity modes (Walker, 2002) and hence excite the field line resonance.
All these mechanisms excite cavity waves with phase motion in the antisunward direction and therefore cannot explain our observations. We must seek an isotropic Alfvén wave source elsewhere.
In Fig. 13 the whole magnetotail region allows propagation of the compressible magnetohydrodynamic wave. In the plasma sheet region, where the plasma pressure is significant, this is the fast magnetosonic wave. As we move towards the Earth the magnetic field pressure becomes larger and the fast wave approximates the isotropic Alfvén wave. A source of such waves in the tail would provide waves propagated sunwards towards the waveguide, as indicated by the large grey arrows in the figure. In general, there would be a mismatch between this wave and a wave propagated in the waveguide. If, however, the spectrum of the signal contained frequencies matching the frequencies of the waveguide modes, these modes would be excited and would be propagated sunwards in the waveguide, exciting field line resonances of the type observed here. Mathews et al. (2004) and Eriksson et al. (2008) have also observed field line resonances with sunward phase propagation and suggested that they were generated by earthward flows in the magnetotail. Early radar observations of such field line resonances by Nielsen (1979), using the STARE VHF radar, associated them with large disturbances of the drift velocity on the afternoon side of the magnetosphere, and suggested that these disturbances were the footprint of a flux transfer event.
These events occurred during very quiet magnetic conditions. As we have seen, the DST index was essentially zero. The magnetic field in the solar wind, as observed by ACE (Advanced Composition Explorer), was about 3 nT or less, with B_z less than 2 nT and northward. The SuperDARN convection maps show a simple two-cell convection pattern. We have been unable to find any data that would help to identify a specific source in the tail: neither Geotail nor the Cluster group was favourably placed at these times, and no SuperDARN radars were suitably located. We can, however, consider the conditions in order to suggest a possible source.
At quiet times such as these, with northward interplanetary B_z, the magnetotail is not static. There are numerous observations of so-called "Tail Reconnection during IMF-Northward Non-substorm Intervals" (TRINNI) events (Grocott et al., 2005, and references therein) involving local reconnection and dipolarization of the tail magnetic field. The precise behaviour is influenced by B_y (Grocott et al., 2004).
We suggest that, under these magnetic conditions, there are likely to be TRINNI reconnection events in the tail. If so, they would form a strong source of fast MHD waves propagated sunwards and exciting the waveguide as described above. Field line resonances would then be excited by the usual mechanism of leakage from the waveguide. An investigation of this hypothesis is being undertaken.
Discussion and conclusions
In this paper we have presented observations of several long-period events over an unusually large range of longitudes. Their properties can be briefly summarized as follows:

- Their frequencies are in the Pc5 range (1.6-3.3 mHz), with the lowest frequency on the boundary between the Pc5 and Pc6 bands.
- Their longitudinal extent is more than 120° on the afternoon/dusk side.
- In one case there is an oscillation at the same frequency observed in the other hemisphere.
- Their azimuthal wave numbers (7-12) are well below m = 17, the upper limit for a classic field line resonance.
- The sense of azimuthal phase propagation is westward.
Pc5 disturbances can be quite large, with drift velocity amplitudes of several hundred metres per second, or even exceeding a thousand. For these events it has not been possible to estimate the amplitude, because the bulk of the observations are derived from ground scatter. In such cases the radars do not measure the drift velocity, but the effective rate of change of the HF path length resulting from the refractive index changes caused by compression driven by the pulsation fields. It should be noted that the very small observed Doppler velocities do not imply small drift velocities.
These events appear to be truly global, extending over an azimuthal range of more than one third of the Earth's circumference in the Northern Hemisphere. The nature of the conjugate behaviour is not well determined. Because the Northern Hemisphere observations are derived from ground scatter, the exact location of the footprint of the pulsation is uncertain. The Southern Hemisphere data are sparse and only exist for one radar and one event, where the observations come from ionospheric backscatter and determine a latitude range within which the pulsation footprint is located. The ionospheric part of the ray path of the ground-scattered signal in the northern conjugate region includes this latitude range. This is indicative of conjugate behaviour of the pulsation but does not establish it definitively.
The source of the pulsations in these events has been discussed in detail. Particle drift-bounce resonance can be ruled out.
Figure 2. Latitude profile of amplitude and phase along beam 2 (beam aligned closely with the magnetic meridian) of Þykkvibaer for event 2.
Figure 3. Latitude profile of amplitude and phase along beam 7 (beam aligned closely with the magnetic meridian) of Saskatoon for event 3.
Figure 4. Event 2: (a) the top panel shows Pc5 band-filtered Doppler velocity data from the Goose Bay radar. The lower panel contains the corresponding power spectrum together with the significance limit level from the filtered data. The peak detector recorded the peak in frequency band 3.2-3.5 mHz as significant. (b) A time-range summary plot of Doppler velocity in metres per second measured by beam 9 of the Goose Bay radar, showing range and AACGM latitude.
Figure 5. Event 3: (a) the top panel shows Pc5 band-filtered Doppler velocity data from the Goose Bay radar. The lower panel contains the corresponding power spectrum together with the significance limit level from the filtered data. The peak detector recorded the peak in frequency band 1.4-1.7 mHz as significant. (b) A time-range summary plot of Doppler velocity in metres per second measured by beam 10 of the Goose Bay radar, showing range and AACGM latitude.
Figure 6. Event 3: (a) the top panel shows Pc5 band-filtered Doppler velocity data from the Saskatoon radar. The lower panel contains the corresponding power spectrum together with the significance limit level from the filtered data. The peak detector recorded the peak in frequency band 1.4-1.7 mHz as significant. (b) A time-range summary plot of Doppler velocity in metres per second measured by beam 7 of the Saskatoon radar, showing range and AACGM latitude.
Figure 7. Event 3: (a) the top panel shows Pc5 band-filtered Doppler velocity data from the Sanae radar. (b) The middle panel contains the corresponding power spectrum together with the significance limit level from the filtered data. The peak detector recorded the peak in frequency band 1.4-1.7 mHz as significant. (c) The bottom panel is a stack plot of the velocity data in the range gates showing the strongest activity.
Figure 8. Event 1: the spectral power for the magnetometer stations (CONT, SKT, GHB and HOR). They are plotted so that top to bottom is west to east. (Adapted from Mtumela et al., 2015.)

Figure 9. Event 2: the corresponding spectral power plots for the magnetometer stations RANK and UMQ.
Figure 10. Event 1: consolidated plot of phase as a function of AACGM longitude across the fields of view of the Saskatoon, Goose Bay and Þykkvibaer radars.
Figure 11. (a) Event 2: plot of phase versus longitude measured at 17:00 UT for the 3.35 mHz frequency. The slope of this relation yields the azimuthal wave number m: 7 ± 3.2 for Saskatoon (red), 9 ± 2.3 for Goose Bay (cyan) and 9 ± 4.3 for Þykkvibaer (black), corresponding to westward phase velocities at the ionosphere. The combined m value for the three radars is ∼8 ± 0.3 (purple). The phases of the resonance from the magnetometers UMQ and RANK were determined and plotted (green: UMQ; orange: RANK). (b) Event 3: plot of phase versus longitude measured at 19:00 UT for the 1.55 mHz frequency. The slope yields the azimuthal wave number m: 9 ± 2.4 for Saskatoon (red) and 9 ± 4.3 for Goose Bay (cyan), corresponding to a westward phase velocity at the ionosphere. The combined m value for the two radars is ∼8 ± 0.5 (purple).
Figure 12. DST during the periods of interest. The labelled, unshaded portions represent the three events studied.
Figure 13. Magnetospheric equatorial plane showing turning points for the fast wave at several frequencies. The large grey arrows indicate the direction of a fast wave propagated from the tail into the waveguide. (Adapted from Walker, 2002.)
Table 1. Events used in the study. The latitude range is the nominal target range for Northern Hemisphere events, which were seen in ground scatter. Their pulsation location is at a lower latitude.
Table 2. Instruments contributing to each event. See Fig. 1 for map and key to station codes. | 2018-12-11T12:57:54.348Z | 2016-09-29T00:00:00.000 | {
"year": 2016,
"sha1": "f89ef0c8cc873bf2f050da4b366e6b7771814c51",
"oa_license": "CCBY",
"oa_url": "https://www.ann-geophys.net/34/857/2016/angeo-34-857-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f89ef0c8cc873bf2f050da4b366e6b7771814c51",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Geology"
]
} |
44140814 | pes2o/s2orc | v3-fos-license | Association between IL-6 production in synovial explants from rheumatoid arthritis patients and clinical and imaging response to biologic treatment: A pilot study
Introduction The need for biomarkers which can predict disease course and treatment response in rheumatoid arthritis (RA) is evident. We explored whether clinical and imaging responses to biologic disease modifying anti-rheumatic drug (bDMARD) treatment were associated with the individual's mediator production in explants obtained at baseline. Methods RA patients were evaluated by the disease activity score 28 joints C-reactive protein (DAS28-CRP), colour Doppler ultrasound (CDUS) and 3 Tesla RA magnetic resonance imaging scores (RAMRIS). Explants were established from synovectomies from a needle arthroscopic procedure prior to initiation of the bDMARD. Explants were incubated with the bDMARD in question, and the productions of interleukin-6 (IL-6), monocyte chemo-attractant protein-1 (MCP-1) and macrophage inflammatory protein-1-beta (MIP-1b) were measured by multiplex immunoassays. The changes in clinical and imaging variables following a minimum of 3 months of bDMARD treatment were compared to the baseline explant results. Mixed models and Spearman's rank correlations were performed. P-values below 0.05 were considered statistically significant. Results 16 patients were included. IL-6 production in bDMARD-treated explants was significantly higher among clinical non-responders compared to responders (P = 0.04), and a lack of suppression of IL-6 by the bDMARDs correlated with a high DAS-28 (ρ = 0.57, P = 0.03), CDUS (ρ = 0.53, P = 0.04) and bone marrow oedema (ρ = 0.56, P = 0.03) at follow-up. No clinical association was found with explant MCP-1 production. MIP-1b could not be assessed due to a large number of samples below the detection limit. Conclusions Synovial explants appear to deliver a disease-relevant output; testing carried out in advance of bDMARD treatment can potentially pave the road for a more patient-tailored treatment approach with better treatment effects.
Introduction
Predicting response to treatment and achieving disease control without progressive joint destruction are among the greatest challenges in rheumatoid arthritis (RA). Joint destruction is driven by an inflammatory process encompassing numerous cell types and leading to cartilage and bone damage by release of metalloproteases, as well as an activation of chondrocytes and osteoclasts [1,2]. With biologic disease modifying anti-rheumatic drugs (bDMARDs) emerging as a treatment option more than 20 years ago, a paradigm shift happened in RA treatment. However, it has become clear that only around 15 percent of RA patients achieve disease remission with bDMARDs [3][4][5][6][7]. Drug adherence is also limited, considering that potentially life-long treatment is required [3]. Switching to another bDMARD due to adverse events or treatment failure is common, and the choice of both first and second bDMARD is ruled by tradition rather than well-defined guidelines [8]. The increasing number of bDMARD options, and the unmet treatment challenges, warrant methods for testing drug efficacy at patient level. Studies have reported that changes in inflammatory markers such as interleukin 6 (IL-6) are associated with the clinical response to treatment [9,10]. However, baseline levels of biomarkers which can be used for screening of RA patients with regard to the choice of bDMARD have not yet been presented.
A possible approach to a patient-tailored treatment strategy could be explored using cultures of synovial tissue. Previous studies on explants obtained from RA patients undergoing arthroplasty have demonstrated the cultures' capacity to produce key inflammatory mediators involved in RA pathology. The production of these mediators can be modulated by addition of different bDMARDs or other immuno-modulatory compounds [11][12][13][14][15]. We recently demonstrated that synovial explants produce IL-6, monocyte chemo-attractant protein 1 (MCP-1) and macrophage inflammatory protein 1 beta (MIP-1b), and that this production was associated with colour Doppler ultrasound (CDUS) activity, magnetic resonance imaging (MRI) findings of synovitis, bone marrow oedema (BME) and erosions using the RA MRI score (RAMRIS), and the disease activity score 28 joints C-reactive protein (DAS-28) [16].
The aim of this study was to explore whether in vitro effects of a bDMARD on the individual's baseline RA synovial explants were associated with the in vivo treatment response to the same bDMARD, both clinically and by imaging.
Patients
The study period took place between May 2010 and October 2013. Study participants (N = 20) were recruited from a larger cohort of RA patients [16]. Inclusion criteria were as previously described: RA patients opted for bDMARD therapy, with active arthritis involving hand joints as defined by synovial hypertrophy on ultrasound. Baseline and follow-up evaluation included DAS28-CRP, CDUS and 3 Tesla MRI of the joints opted for synovectomy. Within 24 hours after the baseline examination, patients had a synovectomy performed of up to two joints on the same hand.
Patients were retested at follow-up after a minimum of three months of treatment, and the European League Against Rheumatism (EULAR) response was determined [17,18]. Patients were excluded from the present study if a local steroid injection was given in the synovectomised joint during the follow-up period, or if daily steroid consumption exceeded 10 mg. Other reasons for exclusion were skin changes over the target joint, allergy to local anaesthetics, and anti-coagulatory treatment that could not be paused for 48 hours pre-surgery. Patient examination and imaging procedures were performed at the Departments of Rheumatology and Radiology, Bispebjerg & Frederiksberg Hospital, Denmark. The study was approved by the Health Research Ethics Committee of the Capital Region of Denmark (study number H-4-2009-117), and signed informed consent was obtained from each patient.
Procedures
The needle arthroscopic procedure was carried out at the Department of Orthopaedics, Section of Hand Surgery, Gentofte Hospital, approximately 24 hours after recording of baseline data. Briefly, synovectomies were performed using a 1.9 mm Karl-Storz arthroscope with a two-portal technique, ensuring that the surgeon (NS) had full visual control over the anatomical origin of the synovectomy material. Each patient could have up to two joints synovectomised, with up to six joint positions sampled in the wrist and three in the metacarpo-phalangeal (MCP) joint. Mapping of synovial tissue to the corresponding anatomical sites on imaging was secured by the surgeon being guided by the CDUS description. S1 Table offers an overview of the anatomical landmarks of the synovectomy positions in the wrist.
corresponded to a 33% increase in BME. The RAMRIS erosion score ranged from 0-10, each step corresponding to a 10% increment in bone area eroded in the anatomy of interest. The anatomical site of synovectomy was mapped to the same area on the MRI and ultrasound images.
Imaging scores were averaged according to the anatomical location of the synovectomy or, where synovectomy positions had been pooled, across the pooled positions. MRI was performed by an experienced radiologist (MB) who was blinded to all patient characteristics and ultrasound data.
Outcome measures for the bDMARD cohort
Associations of EULAR responses with mean fold changes in synovial mediator production from baseline to follow-up were chosen as the primary outcome. As secondary outcomes, the changes in DAS-28, ultrasound and MRI scores were correlated with the response to bDMARD treatment at joint level. Changes in DAS-28, ultrasound colour Doppler (CDUS) and MRI parameters were defined as the difference from baseline to follow-up.
Synovial explant assay
Synovial explants were established as previously described [16]. In brief, synovial explants were distributed at approximately 2 mg per well in 96-well flat-bottom culture plates containing bovine bone slices. Tissue was incubated for 72 hours at 37 °C with 95% O2 and 5% CO2 in 200 μL RPMI 1640 containing 10% heat-inactivated (HI) foetal bovine serum and 2% HI human serum, penicillin and streptomycin. As depicted in Fig 1, at 72 hours of culture 50% of the medium was replaced and the relevant bDMARD added in triplicate or quadruplicate, depending on the amount of available tissue. Medium was furthermore replaced after 1 week of culture, and finally aspirated after 2 weeks, when the explant culture system was terminated. Supernatants from the 72-hour and 2-week medium replacements were stored at −80 °C until analysed. Commercially available bDMARDs and isotype controls (IgG1 light chains, Sigma) were added at 10 μg/ml and/or 50 μg/ml. Each explant culture setup contained untreated wells for detection of spontaneous cytokine production, bDMARD-treated tissue, and isotype controls.
Changes in cytokine concentration from baseline (72 hours) to two weeks were calculated as a ratio (2 weeks/72 h). Wells with a baseline cytokine production less than 20% of the average cytokine concentration were excluded from further analysis, since it was judged that these wells would not represent overall synovial activity. In the case of sparse tissue, synovectomy material was pooled from neighbouring positions.
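A minimal sketch of this fold-change computation, including the 20% baseline exclusion rule, is given below (Python); the concentrations are invented, illustrative values.

```python
import numpy as np

def fold_changes(c_72h, c_2w, floor_fraction=0.20):
    """Per-well fold change (2 weeks / 72 h) with the baseline exclusion rule.

    Wells whose 72 h concentration is below 20% of the average baseline are
    dropped, since they are judged not to represent overall synovial activity.
    """
    c_72h = np.asarray(c_72h, dtype=float)
    c_2w = np.asarray(c_2w, dtype=float)
    keep = c_72h >= floor_fraction * np.mean(c_72h)
    return c_2w[keep] / c_72h[keep]

# Illustrative IL-6 concentrations (pg/ml) for one explant's replicate wells;
# the third well falls below the 20% floor and is excluded.
baseline = [850.0, 920.0, 40.0, 780.0]
two_week = [2600.0, 3100.0, 200.0, 2450.0]
print(fold_changes(baseline, two_week))
```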
Statistical analysis
Due to the exploratory nature of the study, sample size was based on feasibility. A study population of 20 patients was judged to be sufficient for both practical and ethical reasons.
For imaging outcome measures and EULAR responses, mixed linear models were applied for the statistical tests, since data were clustered within patients, thereby preventing double-counting errors and misleading standard errors. The mixed model analysis, associating clinical response with fold change in explant mediator production, included three pre-specified covariates: baseline mediator levels, EULAR response and type of in vitro intervention. As previously described [16], parsimony in the statistical models for imaging outcome variables was achieved by omitting design variables from the model if no statistical significance was determined (P > 0.1). For model optimization purposes, square root, inverse and logarithmic transformations were applied to achieve an approximately Gaussian distribution of residuals. Clinical outcome measures (DAS-28) were analysed by Spearman's rank correlations, averaging explant activity data from the various joint positions in each particular patient. Since Spearman's rho estimates were considered important to the overall visual data interpretation, Spearman's estimates were calculated in the same way for the imaging data. P-values < 0.05 were considered statistically significant.

Fig 1. Rheumatoid arthritis patients with clinically suspected active arthritis involving the hand joints and opted for bDMARD treatment were evaluated by Doppler ultrasound for study participation. Explants were only included from patients who were initiated on bDMARD treatment. bDMARD = biologic disease modifying anti-rheumatic treatment; CDUS = colour Doppler ultrasound; DAS-28 = disease activity score 28 joints C-reactive protein; MRI = magnetic resonance imaging; T = Tesla; Ω = 26 patients were offered participation, of whom 3 declined and 3 did not fulfil inclusion criteria (no synovial hypertrophy), respectively. * At baseline, MRI was not performed in 5 patients due to logistics (N = 3) and contra-indications (N = 2). ** At follow-up, CDUS was missing in 1 patient due to logistics. *** At follow-up, MRI was only performed in 11 patients due to logistics. https://doi.org/10.1371/journal.pone.0197001.g001
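A sketch of the two analyses described above is given below (Python), assuming a long-format table with one row per explant culture; the file name and column names are hypothetical, not from the study's data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

df = pd.read_csv("explant_outcomes.csv")          # hypothetical input table
df["log_fold_il6"] = np.log10(df["fold_il6"])     # transform toward Gaussian residuals

# Mixed linear model: IL-6 fold change against the three pre-specified
# covariates, with a random intercept per patient to respect the clustering.
model = smf.mixedlm(
    "log_fold_il6 ~ baseline_il6 + eular_response + intervention",
    data=df, groups=df["patient_id"],
)
print(model.fit().summary())

# Clinical outcome: Spearman correlation of patient-averaged explant activity
# with the change in DAS-28.
per_patient = df.groupby("patient_id").agg(
    mean_fold=("fold_il6", "mean"), d_das28=("delta_das28", "first"),
)
rho, p = spearmanr(per_patient["mean_fold"], per_patient["d_das28"])
```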
Clinical outcomes
As depicted in Fig 1, a total of 20 patients were opted for bDMARD treatment. Out of these, 16 were initiated on a bDMARD and included in the statistical analysis. These patients were primarily seropositive women (65%) with high disease activity (median DAS-28 = 5.4 and median CRP = 27 mg/L) and long-standing disease (a median of 8.5 years) (Table 1). At baseline, 15 patients received various DMARDs, mostly methotrexate in monotherapy, while one patient received 5 mg prednisolone as monotherapy. Five of the 16 patients were bDMARD failures (infliximab: N = 4, rituximab: N = 1), and were thus synovectomised during a pause prior to switching to another bDMARD. The other 11 patients were treated de novo with their bDMARD. In 5 patients, 10 mg prednisolone per day was given in combination with the conventional DMARD at baseline. Three patients withdrew from their other anti-rheumatic drug (leflunomide, methotrexate and prednisolone, respectively) and received the bDMARD as monotherapy (etanercept, certolizumab and tocilizumab, respectively) during the study period.
No patient was lost during the study period. Follow-up was after bDMARD treatment for a median of 7.0 months (IQR 6.8 to 11.3 months). The patients showed a median change in DAS28 of −1.7 (IQR: −3.1; 0.3) and a median CRP reduction of 14 mg/L (IQR: −35; 0.0). At follow-up, 1 patient had withdrawn from the bDMARD due to non-response.
Explant cultures
In the 16 patients, 51 joint positions were synovectomised. Due to sparse material in some positions, 40 explant cultures (38 cultures from wrists and two cultures from MCP joints) were established. On average, 95 mg of wet weight tissue was harvested from wrist joints and 56 mg from MCP joints.
All explant cultures exhibited progressive cellular outgrowth throughout the 2-week culture period when examined under a light microscope during harvest of supernatants. A detailed overview of the fold changes in cytokine production, grouped by in vitro treatment and EULAR response, is found in S2 Table. Median fold increases of IL-6 and MCP-1 were generally increasing through the study period. MIP-1b production remained low and was discarded from data analysis, since 42% of samples remained under the assay detection limit at two weeks, in contrast to IL-6 (12%) and MCP-1 (15%). Two patients, one rituximab-treated and one treated with Cimzia, only had data available from wells treated with 50 μg/ml, which were included in the statistical analysis.
Associations of EULAR response to changes in explant mediator production
EULAR good responders had a significantly lower fold change in IL-6 in bDMARD-treated samples in contrast to non-responders (P = 0.04), with a mean fold difference of 3.45 (CL95 = 1.06; 11.25). IL-6 production of bDMARD-cultured samples was significantly lower than in matched isotype controls in samples from good responders (P = 0.01), with a mean decrease of 45% (CL95 = 66%; 14%). The difference in IL-6 production was borderline significant with regard to spontaneous production versus bDMARD treatment (P = 0.06). No significant differences were seen with regard to in vitro effects among moderate responders or non-responders, or between any groups and changes in MCP-1. For model optimization purposes, one data point out of 236 was excluded from the analysis (Cook's distance = 0.6).
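The asymmetric confidence limits arise from fitting on a logarithmic scale and back-transforming. A small sketch (Python) with an assumed coefficient and standard error, chosen only to reproduce the reported interval:

```python
# Back-transforming a log10-scale mixed-model contrast into a fold difference.
beta, se = 0.538, 0.262          # assumed log10 contrast and standard error
fold = 10 ** beta                                          # ~3.45
ci = (10 ** (beta - 1.96 * se), 10 ** (beta + 1.96 * se))  # ~(1.06, 11.25)
```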
Correlation coefficients for spontaneous and bDMARD-treated explants with regard to IL-6 and changes in DAS-28 were significant (P = 0.03 for both), with ρ = 0.56 and ρ = 0.57, respectively. Scatterplots of changes in IL-6 from bDMARD-treated samples and changes in DAS-28, CFmax and BME are given in Fig 3. Changes in MCP-1 production were not correlated to changes in DAS-28 (ρ = 0.37, P = 0.15).
Table 1. Overview of patient demographics at baseline and change at follow-up. Patients were followed for a median time of 10 months (IQR 7 to 11 months).
Imaging and explant data
Of the 40 cultures, 38 explant cultures (15 patients) had matching CDUS data at baseline and follow-up (two cultures from one patient had no CDUS data at follow-up due to logistics). Baseline CDUS activity was a median CFmax of 9% (IQR 0% to 16%), with ranges from no Doppler activity (10 explants) to high activity (3 explants). At follow-up, Doppler activity had decreased in the areas corresponding to approximately half of the explants (18/38). All 10 sites with absence of Doppler activity at baseline remained Doppler negative at follow-up. An increase in CFmax was seen at the remaining 10 sites. A total of 28 cultures (from 11 patients) had MRI data available at both baseline and follow-up. The causes for missing MRI data were logistics at the Department of Radiology in three cases, one incident of claustrophobia, and one patient with a contra-indication to MRI (coronary stent).
24 culture positions showed moderate to severe synovitis. Severe synovitis was the predominant finding accounting for 61% (17/28) of the explant material. Three cultures from the same patient had no MRI signs of synovitis.
The presence of BME was moderate at baseline, with a median score of 1.75 (IQR: 1.0 to 2.75), and all except 2 patients (3 explant cultures) had BME present at baseline. At follow-up, the changes were polarized, with 50% of joint positions having experienced a decrease in BME, whereas another 40% showed an unchanged pathological score or an increase in BME score. On a group basis, a median decrease of −0.25 points (IQR: −1.75 to 0.0) in BME was observed.
All but one patient exhibited erosions at baseline. The extent of erosions was moderate, with a median of 1.75 on the RAMRIS score (IQR: 1.0 to 2.5). A slight increase in eroded bone developed during the study period, with a median RAMRIS erosion score of 2.0 at follow-up (IQR: 1.3 to 2.7). 20 of the 28 explant cultures came from joint positions showing an increase in erosion, whereas only one position showed a decrease in RAMRIS erosion score at follow-up. All changes were moderate, below 5 percent, apart from two explant cultures with increases of 8 and 10% in the corresponding anatomical regions, respectively. A detailed overview of the imaging data is found in Table 2.
Correlations of explant mediator production with MRI
The strongest signals came from changes in bDMARD-treated explants for the increase in IL-6 and MCP-1, which showed a moderate degree of correlation (ρ = 0.56, P = 0.03 and ρ = 0.49, P = 0.01, respectively) with changes in RAMRIS BME. The correlations between RAMRIS and the explants' spontaneous release, as well as the isotype controls', were non-significant for IL-6 and MCP-1. S3-S6 Tables offer details on the mixed model covariate elimination steps.
Mixed model analyses could not be performed for RAMRIS synovitis score or RAMRIS erosion score due to failure of the model control criteria.
Correlations were weak to moderate between changes in the RAMRIS synovitis score and changes in bDMARD-treated explants' production of IL-6 or MCP-1. The highest correlation coefficients were seen for bDMARD-treated samples for changes in IL-6 (ρ = 0.30) and for the spontaneous release of MCP-1 (ρ = 0.17). When explant data from the different joint positions were averaged per patient, all P-values for the correlations using Spearman's rank test were non-significant for both changes in RAMRIS BME and erosion scores (data not shown).
The changes in RAMRIS erosion score did not correlate significantly with any changes in cytokine production.
Discussion
In this study, explants cultured from RA joints obtained prior to in vivo bDMARD therapy demonstrated that the change in IL-6 production significantly corresponded to both the overall clinical and the imaging effect determined following a median of 7 months of treatment. Thus, explants from non-responders based on the EULAR response criteria had a significantly higher IL-6 production in bDMARD-treated samples than samples from EULAR good responders. Furthermore, IL-6 production was significantly lower in bDMARD-treated samples from good responders than in matched isotype controls. In contrast, IL-6 production was not suppressed in samples from moderate responders and non-responders, indicating that the explant model provided disease-relevant information. No significant associations were found between EULAR response or DAS-28 changes and bDMARD effects on explant MCP-1 production.
In recent years, blood-based multi-biomarker tests have shown good correlations with clinical disease activity measures. Furthermore, IL-6 was shown to correlate with DAS-28, tender joint count (TJC) and swollen joint count (SJC) [9,21,22]. Plasma IL-6 levels have been associated with clinical remission in an infliximab-treated RA cohort, which underlines the qualities of IL-6 as a biomarker of disease activity and treatment response [23]. Baseline CRP and DAS-28 did not reveal statistically significant differences according to EULAR response. However, baseline DAS-28 was borderline significantly lower among non-responders compared to good responders (P = 0.07, nonparametric testing), indicating that clinical evaluation is still an important prognostic feature.
Explant production of both IL-6 and MCP-1 correlated with imaging responses at the explant sites following a median of 7 months of bDMARD treatment. This indicates that whole tissue synovial explants may provide valuable information concerning the subsequent bDMARD effect in vivo, both at the local joint and for the overall disease activity.
The explants' mediator release correlated with changes in CDUS and bone marrow oedema, in contrast to MRI-detected synovitis and erosive changes. The RAMRIS synovitis score evaluates synovial volume. The score is thus likely to have a higher degree of bias introduced by the synovectomy, since the synovial volume becomes reduced by this procedure. In contrast, CDUS activity was calculated as the systolic pixel/grey-scale fraction (CFmax), which may not be influenced to the same extent by a decrease in synovial volume. With respect to erosions, these changes are smaller than those of the other imaging parameters, and the lack of significant associations may have been caused by the small sample size and the relatively short follow-up period [24][25][26][27].
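For clarity, CFmax can be thought of as the systolic peak of the per-frame colour fraction over a Doppler cine loop. The sketch below (Python) is purely illustrative; the segmentation and frames are random stand-ins, not the actual image processing used in the study.

```python
import numpy as np

def colour_fraction(colour_mask):
    """Fraction of ROI pixels carrying colour Doppler (flow) signal."""
    return colour_mask.sum() / colour_mask.size

# Stand-in for a segmented cine loop: one boolean colour mask per frame over
# the synovial region of interest; CFmax is the peak over the loop.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) > 0.9 for _ in range(20)]
cf_max = max(colour_fraction(f) for f in frames)
```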
Overall, the synovectomy procedure on a single small joint only introduced a limited bias on DAS-28, which included on average 7 swollen and painful joints at the time of synovectomy. With respect to the Doppler findings of the target joint, changes may have been introduced by the surgical procedure, either causing excess flow from reparative changes in the tissue or the opposite, i.e. reduced flow due to removal of affected synovium.
The imaging data appeared representative of a general RA cohort, since both the baseline values and the correlations of imaging changes with clinical disease-activity outcome measures corresponded to previous observations in RA cohorts [28,29].
Limitations
The vast majority of samples were obtained from wrists, where the synovectomy material came from shavings at the dorsal side. Thus, it cannot be ruled out that synovium from neighbouring sites may have contributed to imaging pathology. Due to logistic limitations it was not possible to standardize the timing of follow-up visits, and therefore the duration of bDMARD therapy varied considerably among patients. The impact of the different intervals was limited by the fact that no patients were switched to another bDMARD therapy between the baseline and follow-up visits.
This pilot study was not designed to identify the optimal in vitro dose of bDMARD and isotype control. The choice of an IgG light chain isotype would not have detected a possible unspecific in vitro Fc-mediated effect of the bDMARDs consisting of whole antibodies. This is, however, not likely to be the case, since the bDMARDs are highly specific for their molecular targets.
Other studies have shown that a dose-dependent suppression of explant culture mediators can be observed using increasing doses of anti-TNFα inhibitors as high as 100 μg/ml [30,31]. The choice of a minimum of 10 μg/ml of bDMARD here seemed appropriate according to previous studies, where concentrations varied between 1 μg/ml and 10 μg/ml [12,[30][31][32][33][34]. Among patients with data from samples treated with both 10 μg/ml and 50 μg/ml, we did not see a clear indication of superior suppression at the higher bDMARD dose. Our in vitro design did not account for in vivo differences in dosage of the various bDMARDs. This could pose a potential bias when translating from in vitro to in vivo response.
Further studies are now needed to identify the optimal in vitro doses of the different bDMARDs.
Although the surgeon had a full visual overview during the arthroscopic procedure and was guided by the ultrasound description, an accurate anatomical match between the site of synovectomy and the imaging data was not possible.
Another study limitation was the heterogeneity regarding previous treatment with bDMARDs. Approximately two thirds of the patients were bDMARD naïve and one third were bDMARD failures opted for treatment switching. However, synovectomy in the latter group was only performed after a pause in treatment, with no trace expected of the former bDMARD. We therefore believe that any bias of treatment response from previous bDMARD exposure is limited.
The small sample number increased the risk of type II errors. Thus, the model could only detect a statistical difference between EULAR non-responders and EULAR good responders, but not differentiate between all three EULAR response types.
Due to financial limitations, it was unfortunately not possible to analyse more mediators.
Strengths
The patients recruited for the study had active systemic disease and were scheduled for bDMARD treatment. This is in contrast to most previous studies of biopsies, which were mainly obtained from end-stage disease and may not be representative of the general RA inflammation. The use of hand joints reduced the risk of bias from concomitant osteoarthritis that might otherwise blur RA-specific inflammatory signals. Explants based on synovectomy products from site-specific areas in small joints by needle arthroscopy ensured an optimal harvest of all relevant synovium. The arthroscopy procedure enabled a full visual overview during the synovectomy and thereby optimal conditions for mapping the synovectomised sites to the corresponding anatomical areas on imaging. Finally, the use of intact tissue cultured on bone, without addition of exogenous immuno-stimulation and enzyme digestion, mimics the in vivo situation as closely as possible [35][36][37][38].
Conclusions
To our knowledge this is the first study investigating the association of short-term changes in synovial mediator production in vitro with long-term clinical outcomes in RA patients treated with bDMARDs. The model suggests that short-term changes in the synovium are associated with the clinical outcome following treatment. The results are encouraging concerning the use of explant models in the ongoing process of clarifying the underlying pathology in RA and identifying future biomarkers that could pave the road for patient-tailored treatment options.
Supporting information
S1 Table. Overview of the anatomical landmarks used for mapping the anatomical origin of the explants with the corresponding anatomical location on imaging. (DOCX)
S2 Table. Overview of IL-6 and MCP-1 explant changes according to the type of in vitro intervention and EULAR response to bDMARD treatment. Changes in explant fold change of IL-6 and MCP-1 during the 2-week culture period, grouped by EULAR DAS-28 response criteria. bDMARD = explants treated with 10 μg/ml biologic DMARD; DMARD = disease-modifying anti-rheumatic drug; IL-6 = interleukin 6; Isotype = matched isotype control, 10 μg/ml; Max = maximum value; MCP-1 = monocyte chemoattractant protein 1; Min = minimum value; Ne = number of explants; Nmis = number of explants missing due to baseline value < 20% of overall average; NP = number of patients; Response = EULAR DAS-28 response criteria; Spontaneous = fold change in untreated explant IL-6 production; STDV = standard deviation. (DOCX)
S3 Table. Fold change in RA explant IL-6 release vs. change in CFmax upon biologic DMARD treatment. Stepwise covariate elimination. This table depicts the statistical associations between CDUS (ΔCFmax) activity and synovial explant mediator fold change (2-week culture concentration divided by the concentration at 72 h of culture) for the spontaneous release of mediators and for mediator release of cultures with bDMARD (10 μg/ml) and isotype control (10 μg/ml). A mixed model was used for the statistical analysis; P<0.05 was considered significant. In the reduced model, covariates were excluded if P>0.10. All four pre-specified covariates tested in the models are shown. RAMRIS = Rheumatoid Arthritis Magnetic Resonance Imaging Score; syno = synovitis; Log10 = base-10 logarithm. (DOCX)
S4 Table. Fold change in RA explant IL-6 release vs. change in RAMRIS BME score upon biologic DMARD treatment. Stepwise covariate elimination. This table depicts the statistical associations between the RAMRIS BME score and synovial explant mediator fold change (2-week culture concentration divided by the concentration at 72 h of culture) for the spontaneous release of mediators and for mediator release of cultures with bDMARD (10 μg/ml) and isotype control (10 μg/ml). A mixed model was used for the statistical analysis; P<0.05 was considered significant. In the reduced model, covariates were excluded if P>0.10. All four pre-specified covariates tested in the models are shown. RAMRIS = Rheumatoid Arthritis Magnetic Resonance Imaging Score; BME = bone marrow oedema score; Log10 = base-10 logarithm. (DOCX)
S5 Table. Fold change in RA explant MCP-1 release vs. change in CFmax upon biologic DMARD treatment. Stepwise covariate elimination. This table depicts the statistical associations between CDUS (ΔCFmax) activity and synovial explant mediator fold change (2-week culture concentration divided by the concentration at 72 h of culture) for the spontaneous release of mediators and for mediator release of cultures with bDMARD (10 μg/ml) and isotype control (10 μg/ml). A mixed model was used for the statistical analysis; P<0.05 was considered significant. In the reduced model, covariates were excluded if P>0.10. All four pre-specified covariates tested in the models are shown. Inv = inverted; Log10 = base-10 logarithm; √ = square root; syno = synovitis. Covariates included in the statistical model: Joint synovectomized = wrist, MCP or PIP; Synovectomy position = ulnar, central, radial, or mixed for pooled synovectomy positions; Side = left or right. bDMARD = biologic disease-modifying anti-rheumatic drugs; CFmax = maximal color fraction; Δ = change in imaging variable after a minimum of three months of treatment with a bDMARD; MCP-1 = monocyte chemoattractant protein 1; MCP = metacarpophalangeal joint; PIP = proximal interphalangeal joint. (DOC)
S6 Table. Fold change in RA explant MCP-1 release vs. change in RAMRIS BME score upon biologic DMARD treatment. Stepwise covariate elimination. This table depicts the statistical associations between the change in RAMRIS BME score in bDMARD-treated RA patients (N = 11, 28 explants) and the change in synovial explant mediator release after 2 weeks of culture. A mixed model was used for the statistical analysis; P<0.05 was considered significant. In the reduced model, covariates were excluded if P>0.10. All four pre-specified covariates tested in the models are shown. bDMARD = biologic disease-modifying anti-rheumatic drugs; RAMRIS BME = Rheumatoid Arthritis Magnetic Resonance Imaging Score for Bone Marrow Oedema; Log10 = base-10 logarithm; √ = square root; Inv = inverted. | 2018-06-05T03:27:48.019Z | 2018-05-22T00:00:00.000 | {
"year": 2018,
"sha1": "0a831419304c514255d984d8d371bba8059338f6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0197001&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a831419304c514255d984d8d371bba8059338f6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3662617 | pes2o/s2orc | v3-fos-license | The Malliavin Derivative and Application to Pricing and Hedging a European Exchange Option
The exchange option was introduced by Margrabe in [1], and its price was explicitly computed therein, albeit with some small variations relative to the models considered here. After that important introduction of an option to exchange one commodity for another, much more work has been devoted to variations of exchange options, with attention focusing mainly on pricing rather than hedging. In this paper, we demonstrate the efficiency of the Malliavin derivative in computing both the price and the hedging portfolio of an exchange option. To that end, we first give a preview of white noise analysis and the theory of distributions.
Introduction
White noise analysis and the theory of distributions are treated extensively in [2-5] and references therein. Applications in the form of the generalized Clark-Haussmann-Ocone (CHO) formula were studied in [6-8] and references therein. The CHO theorem takes advantage of the martingale representation theorem, which expresses every square integrable martingale as the sum of its initial value and an Itô integral of a previsible process. The power of the generalized CHO formula is that one can take advantage of the Malliavin derivative to compute the hedging portfolio. The Malliavin derivative is a better mathematical operation than the delta-hedging approach, whose limitations include a failure to explain differentiation of payoffs which are not differentiable everywhere, or of underlying securities which are not Markovian. Most of the attention in contingent claim analysis is directed at pricing because of its importance to market practitioners; it is in this regard that explicit results on hedging portfolios for different options are not always readily available. In this paper, we present explicit results for both the price and the hedging portfolio of an exchange option written on two underlying securities with independent Brownian motions. The ground-breaking work was done in [1]. The market setup is a complete market setup, to escape the problem of not finding a perfect hedge.
Hedging portfolios are just as important as option prices, in that they give an understanding of how sellers (writers) can dynamically manage a portfolio that replicates the payoff of a contingent claim. The price of the contingent claim at any time equals the intrinsic value of the hedging portfolio at that point. In the case of a European exchange option, the payoff is the difference in terminal values of the underlying securities, conditional on the buyer's terminal asset price \(X_T^1\) being greater than the seller's \(X_T^2\). A more interesting problem would be to look at an American exchange option, where the buyer may exercise on or before maturity; such an exercise time is a stopping time, and the price of such an option is the essential supremum, over all stopping times, of the payoff above. Our attention in this paper is on the European exchange option.
The price of the exchange option will be determined from the CHO formula as the discounted expectation of the payoff \(F\), while the hedging portfolio will be obtained from the integrand in the martingale representation of the payoff. This integrand involves the Malliavin derivative of the payoff and the market price of risk; in the case that the latter is deterministic, it reduces to the discounted conditional expectation of the Malliavin derivative of \(F\) with respect to the filtration.
Preliminaries
The following is a summary of important results from [6] and [7]. One of the weaknesses of the delta-hedging approach is its failure to justify fully the delta \(\Delta_t = \partial F/\partial x\) when \(F\) may not be differentiable; here \(\Delta_t\) represents the number of units of stock to be held at any time \(t\). In general \(F\) is not differentiable everywhere. As a result, white noise theory is used to justify differentiability of \(F\) in the sense of distributions. The differential operator is the Malliavin derivative \(D_t\). This operator is defined on the space of stochastic distributions discussed fully in [6] and summarized below.
Let \(\mathcal{S}(\mathbb{R})\) be the Schwartz space of rapidly decreasing smooth functions and \(\mathcal{S}'(\mathbb{R})\) be its dual, which is the space of tempered distributions. Now, for \(\omega \in \mathcal{S}'(\mathbb{R})\) and \(\phi \in \mathcal{S}(\mathbb{R})\), let \(\langle\omega,\phi\rangle\) denote the action of \(\omega\) on \(\phi\); then, by the Bochner-Minlos theorem, there exists a probability measure \(P\) on \(\mathcal{S}'(\mathbb{R})\) such that
\[\int_{\mathcal{S}'(\mathbb{R})} e^{i\langle\omega,\phi\rangle}\,\mathrm{d}P(\omega) = e^{-\frac{1}{2}\|\phi\|_{L^2(\mathbb{R})}^2}, \qquad \phi \in \mathcal{S}(\mathbb{R}).\]
In this case \(P\) is called the white noise probability measure and \((\mathcal{S}'(\mathbb{R}), \mathcal{B}, P)\) is the white noise probability space. As a result, we shall consider \(\mathcal{S}'(\mathbb{R})\) as the sample space \(\Omega\), so that our asset prices will be defined on the probability space \((\Omega, \mathcal{F}, P)\), where \(\mathcal{F}\) is the family of all Borel subsets of \(\mathcal{S}'(\mathbb{R})\). The construction of a version of Brownian motion is then a direct consequence of the Bochner-Minlos theorem: if \(\tilde{B}(t,\omega) := \langle\omega, \chi_{[0,t]}\rangle\), then \(\tilde{B}(t,\cdot)\) is normal with mean 0 and variance \(t\). One can easily prove that the process \(B(t)\), described in [7] as a continuous modification of the white noise process constructed above, is indeed a standard Brownian motion.
The Brownian motion constructed this way is a distribution, and thus special operations like the Malliavin derivative, defined below, are possible. Note that the Brownian motion is not differentiable in the classical sense, but it is differentiable in the Malliavin sense. The Malliavin derivative is a stochastic version of the directional derivative in classical calculus, with the direction carefully chosen. The following definition is from [7].
Definition 1.1. Suppose \(F:\mathcal{S}'(\mathbb{R})\to\mathbb{R}\) has a directional derivative in all directions \(\gamma\) of the form \(\gamma(t) = \int_0^t g(s)\,\mathrm{d}s\) with \(g \in L^2(\mathbb{R})\), that is,
\[D_\gamma F(\omega) := \lim_{\varepsilon\to 0}\frac{F(\omega + \varepsilon\gamma) - F(\omega)}{\varepsilon}\]
exists. If, moreover, there is \(\psi(t,\omega)\) such that \(D_\gamma F(\omega) = \int_{\mathbb{R}}\psi(t,\omega)\,g(t)\,\mathrm{d}t\) for all such \(g\), then \(F\) is called Malliavin differentiable, with Malliavin derivative \(D_t F(\omega) := \psi(t,\omega)\).
Just like any operation where arguing from first principles is not easy operationally, one can use a series of characterizations of the above definition, including the chain rule, to compute the Malliavin derivative of any differentiable random variable. The set of all Malliavin-differentiable square integrable random variables was given a dedicated notation in [7]. As an illustration, the definition yields \(D_t B(T) = \chi_{[0,T]}(t)\), and the chain rule then gives \(D_t f(B(T)) = f'(B(T))\,\chi_{[0,T]}(t)\); here and elsewhere \(\chi_I\) denotes the indicator function of the set \(I\). Therefore, classically, one sees that the Malliavin derivative in some sense mimics differentiation in deterministic calculus. This is a big departure from Itô integration, which does not in any way make sense as a derivative in the classical sense. Thus the sample space \(\mathcal{S}'(\mathbb{R})\) is rich enough to accommodate the concepts we require for our calculations.
The paper is organized as follows: the next section gives the general pricing and hedging formulae for general contingent claims, with prices defined on the filtered white noise probability space; the section after that defines our market model; and the final section gives our pricing and hedging results for the European exchange option.
Complete Markets
We consider a market consisting of a risk-free bond \(X_0\) and \(n\) risky securities \(X_1,\dots,X_n\). The bond price is a solution of the deterministic differential equation \(\mathrm{d}X_0(t) = \rho(t)X_0(t)\,\mathrm{d}t\), \(X_0(0) = 1\), while the risky prices satisfy
\[\mathrm{d}X_i(t) = \mu_i(t)X_i(t)\,\mathrm{d}t + X_i(t)\sum_{j=1}^{n}\sigma_{ij}(t)\,\mathrm{d}B_j(t), \qquad i = 1,\dots,n,\]
where \(\sigma = [\sigma_{ij}]\) is the matrix of coefficients of volatility and \(B = (B_1,\dots,B_n)\) is an \(n\)-dimensional Brownian motion; for ease of notation we write \(\sigma_i\) for the \(i\)-th row of \(\sigma\). In all these cases we consider a finite time horizon \(T\), \(\|\cdot\|\) denotes the usual norm in \(L^2\), and throughout the paper we take \(\mathrm{Tr}\) to mean transposition.
An investor who selects a portfolio consisting of the \(n+1\) assets will have to work out the proportions of wealth that he has to invest in each of them. The process \(\theta(t) = (\theta_0(t), \theta_1(t), \dots, \theta_n(t))\) represents the investor's holdings at any time \(t\), where \(\theta_i(t)\) is the number of units of security \(i\) that the investor holds. In what follows we shall refer to the vector of prices \(X(t)\) as the market and to the vector \(\theta(t)\) as the portfolio. The holder of a portfolio may decide to liquidate his position at any time \(t\), and his wealth is the cumulative savings in the bank account plus the trading gains up to and including the date of liquidation. We assume that the portfolio is self-financing, so that the value of this portfolio at time \(t\) is given by
\[V^{\theta}(t) = V^{\theta}(0) + \int_0^t \theta(s)\cdot\mathrm{d}X(s). \tag{2.5}\]
From now on, without loss of generality, we assume constant coefficients, and Equation (2.5) is understood with constant \(\rho\), \(\mu\) and \(\sigma\). The pricing and hedging of a contingent claim with payoff \(F\) rest on a particular version of the Martingale Representation Theorem, which can be found, for example, in [9], applied to the particular square integrable martingale \(M(t) = E_Q\big[e^{-\rho T}F \mid \mathcal{F}_t\big]\). It is this Martingale Representation Theorem which the CHO formula relies on. We state the theorem here without proof and refer the reader to [6] for more details.
Theorem 2.1 (The generalized Clark-Ocone-Haussmann formula). Let \(F\) be a Malliavin-differentiable, square integrable random variable satisfying the integrability conditions of [6]. Then
\[F(\omega) = E_Q[F] + \int_0^T E_Q\Big[\Big(D_t F - F\int_t^T D_t u(s)\,\mathrm{d}\tilde{B}(s)\Big)\,\Big|\,\mathcal{F}_t\Big]\,\mathrm{d}\tilde{B}(t),\]
where \(D_t\) denotes the Malliavin derivative, \(u\) is the market price of risk and \(\tilde{B}\) is the \(Q\)-Brownian motion. In our constant-coefficient setting \(D_t u = 0\), and applying the formula to the discounted payoff \(G = e^{-\rho T}F\) gives
\[G = E_Q[G] + \int_0^T e^{-\rho T}\,E_Q\big[D_t F \mid \mathcal{F}_t\big]\,\mathrm{d}\tilde{B}(t).\]
By uniqueness due to the Martingale Representation Theorem, matching this expansion with the discounted wealth dynamics \(\mathrm{d}\tilde{V}^{\theta}(t) = \sum_i \theta_i(t)\tilde{X}_i(t)\,\sigma_i\cdot\mathrm{d}\tilde{B}(t)\) yields the stock holdings
\[\theta_i(t) = \frac{e^{-\rho(T-t)}}{X_i(t)}\Big(E_Q\big[D_t F \mid \mathcal{F}_t\big]\,\sigma^{-1}\Big)_i, \qquad i = 1,\dots,n, \tag{2.9}\]
where \(D_t F = (D_t^{(1)}F,\dots,D_t^{(n)}F)\) is the row vector of Malliavin derivatives with respect to each driving Brownian motion.
This gives the explicit number of units of each stock; the holding \(\theta_0(t)\) in the bank account can then be found from the self-financing condition. The importance of these results is that in a complete market, every contingent claim with payoff \(F\) is attainable by a portfolio of stocks and bonds. Therefore \(V^{\theta}(0)\), the initial value of a self-financing replicating portfolio, equals the price of such a derivative. It then follows that the time-zero price of such a contingent claim is the discounted expectation of the payoff,
\[p = E_Q\big[e^{-\rho T}F\big]. \tag{2.8}\]
Simplifying (2.8) depends on the nature of the payoff. One may directly compute the expectation on condition that the distribution of \(F\) is known. Sometimes it may be easier to determine the Black-Scholes partial differential equation satisfied by the value function, with corresponding boundary conditions; if such a boundary value problem can be solved explicitly, or through numerical techniques, then the price can be determined either exactly or as a good approximation. Other direct numerical methods of solution, like Monte Carlo simulation, involve simulating the underlying securities themselves, and approximations of the expected values give estimates of (2.8). In this paper, we will find explicit results using some important change of measure transformations, which we prove first.
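As a concrete illustration of the Monte Carlo route just mentioned, here is a minimal sketch, not from the original paper, that estimates (2.8) for the constant-coefficient two-asset model used in the next section; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_price(payoff, x0, sigma, rho, T, n_paths=200_000):
    """Risk-neutral Monte Carlo estimate of e^{-rho T} E_Q[payoff(X_T)].

    x0     : (2,) initial prices
    sigma  : (2, 2) volatility matrix, row i = volatility vector of asset i
    rho    : constant interest rate
    payoff : function of the (n_paths, 2) array of terminal prices
    """
    z = rng.standard_normal((n_paths, 2))               # B~(T)/sqrt(T): independent N(0,1)
    drift = (rho - 0.5 * np.sum(sigma**2, axis=1)) * T  # (rho - ||sigma_i||^2 / 2) T
    xT = x0 * np.exp(drift + np.sqrt(T) * z @ sigma.T)  # lognormal terminal prices under Q
    return np.exp(-rho * T) * payoff(xT).mean()

sigma = np.array([[0.20, 0.10], [0.05, 0.25]])
est = mc_price(lambda x: np.maximum(x[:, 0] - x[:, 1], 0.0),
               x0=np.array([100.0, 95.0]), sigma=sigma, rho=0.03, T=1.0)
print(est)  # should agree, up to sampling error, with the closed form of Proposition 4.1
```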
The Two Dimensional Market Model and Transformation Theorems
We now specialize to a market consisting of one bond and two stocks, \(X^1\) and \(X^2\), driven by two independent Brownian motions as above. The payoff of the exchange option is then \(F = \big(X_T^1 - X_T^2\big)^+\). We want to determine the price and the hedging portfolio of this option by using the generalized CHO formula. We assume in our case that the coefficients are constant.
The Girsanov change of measure for this setup can be easily done by letting \(u = \sigma^{-1}(\mu - \rho\mathbf{1})\). Note that since we have assumed the market is complete, \(\sigma\) is invertible, and with constant coefficients we can easily justify that \(u\) satisfies the Novikov condition, so that the probability measure \(Q\) defined by
\[\frac{\mathrm{d}Q}{\mathrm{d}P} = \exp\Big(-\int_0^T u\cdot\mathrm{d}B(t) - \tfrac{1}{2}\int_0^T \|u\|^2\,\mathrm{d}t\Big)\]
is an equivalent probability measure with respect to which each discounted price process is a martingale. In order to exploit the results from the previous discussion, we note here that this market is a special case of the one considered in the previous section. Therefore, if we choose a self-financing portfolio which is also admissible, then the discounted value of the portfolio is a \(Q\)-martingale, and from the CHO formula, for any contingent claim \(F\), the price and the hedge are given by (2.8) and (2.9).
Transformation Theorems
In order to facilitate our computation, we take advantage of the distribution of the terminal values of the underlying securities. In our case, under \(Q\),
\[X_T^i = X_i(0)\exp\Big(\big(\rho - \tfrac{1}{2}\|\sigma_i\|^2\big)T + \sigma_i\cdot\tilde{B}(T)\Big), \qquad i = 1, 2,\]
where \(\sigma_i = (\sigma_{i1}, \sigma_{i2})\) is the \(i\)-th row of \(\sigma\), so that each normalized exponent \(\sigma_i\cdot\tilde{B}(T)/(\|\sigma_i\|\sqrt{T})\) has mean 0 and variance 1 with respect to \(Q\). Proposition 3.1 records these distributional facts and shows that the normalized variables associated with \(X^1\) and \(X^2\) can be decomposed into independent standard normal components with respect to \(Q\).
Proposition 3.2. Let \(X^1\) and \(X^2\) be as given in Proposition 3.1 and let \(y_1\) and \(y_2\) be real numbers. Then the joint probabilities involving the events \(\{X_T^1 \le y_1\}\) and \(\{X_T^2 \le y_2\}\), and the corresponding expectations, can be re-written, by the notation and independence established in Proposition 3.1, in terms of the standard normal distribution function.
Price and Hedging Portfolio of an Exchange Option
For a fixed time horizon \(T\), with the terminal random variables \(X_T^1\) and \(X_T^2\) distributed as above, we are now ready to give the price and hedging portfolio of the European exchange option.
Proposition 4.1. The price of the European exchange option is given by
\[p = X_1(0)\,\Phi(d_1) - X_2(0)\,\Phi(d_2), \qquad d_{1,2} = \frac{\ln\big(X_1(0)/X_2(0)\big) \pm \tfrac{1}{2}\hat{\sigma}^2 T}{\hat{\sigma}\sqrt{T}}, \qquad \hat{\sigma} = \|\sigma_1 - \sigma_2\|,\]
where \(\Phi(\cdot)\) is the cumulative distribution function of the standard normal distribution.
Proof. We had noted that, with respect to the equivalent martingale measure \(Q\) which we defined, the discounted prices of the two underlying assets \(X^1\) and \(X^2\) are martingales, with \(X_T^i\) given by the exponentials above. Therefore the time-zero price of the European exchange option is \(p = E_Q\big[e^{-\rho T}(X_T^1 - X_T^2)^+\big]\). If we define a probability measure \(\hat{Q}\) equivalent to \(Q\) by the density \(\mathrm{d}\hat{Q}/\mathrm{d}Q = e^{-\rho T} X_T^2 / X_2(0)\), then, by using the results of the previous proposition, we conclude that the time-zero price of the European exchange option is as given above. Note that this price depends neither on the drifts of the stocks nor on the market interest rate, but just on the market volatilities. This result is also similar to the one obtained in [1], but in that paper the author considers the case where the Brownian motions are correlated, with the special assumption that the noise terms for each stock are different; we have allowed the stock prices to depend on both Brownian motions.
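To make Proposition 4.1 concrete, here is a minimal numerical sketch of the stated closed form, assuming constant volatility row vectors \(\sigma_1, \sigma_2\) and \(\hat{\sigma} = \|\sigma_1 - \sigma_2\|\); parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def margrabe_price(x1, x2, sigma1, sigma2, T):
    """Time-zero price of the option to exchange asset 2 for asset 1 (Proposition 4.1)."""
    sig = np.linalg.norm(np.asarray(sigma1) - np.asarray(sigma2))  # effective volatility
    d1 = (np.log(x1 / x2) + 0.5 * sig**2 * T) / (sig * np.sqrt(T))
    d2 = d1 - sig * np.sqrt(T)
    return x1 * norm.cdf(d1) - x2 * norm.cdf(d2)

print(margrabe_price(100.0, 95.0, [0.20, 0.10], [0.05, 0.25], 1.0))
```

With the same parameters this matches the Monte Carlo estimate given earlier to within sampling error, and, as noted in the proof, neither the drifts nor the interest rate enter the formula.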
Hedging an Exchange Option
We now calculate the hedging portfolio \(\theta(t)\). For this two-dimensional case, applying the generalized CHO formula, we get from (2.9) the holdings in the two stocks. We thus have the following result.
Proposition 4.2. The perfect hedge is given by
\[\theta_1(t) = \Phi\big(d_1(t)\big), \qquad \theta_2(t) = -\Phi\big(d_2(t)\big),\]
where
\[d_{1,2}(t) = \frac{\ln\big(X_t^1/X_t^2\big) \pm \tfrac{1}{2}\hat{\sigma}^2(T-t)}{\hat{\sigma}\sqrt{T-t}}, \qquad \hat{\sigma} = \|\sigma_1 - \sigma_2\|.\]
Proof. In order to calculate \(E_Q[D_t F \mid \mathcal{F}_t]\) we use the Markov property. We first need to calculate \(D_t F\): by the chain rule, for \(j = 1, 2\),
\[D_t^{(j)} F = \chi_{\{X_T^1 > X_T^2\}}\big(X_T^1\,\sigma_{1j} - X_T^2\,\sigma_{2j}\big)\,\chi_{[0,T]}(t).\]
Note that, with respect to \(Q\), the vector of normalized Brownian increments over \([t,T]\) is a normally distributed random vector with mean zero and covariance matrix equal to the identity matrix. Using the Markov property, the notation of Proposition 3.1 and the reductions of Proposition 3.2, the conditional expectations become one-dimensional Gaussian integrals, and substituting into (2.9) yields the stated holdings.
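A sketch of Proposition 4.2 in code, under the same assumptions and with illustrative names; note that the portfolio value \(\theta_1 X^1 + \theta_2 X^2\) reproduces the price, so the holding in the bank account is zero, consistent with the self-financing condition.

```python
import numpy as np
from scipy.stats import norm

def margrabe_hedge(x1, x2, sigma1, sigma2, tau):
    """Units of each asset in the replicating portfolio, with time to maturity tau."""
    sig = np.linalg.norm(np.asarray(sigma1) - np.asarray(sigma2))
    d1 = (np.log(x1 / x2) + 0.5 * sig**2 * tau) / (sig * np.sqrt(tau))
    d2 = d1 - sig * np.sqrt(tau)
    theta1, theta2 = norm.cdf(d1), -norm.cdf(d2)  # long asset 1, short asset 2
    # The portfolio value equals the option price, so no bond position is required.
    assert np.isclose(theta1 * x1 + theta2 * x2,
                      x1 * norm.cdf(d1) - x2 * norm.cdf(d2))
    return theta1, theta2

print(margrabe_hedge(100.0, 95.0, [0.20, 0.10], [0.05, 0.25], 1.0))
```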
Conclusion
We have shown that white noise analysis is of vital importance to finance, in that the generalized CHO formula becomes important in finding explicit expressions for the price and hedging portfolio of European contingent claims. Extensions of these results would be to get similar explicit results when modelling stock prices with general Itô-Lévy processes, though one has to carefully consider the models of prices to avoid incompleteness. Hedging an option is important in that the seller would know how much of each security to hold in order to hedge his liability. In complete markets, this should always be possible, and thus the results in this paper can be applied to any European contingent claim. The paper [1] opened the door for pricing exchange options, though in that paper the stock prices were each influenced by one Brownian motion and the two given Brownian motions were correlated. In our case, we allowed the stock prices to each depend on two noise terms which are independent. Also, in our paper we have computed explicitly the hedging portfolio, something which was not done in [1]. As a result, our results are extensions of that paper, with the strength of using white noise analysis.

Acknowledgements
This work was supported by the University of Cape Town Research Grant 461091.

REFERENCES
[1] W. Margrabe, "The Value of an Option to Exchange One Asset for Another," Journal of Finance, Vol. 33, No. 1, 1978, pp. 177-186. doi:10.1111/j.1540-6261.1978.tb | 2018-03-04T00:52:43.573Z | 2012-11-19T00:00:00.000 | {
"year": 2012,
"sha1": "14b2967b4a5d82302354a9cbbf02a8901dcf2d30",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=24462",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "14b2967b4a5d82302354a9cbbf02a8901dcf2d30",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
235281156 | pes2o/s2orc | v3-fos-license | Research on Anti-Icing Technology of High Voltage Line Insulator Based on Advanced Sensor
Insulators are subject to ice flashover failure. To reduce the error in calculating the equivalent icing thickness of transmission lines caused by an insufficient number of sensors, a new online monitoring device for conductor icing based on 3 sets of force sensors and inclination sensors was designed. The device integrates multiple monitoring functions such as micro-meteorology, conductor temperature, distributed multi-tower tension, and images. The study found that the main cause of insulator ice flashover is ice bridging the sheds of the insulator string: the high conductivity of melting ice water makes the ice flashover voltage too low. Preventing the highly conductive melted ice water from forming a continuous flashover channel is therefore a basic measure for improving the insulation of ice-coated strings and their flashover voltage.
Introduction
China is one of the countries with the most severe icing on transmission lines in the world. Since ice disasters on transmission lines were first recorded in 1954, large-scale ice disasters occurred across the country in 1974-1976, 1984, 2005, and 2008. From January to February 2008, heavy snowstorms and freezing rain of a severity seen once in 50 years struck Central China, East China, Southwest China and other regions, causing large areas of icing on the transmission lines and towers of the power grids in Henan, Hunan, Hubei, Jiangxi, Sichuan, Chongqing, Zhejiang, Anhui, Guangxi, Guizhou and Yunnan. Transmission and transformation facilities were severely damaged, which had a serious impact on the national economy and people's lives.
At present, research on transmission line icing at home and abroad mainly focuses on its causes, mechanisms, conditions, influencing factors, the icing process, and insulator flashover, as well as on-line icing monitoring and early warning technology; some devices have been developed and put into on-line operation. However, due to an insufficient number of sensors, these icing monitoring devices make many simplifications when establishing the icing thickness model: they do not consider the influence of the ice on the central tower on the conductors on either side of it, and they ignore the influence of parameters such as conductor temperature [1]. This causes large errors in the calculated icing thickness during actual operation, and important information such as the density and type of the ice on the conductor cannot be obtained.
In view of these situations, the new online icing monitoring device studied in this paper increases the number of sensors, obtains the conductor icing conditions of 3 towers at the same time, and realizes real-time monitoring of conductor temperature, so that the influence of conductor temperature on the calculation results can be taken into account when establishing the model. The equivalent icing thickness model established on this basis is more consistent with the actual icing situation [2]. At the same time, sensor monitoring is combined with image monitoring, and finally the important information required for de-icing and melting of the iced line, such as the thickness, density and type of the ice coating on the conductor, is obtained.
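The paper does not spell out its equivalent-icing-thickness model here, but a common geometric version treats the ice as a uniform annulus around a conductor of diameter D, with the ice mass per unit length inferred from the extra vertical load reported by the tension sensors. A minimal sketch under those assumptions; all names and values are hypothetical.

```python
import math

def equivalent_ice_thickness(delta_load_n, weight_span_m, conductor_diam_m,
                             ice_density=900.0, g=9.81):
    """Equivalent radial ice thickness (m) from the measured increase in vertical load.

    delta_load_n     : increase in vertical load on the insulator string (N)
    weight_span_m    : weight span carried by the monitored tower (m)
    conductor_diam_m : bare conductor diameter D (m)
    ice_density      : assumed density rho (kg/m^3); ~900 for glaze, much lower for rime
    """
    m_per_len = delta_load_n / (g * weight_span_m)  # ice mass per unit length (kg/m)
    area = m_per_len / ice_density                  # ice cross-sectional area A (m^2)
    D = conductor_diam_m
    # Annulus of thickness b around the conductor: pi * b * (D + b) = A; positive root
    return (-D + math.sqrt(D * D + 4.0 * area / math.pi)) / 2.0

print(equivalent_ice_thickness(3000.0, 300.0, 0.024))  # roughly 0.010 m, i.e. ~10 mm
```

Monitoring the conductor temperature, as the device described below does, lets such a model account for temperature effects that the paper notes would otherwise bias the calculation.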
Common types and effects of icing
The common types of icing in mountainous areas of China are glaze, rime, and mixed rime. The special geographical environment of China's mountainous areas often causes the central and upper mountainous areas to form an inversion layer in winter under the influence of north-south cold and warm air flows, so the catastrophic weather of freezing rain occurs easily, which in turn produces glaze and rime icing on the line. Glaze in these areas is generally formed by freezing rain. Its surface is relatively smooth and hard; it is transparent, or white and translucent when it contains a large number of air bubbles. It has strong adhesion, and its specific gravity is the largest among the icing forms, generally 0.80-0.92 g/cm3. Rime is formed by the freezing of supercooled cloud droplets or very small water particles when the temperature is low (-3 to -25 °C), there is dense fog, and the wind speed is 0-5 m/s. Rime is characterized by a fine, fluffy structure, small adhesion and white colour, and it falls off easily under external vibration or on melting. The specific gravity of rime is generally small, 0.1-0.4 g/cm3. When rime icing is severe, the upper and lower surfaces of the sheds of the insulator string are wrapped by wind-driven rime, the gaps between the sheds are filled with rime, and even a cylindrical ice-coated whole is formed. Mixed rime is a mixed icing form between glaze and rime, closer in character to glaze [3]. The mixed rime structure is densely layered, with transparent and opaque layers alternating, and its specific gravity is 0.6-0.8 g/cm3. Among the three types of icing, glaze is the most harmful to the transmission line: it can not only cause flashover of the insulator string, but also cause tower collapse and conductor breakage accidents due to the icing of conductors and ground wires. The icing form in the mountainous areas of Shaanxi in central China is mainly mixed rime.
Ice flash mechanism
Ice coating is a special kind of pollution. During the ice melting process, a conductive water film forms on the surface of the insulator, so ice flashover is similar to pollution flashover discharge: the discharge process is also driven by surface leakage current, but its mechanism differs from that of pollution flashover. Icing not only distorts the voltage distribution of the insulator string, but also distorts the surface voltage distribution of individual insulators [4]. This voltage distribution distortion is one of the main reasons for the drop in the ice flashover voltage of the insulator string. The voltage distribution of a 7-unit string of XP-70 insulators is shown in Figure 1. The flashover process of ice-coated insulators is as follows: icing distorts the voltage distribution of the insulator (string), and the grounded end, and especially the high-voltage end, bears a large share of the voltage. As the voltage increases, blue-violet coronas appear at the high-voltage and grounded ends of the string, with currents generally in the range of 5-20 mA; at this point the ice begins to melt appreciably. As the voltage continues to rise [5] and the current exceeds 20 mA, the blue-violet sparks turn pink and then suddenly become a white arc. When the current reaches 250 mA, the intermittent white arc burns and develops steadily, and the arc segments quickly connect and bridge about 70% of the string length; when the arc current exceeds 450 mA, the arc quickly connects the two electrodes and completes the flashover.
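The current thresholds quoted above suggest a simple staging of the discharge; a toy classifier built only from the figures given in the text:

```python
def ice_flash_stage(leakage_ma):
    """Map a leakage current (mA) to the discharge stage described in the text."""
    if leakage_ma < 5:
        return "no visible discharge"
    if leakage_ma <= 20:
        return "blue-violet coronas at string ends; ice begins to melt"
    if leakage_ma < 250:
        return "sparks turn pink, then a white arc appears"
    if leakage_ma <= 450:
        return "intermittent white arc bridging about 70% of the string length"
    return "arc connects the two electrodes: flashover"

for i_ma in (3, 12, 100, 300, 500):
    print(i_ma, "->", ice_flash_stage(i_ma))
```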
Factors affecting ice flashover of ice-coated insulators
There are many factors affecting the icing of line insulators, and there is a large body of relevant literature at home and abroad; in summary, the factors mainly include meteorological conditions, topographic and geographic conditions, altitude, and season. Test results show that icing distorts the voltage distribution of the insulator string, and uneven icing on the upper and lower surfaces also distorts the surface voltage distribution of individual insulators. The heavier the icing, the more serious the voltage distribution distortion. The insulators at the two ends of the string, and especially at the high-voltage lead end, withstand the highest percentage of the voltage, which causes these parts to discharge and arc first; when the arc develops to a certain extent, it gradually extends upwards [6]. At this time, because the ice layer is melting, the leakage current is also large, further aggravating the development of the arc, until finally the insulator string flashes over along its surface.
Factors affecting calculation formula analysis
3.3.1. The impact of pollution. Clean ice (an ice layer containing no other impurities) or an ice layer formed from water of low conductivity does not significantly reduce the ice flashover voltage of an insulator; only when dirt is present in the ice layer, that is, when the insulator was contaminated before icing or the icing water itself is polluted, is the flashover voltage significantly reduced. As the degree of contamination (icing-water conductivity or salt deposit density) increases, the flashover voltage of the ice-coated insulator decreases significantly. The influence of pollution on the flashover voltage of ice-coated insulators can be expressed as \(U_f = M\,S^{-b}\), where \(U_f\) is the flashover voltage, kV; \(M\) is a coefficient determined by the insulator type, string length, altitude, etc.; \(S\) is the pollution degree, which can be characterized by the icing-water conductivity \(\gamma_{20}\) or by the salt deposit density; and \(b\) is a characteristic index of the influence of the pollution degree on the ice flashover voltage.
3.3.2. The influence of the amount of icing.
Whether measured by thickness or by mass, the effect of ice coating on the flashover voltage of insulators becomes more severe as the amount of ice increases; however, once the amount of ice reaches a certain level, the further decrease of the minimum flashover voltage is no longer obvious. The influence of the amount of icing on the ice flashover voltage can be expressed as a power-law relation between \(U_{fW}\) and \(U_{f0}\), where \(U_{fW}\) and \(U_{f0}\) are the flashover voltages at icing amount \(W\) (in kg) and at zero icing, respectively, in kV, and \(m\) is a characteristic index characterizing the influence of the amount of icing. Some scholars have pointed out that the influence of icing thickness on the ice flashover voltage of insulators is itself affected by pollution: when pollution is present, the influence is weakened.
3.3.3. The influence of altitude.
Since icing is a conductive substance, the ice flashover voltage of insulators gradually decreases with increasing altitude and decreasing air pressure. The influence of air pressure on the ice flashover voltage can be expressed as
\[\frac{U_f}{U_{f0}} = \left(\frac{P}{P_0}\right)^{n},\]
where \(U_f\) and \(U_{f0}\) are the flashover voltages under atmospheric pressure \(P\) and standard atmospheric pressure \(P_0\), respectively, in kV, and \(n\) is the characteristic index of the influence of atmospheric pressure. The value of \(n\) differs with insulator type, voltage polarity, degree of icing, and degree of contamination.
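Combining the three influence laws above multiplicatively is one plausible way to scale a reference flashover voltage; the indices b, m and n are line- and insulator-specific and must be fitted from tests, and the functional form of the icing-amount term below is assumed, so the sketch is illustrative only.

```python
def ice_flash_voltage(u_ref, salt_density, w_ice_kg, p_ratio, b=0.3, m=0.15, n=0.5):
    """Scale a reference flashover voltage u_ref (kV) by three influence factors.

    salt_density : pollution severity S (e.g. mg/cm^2)
    w_ice_kg     : icing amount W (kg)
    p_ratio      : P / P0, local over standard atmospheric pressure
    b, m, n      : characteristic indices (placeholder values, to be fitted)
    """
    return (u_ref
            * salt_density ** (-b)      # pollution law U_f = M * S^(-b); M folded into u_ref
            * (1.0 + w_ice_kg) ** (-m)  # icing-amount decay (functional form assumed)
            * p_ratio ** n)             # altitude law U_f = U_f0 * (P/P0)^n

print(ice_flash_voltage(120.0, salt_density=0.05, w_ice_kg=2.0, p_ratio=0.8))
```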
Monitoring system design
The new on-line conductor icing monitoring device designed in this paper mainly consists of a monitoring main extension on the central tower, auxiliary extensions on the large-numbered and small-numbered sides, and a wireless communication network. The overall structure of the device is shown in Figure 2. The device integrates multiple functions such as micro-meteorology monitoring, conductor temperature monitoring, distributed multi-tower tension monitoring, and image monitoring. The whole device adopts a main extension + double auxiliary extension design. The monitoring sub-stations are installed on the towers on the large-numbered and small-numbered sides of the central main tower to monitor the conductor ice load, the inclination angle of the insulator string, and the conductor temperature on both sides, and they transmit the monitoring information over the ZigBee network. Monitoring main extension: the main extension is installed on the central main tower and mainly completes the monitoring of on-site images, micro-meteorology (ambient temperature and humidity, wind speed, wind direction, rainfall, air pressure, etc.), the conductor ice load at the central main tower, and the insulator string inclination [7]. After collecting the monitoring data, performing preliminary calculations and compressing the images, it transmits them to the monitoring centre via GPRS. The monitoring centre host uses the equivalent icing thickness calculation model, the automatic icing-image recognition algorithm, the icing density calculation method and other algorithms integrated in the expert software to accurately calculate the equivalent icing thickness and density of the conductors and judge the icing state of the transmission line; it issues ice thickness warnings in time and notifies the relevant staff to take effective measures to prevent icing disasters. In addition, the device can be linked with the existing ice-melting devices of the power system to realize automatic monitoring of conductor icing and automatic ice melting, which is of great significance for improving the operational reliability of transmission lines in icing areas.
The structural block diagram of the monitoring sub-extension is shown in Figure 3. It mainly includes the main control unit, power supply module, ZigBee communication module, conductor temperature monitoring unit, tension sensor, and insulator string inclination sensor; the sub-extension power module also uses the solar + battery scheme. The conductor temperature monitoring unit, composed of a temperature sensor, power supply module, MCU, ZigBee module, etc., completes the conductor temperature monitoring; since it is installed on the conductor itself, it is powered by induction from the conductor current. The conductor temperature collected by this unit is sent to the main control unit of the tower monitoring sub-extension through the ZigBee network, while the conductor tension changes and insulator string inclination collected by the tension and inclination sensors are transmitted to the sub-extension main control unit via RS485. The main control unit of the auxiliary extension processes the conductor temperature, tension change and insulator string inclination signals, and sends this information to the monitoring main extension via the ZigBee network at the set interval.
Conclusions
In this paper, a new type of on-line icing monitoring device for transmission lines is designed. It adopts a main extension + double auxiliary extension structure, adds 2 additional sets of force sensors and inclination sensors, and integrates conductor temperature monitoring and icing image monitoring, realizing comprehensive monitoring of conductor icing across 3 towers. This paper finds that the effect of PRTV coating in delaying icing is not obvious when severe icing occurs, and the ice flashover voltage of insulators coated with PRTV is lower than that of uncoated insulators; therefore, applying PRTV is not an effective anti-ice-flashover measure. In contrast, using V-strings, or interposing insulators with enlarged creepage-distance sheds and large-diameter aerodynamic insulators, can effectively increase the ice flashover voltage. This method is more practical in the design and retrofitting of transmission lines. | 2021-06-03T00:37:36.280Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "26c65cb150263523981894a051702256dc925fd5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/769/4/042026",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "26c65cb150263523981894a051702256dc925fd5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |